unix domain socket buffer size ?

Occasional Contributor


We use Unix domain sockets for IPC.

I would like to know how to increase the domain socket buffer size.

Is there perhaps an ndd tunable for this?
1 Reply
wang gi kim
Advisor

unix domain socket buffer size ?

"Domain socket buffer size" is a little ambiguous to me.

Are you perhaps referring to the TCP send/receive buffer sizes?

(setsockopt(), getsockopt())

SO_SNDBUF = send buffer high-water mark (tcp_xmit_hiwater_def)

SO_RCVBUF = receive buffer high-water mark (tcp_recv_hiwater_def)



Every single TCP connection has a certain amount of memory allocated to store the packets. The application reads from this buffer and TCP writes to it. If the application does not read fast enough, the buffer fills up and TCP is no longer able to write to it.

This tunable changes the amount of buffer space that is available for default TCP connections (more on this later). It is meant for local LAN media types (i.e. Ethernet, FDDI, Token Ring).


There are two other tunables which are similar but cover other media types:

tcp_recv_hiwater_lfp - responsible for Long, Fat Pipes, meaning fast connections such as fiber.

tcp_recv_hiwater_lnp - regulates Long, Narrow Pipes, which are slow connections such as PPP.

This value is given in bytes.

The minimum is 4096 bytes and no maximum limit is defined.

The default is set to 32K (32768).

Why should this be changed?

If the application is not fast enough to read all the information from the buffer, and memory is available, performance can be improved by increasing this value. If the machine has depleted most of its available memory on several TCP connections that are writing into their buffers, it can be useful to decrease this tunable to limit the memory usage to a bearable amount.

Usable commands:

Check the current value:

ndd -get /dev/tcp tcp_recv_hiwater_def

Set the window size to 64K (65536):

ndd -set /dev/tcp tcp_recv_hiwater_def 65536

nddconf entry example (in /etc/rc.config.d/nddconf):

TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=65536

For every TCP connection a send buffer is allocated. The application writes into this buffer and TCP is responsible for sending the data to the remote host. Sometimes the other host is not able to receive further data, so TCP cannot send more data out on the interface. In this case the allocated buffer fills up, and at some point we reach a limit where we must stop the application from writing more data to the buffer.

This upper limit is called the high-water mark. We prevent the application from writing any further data until TCP has sent enough packets out that we drop below another, lower limit, tcp_xmit_lowater_def, which again allows the application to write data into the buffer.

Since different connections fill this buffer at different speeds, there are two other high-water marks: tcp_xmit_hiwater_lfp is for fast connections, whereas tcp_xmit_hiwater_lnp is for slow connections. The low-water mark likewise has two other equivalents: tcp_xmit_lowater_lfp for fast connections and tcp_xmit_lowater_lnp for slow connections.

This value is given in bytes.

The minimum is 4096, there is no defined maximum.

The default is set to 32K (32768).

Why should this be changed?

Normally it is not necessary to change this value, but if the connections are faster than expected you could increase it, and if they are slower you could decrease it.

Usable commands:

Check the current value:

ndd -get /dev/tcp tcp_xmit_hiwater_def

Set the high-water mark to 64K:

ndd -set /dev/tcp tcp_xmit_hiwater_def 65536

nddconf entry example (in /etc/rc.config.d/nddconf):

TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_xmit_hiwater_def
NDD_VALUE[0]=65536
happy new year!