SO_RCVBUF performance

Occasional Contributor

SO_RCVBUF performance

We are seeing a strange performance issue on HP-UX talking to another Unix system, and I was hoping someone could help us understand it a bit better.

Here’s what we’re seeing: on the client (HP-UX) we’re setting the TCP receive buffer size (SO_RCVBUF) to 262,144.
On the other system, we’re setting SO_SNDBUF to 61,440 (setting it to 262,144 also makes no difference in this test).

For each HP-UX client request we get a record back. With these rcv/snd buffer settings, we've noticed that fetching a million records is about 8 times slower when the record size is 51,608 bytes than when it is 51,752 bytes.

Then on the HP-UX side, we changed SO_RCVBUF to 32,768 (the default).

With that change we got the inverse result, i.e. with record size 51,608 it was about 8 times faster than with record size 51,752.

The difference between these two record sizes is only 144 bytes, so there is some threshold between 51,608 and 51,752 that affects performance drastically. We're not sure what that threshold is, or why the difference would depend so heavily on the SO_RCVBUF size.

Is there some other setting that we should also be looking at?
rick jones
Honored Contributor

Re: SO_RCVBUF performance

I would be inclined to first ask about things like:

*) What sort of Unix is the other unix system?

*) Can you reproduce the behaviour with a netperf TCP_RR test using suitable values for -S, -s and -r?
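For reference, a TCP_RR invocation along those lines might look like the following; the hostname and the 64-byte request size are placeholders, while the buffer values are the ones from the post:

```shell
# -r req,resp  request and response sizes in bytes
# -s / -S      local / remote socket buffer size requests
netperf -H otherhost -t TCP_RR -- -r 64,51608 -s 262144 -S 61440
netperf -H otherhost -t TCP_RR -- -r 64,51752 -s 262144 -S 61440
```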

*) TCP retransmissions during the test. Grab beforeafter, then:

netstat -s tcp > before
(run the test)
netstat -s tcp > after
beforeafter before after > delta

then look at delta - those will be the stats from just the period when the test was running.

Should look at both ends since both ends are sending...
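If beforeafter isn't handy on one of the systems, a rough awk stand-in that subtracts matching counters between two snapshots could be sketched like this (the snapshot files below are made-up samples, not real netstat output):

```shell
# Sample snapshots standing in for "netstat -s tcp > before/after";
# the counter values are invented for illustration.
printf '   10 segments retransmitted\n 5000 data segments sent\n' > before
printf '   12 segments retransmitted\n 9000 data segments sent\n' > after

# Subtract matching "<count> <description>" counters, before vs. after.
awk 'NR==FNR { v=$1; $1=""; b[$0]=v; next }
     { v=$1; $1=""; print v - b[$0], $0 }' before after
```

On the sample data this prints a delta of 2 retransmitted segments and 4000 data segments sent for the test interval.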

*) tcpdump packet traces of each case - making sure to capture the connection establishment so you can see things like the MSS exchange, window scaling, etc.
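A capture command for that might look like the following; the interface name and port are placeholders for your setup (-s 0 grabs full packets, and starting the capture before the connect() ensures the SYN options end up in the trace):

```shell
# One capture per case, started before the connection is opened.
tcpdump -i lan0 -s 0 -w fast_case.pcap host otherhost and port 5000
tcpdump -i lan0 -s 0 -w slow_case.pcap host otherhost and port 5000
```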
there is no rest for the wicked yet the virtuous have no pillows