Operating System - HP-UX
09-23-2010 03:38 PM
SO_RCVBUF performance
We are seeing a strange performance issue with HP-UX talking to another Unix system, and I was hoping someone could help us understand it a bit better.
Here is what we are seeing: on the client (HP-UX) we set the TCP receive buffer size (SO_RCVBUF) to 262,144 bytes.
On the other system, we set SO_SNDBUF to 61,440 (setting it to 262,144 makes no difference in this test).
For each HP-UX client request we get one record back. With these receive/send buffer settings, fetching a million records is about 8 times slower when the record size is 51,608 bytes than when it is 51,752 bytes.
Then, on the HP-UX side, we changed SO_RCVBUF to 32,768 (the default).
With that change the numbers inverted: with record size 51,608 it was 8 times faster than with record size 51,752.
The difference between these two record sizes is only 144 bytes, so there must be some threshold between 51,608 and 51,752 that affects performance drastically. We are not sure what that threshold is, or why it should depend on the SO_RCVBUF size.
Is there some other setting we should also be looking at?
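For reference, a minimal sketch of the kind of client setup involved (assumed for illustration, not our actual application; the host and port are placeholders). One detail worth noting: SO_RCVBUF is requested before connect(), because the TCP window scale option is negotiated during connection establishment and a large buffer requested afterwards may not translate into a large advertised window.

/* Hypothetical sketch: request a 256 KB receive buffer before connecting. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int rcvbuf = 262144;                 /* 256 KB, as in the test above */

    /* Request the buffer size before the handshake so window scaling
       can be negotiated in the SYN exchange. */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(5000);                   /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* placeholder host */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        perror("connect");

    /* Read back what the stack actually granted; it may differ from
       the requested value. */
    int granted;
    socklen_t len = sizeof(granted);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len);
    printf("SO_RCVBUF requested %d, granted %d\n", rcvbuf, granted);

    close(fd);
    return 0;
}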
1 REPLY
09-24-2010 03:38 PM
Re: SO_RCVBUF performance
I would be inclined to first ask about things like:
*) What sort of Unix is the other system?
*) Can you reproduce the behaviour with a netperf TCP_RR test using suitable values for -S, -s and -r? (An example invocation follows this list.)
*) TCP retransmissions during the test. Grab beforeafter from ftp://ftp.cup.hp.com/dist/networking/tools, run the first netstat before the test and the second after it:
netstat -s tcp > before
netstat -s tcp > after
beforeafter before after > delta
Then look at delta; it will contain the statistics from just the period when the test was running.
You should look at both ends, since both ends are sending...
*) tcpdump packet traces of each case, making sure to capture the connection establishment so you can see things like the MSS exchange, window scaling, etc. (An example capture command also follows.)
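For example, a hypothetical TCP_RR invocation approximating the slow case (the server name and the 128-byte request size are placeholders; after the "--", -s and -S set the local and remote socket buffer sizes and -r sets the request,response sizes in bytes):
netperf -H serverhost -t TCP_RR -- -s 262144 -S 61440 -r 128,51608
Run it once per record size and compare the transaction rates.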
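And a sketch of a capture that includes the handshake (the interface name lan0 and the host are placeholders; start the capture before the connection is opened):
tcpdump -i lan0 -s 0 -w slowcase.pcap host serverhost
Repeat for each record size and compare the advertised windows and segment sizes in the two traces.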
there is no rest for the wicked yet the virtuous have no pillows