Need for TCP stack buffer tuning?
02-01-2005 01:54 AM
On an rp7410 running 11.11 that is, apart from memory, otherwise only lightly loaded, I seem to be running short of TCP stack buffer space.
E.g., today there were only 6 minor network bottlenecks, accumulating to 35 minutes:
# utility -xa -D -b $(date +%x) 06:00 -e today|tail -20
*******************************************************************************
Performance Alarm Summary:
Alarm Count Minutes
1 0 0
2 1 10
3 0 0
4 6 35
5 0 0
6 0 0
Analysis coverage using "/var/opt/perf/alarmdef":
Start: 02/01/05 06:00 Stop: 02/01/05 14:50
Total time analyzed: Days: 0 Hours: 8 Minutes: 50
*******************************************************************************
Maybe I should note that this box also acts as both an NFS server and an NFS client (so-called loopback mounts), using RPC over TCP.
I had to upgrade to Enhanced AutoFS to get TCP transport supported, because of NFS performance issues back then.
However, the NFS stats are OK, and the NFS performance hogs have disappeared since then.
# nfsstat -sr
Server rpc:
Connection oriented:
calls badcalls nullrecv
7808520 0 0
badlen xdrcall dupchecks
0 0 1854451
dupreqs
0
Connectionless oriented:
calls badcalls nullrecv
109687 0 0
badlen xdrcall dupchecks
0 0 0
dupreqs
0
But back to my topic.
The SSH daemon seems to run short of (socket) buffer space whenever a client connects.
At least I repeatedly get these messages in syslog.log:
# grep -c 'sshd.*No buffer' /var/adm/syslog/syslog.log
336
# grep 'sshd.*No buffer' /var/adm/syslog/syslog.log|tail -1
Feb 1 12:53:04 somehost sshd[29155]: error: accept: No buffer space available
While searching for tunables of the TCP stack I came across this Knowledge Base record:
http://www5.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&docId=200000071549498
My question now is: would it be an appropriate measure to increase, for instance, the tcp_recv_hiwater_def tunable?
Or would that decrease the memory available to user processes?
And if increasing these ndd tunables is feasible,
what would be the recommended watermarks for an application like SSH?
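For reference, this is how I would query the current values before changing anything (only a sketch; checking tcp_xmit_hiwater_def as the send-side counterpart is my own assumption):
# ndd -get /dev/tcp tcp_recv_hiwater_def
# ndd -get /dev/tcp tcp_xmit_hiwater_def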
Regards
Ralph
02-02-2005 02:20 AM
Re: Need for TCP stack buffer tuning?
You might want to read:
http://docs.hp.com/en/1219/tuningwp.html
What version of sshd are you using? There were a lot of buffer overflow problems in some of the earlier versions. How do you start it?
Ron
02-02-2005 03:01 AM
Re: Need for TCP stack buffer tuning?
from the NIC MIB stats that I attached to this post you can see that there were only 4021 queue overruns since last boot.
Compared to the total packet counts this is more than negligable.
However, although still being as little as abt. 0.06 permille related to in-packets, the number of keepalive timeouts is magnitudes higher.
Because these figures seem to have mainly been generated by SSH, and since there are only a few SSH connections per day their significance is irrelevant.
However the syslog error messages from SSH are a bit annoying.
Before I had those reported SSH buffer overruns there where only those
"setsockopt SO_KEEPALIVE: Invalid argument" messages in syslog.log also coming from SSH.
To silence them I set this option in sshd_config:
# grep Keep /opt/ssh/etc/sshd_config
TCPKeepAlive no
From then on those setsockopt() syscall errors disappeared, and the "No buffer space" errors popped up.
Maybe you're right and this is only due to a bug in the HP Secure Shell port?
On the box where the errors appear I have this SSH version running:
# swlist|grep -i secure
T1471AA A.03.91.002 HP-UX Secure Shell
# /opt/ssh/sbin/sshd -v
sshd: illegal option -- v
OpenSSH_3.9, OpenSSL 0.9.7d 17 Mar 2004
HP-UX Secure Shell-A.03.91.002, HP-UX Secure Shell version
Usage: sshd [options]
Options:
-e Redirect output to standard error instead of the system log.
-f file Configuration file (default /opt/ssh/etc/sshd_config)
-d Debugging mode (multiple -d means more debugging)
-i Started from inetd
-D Do not fork into daemon mode
-t Only test configuration file and keys
-q Quiet (no logging)
-p port Listen on the specified port (default: 22)
-k seconds Regenerate server key every this many seconds (default: 3600)
-g seconds Grace period for authentication (default: 600)
-b bits Size of server RSA key (default: 768 bits)
-h file File from which to read host key (default: /opt/ssh/etc/ssh_host_key)
-u len Maximum hostname length for utmp recording
-4 Use IPv4 only
-6 Use IPv6 only
-o option Process the option as if it was read from a configuration file.
02-02-2005 04:29 AM
Solution
An ENOBUFS on accept() is slightly misleading: it means that by the time the application got around to calling accept(), the client had already given up on the connection and terminated it.
Altering tcp_recv_hiwater_def is not going to change anything there. IIRC, I've seen some reports of the messages (although perhaps not the client behaviour) going away if one disables the early connection indication:
ndd -set /dev/tcp tcp_early_conn_ind 0
but all else being equal, there really is no "error" to speak of when accept() returns ENOBUFS.
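Should you want such an ndd setting to survive a reboot, the customary place on 11.x is /etc/rc.config.d/nddconf; a minimal sketch, assuming index 0 is not already taken by another entry:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_early_conn_ind
NDD_VALUE[0]=0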
02-02-2005 05:14 AM
Re: Need for TCP stack buffer tuning?
The 4021 connect requests dropped due to a full queue do have a fix in ndd, if you want to bother with it. Read the help for these two:
/usr/sbin/ndd -h tcp_syn_rcvd_max
/usr/sbin/ndd -h tcp_conn_request_max
Do you always get a lot of connections at one time? Is the box short on memory? Slow processing something? Usually there is a reason why things get backed up, and it's not the TCP/IP stack but something further up the food chain.
I would rather have seen the whole netstat -s output, though.
Ron
02-02-2005 05:34 AM
Re: Need for TCP stack buffer tuning?
I've never needed to increase tcp_syn_rcvd_max myself, but then most of my testing has been with rather short RTTs. That, and its default is 500 rather than 20 like tcp_conn_request_max :) Certainly it should not hurt to increase it.
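If you did decide to raise it, the command itself is simple; the value below is purely illustrative, picked from within the [1,10000] range the help text advertises:
# ndd -set /dev/tcp tcp_syn_rcvd_max 1024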
02-02-2005 10:03 PM
Re: Need for TCP stack buffer tuning?
Many thanks for your valuable comments, which shed some light on this for me.
I'm really lucky that my posting received attention from our ITRC HP STREAMS HIGHNESS, Rick Jones.
After reading his remarks on the ENOBUFS error I revisited the accept() manpage.
One has to concede that the explanation of this error given there really is a bit misleading, or opaque:
[ENOBUFS] No buffer space is available. The accept() cannot
complete. The queued socket connect request is
aborted.
The "No buffer space is available" doesn't say that this would also apply if the other end of the socket already has given up.
Thanks to socket hackers like Rick who let us peek behind the scenes.
Meanwhile I had a look at the mentioned tunable
# ndd -h tcp_conn_request_max
tcp_conn_request_max:
Maximum number of outstanding inbound connection requests.
[1, - ] Default: 20 connections
# ndd -get /dev/tcp tcp_conn_request_max
4096
Whoa, this is way beyond the default of 20 connections.
I cannot remember raising this one myself; it must have been done by the SAP implementers on this box
(n.b. the application is a clustered Oracle/SAP DB server).
The other tunable, as far as I understand the help text, is a security throttle to fend off too many SYNs, as in a SYN flood:
# ndd -h tcp_syn_rcvd_max
tcp_syn_rcvd_max:
Controls the SYN attack defense of TCP. The value specifies
the maximum number of suspect connections that will be allowed
to persist in SYN_RCVD state. For SYN attack defense to work,
this number must be large enough so that a legitimate connection
will not age out of the list before an ACK is received from the
remote host. This number is a function of the speed at which
bogus SYNs are being received and the maximum round trip time
for a valid remote host. This is very difficult to estimate
dynamically, but the default value of 500 has proven to be highly
effective. [1,10000] Default: 500 connections
This one is set as follows:
# ndd -get /dev/tcp tcp_syn_rcvd_max
500
which is the default setting.
Sorry, Ron, for suppressing half of the netstat output.
Attached you will find the netstat output for all protocols.
Rick, should I attach tusc to the sshd's PID (following child processes) and watch what's going on when a client connects?
Although the online reference for the HP-UX ndd tunables is very good, is there a tuning howto for the HP TCP/IP stack like this one for Solaris:
http://www.sean.de/Solaris/soltune.html
02-03-2005 03:49 AM
Re: Need for TCP stack buffer tuning?
Meanwhile, in search of further explanation, I've found your excellent ndd annotations:
ftp://ftp.cup.hp.com/dist/networking/briefs/annotated_ndd.txt
02-03-2005 05:48 AM
Re: Need for TCP stack buffer tuning?
WRT the tusc trace, what you want to see is the setup of the listen socket. So, you want to tusc the initial launch of the sshd, and not when a client connects.
As for the accept manpage, by all means feel free to file a defect against it. I would.
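For illustration, a trace of the daemon's startup could look like the sketch below; -f follows forked children, -o writes the trace to a file, and the output path is just an example:
# tusc -f -o /tmp/sshd.tusc /opt/ssh/sbin/sshd -D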