Operating System - HP-UX

socket overflow parameter from netstat -s

 
Marc Bohnert
Advisor

socket overflow parameter from netstat -s

I am running HP-UX 11.0. When I do a netstat -s, I see under the UDP section: socket overflows = 1256.

What does this indicate and how do I fix it?


Thank you
3 REPLIES
Jennifer Chiarelli
Regular Advisor

Re: socket overflow parameter from netstat -s

It indicates that not enough sockets are available. If you do a search for "socket overflow" you will find documents describing fixes. In the meantime, it appears that you need to increase 2 kernel parameters (a reboot is required to do so). The parameters are "nfile" and "maxfiles".
ARPA-related patches may also help! The nettune command may also be needed, depending on your system requirements (explained in documents found by doing a search).

Best regards!
It's a binary world!
Vincenzo Restuccia
Honored Contributor

Re: socket overflow parameter from netstat -s

Try the s700_800 11.00 cumulative ARPA Transport patch (PHNE_21767).
Brian Hackley
Honored Contributor

Re: socket overflow parameter from netstat -s

Marc,

HP-UX, like other flavors of UNIX, does not provide a tool to identify which UDP or TCP socket encountered a buffer overflow.

Unless you are monitoring the counter, e.g. via a cron job that appends to a file every N minutes, you have no way of knowing whether these overflows occur slowly or all in bursts.
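A minimal monitoring sketch along those lines (the script name, log path, and the exact "socket overflows = N" line format are assumptions based on the output quoted in the question; here a sample string stands in for the live netstat output so you can see the parsing):

```shell
#!/bin/sh
# Sketch: record the UDP "socket overflows" counter with a timestamp.
# The sample variable stands in for live output; in the real script,
# replace it with:  sample=`netstat -s`
sample="        socket overflows = 1256"

# Pull the last field off the "socket overflows" line.
count=`echo "$sample" | grep "socket overflows" | awk '{print $NF}'`
echo "`date '+%Y-%m-%d %H:%M:%S'` udp_socket_overflows=$count"
```

A crontab entry such as `0,15,30,45 * * * * /usr/local/bin/udpmon.sh >> /var/tmp/udp_overflows.log` (hypothetical paths) would then show whether the counter climbs gradually or in jumps.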

However, with all that said, the most TYPICAL cause of UDP socket overflows is an NFS server that is getting more requests than it can handle. Are there NFS clients mounting from this system? Are the users complaining, or noticing an abnormal number of NFS timeouts in nfsstat -rc against THIS NFS server? If not, then you could probably just monitor the situation. Or perhaps another UDP application is the cause. Use netstat -an | grep -i udp and examine the UDP services presently in use.
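To get a quick per-port tally of those UDP sockets, the netstat output can be summarized with awk. A sketch, where the sample lines (and their port numbers) are illustrative only, standing in for a live `netstat -an | grep -i udp` pipe:

```shell
#!/bin/sh
# Count UDP sockets per local port. In netstat -an output, $4 is the
# local address, with the port as the last dot-separated field
# (e.g. "*.2049" or "127.0.0.1.49152").
# Replace the sample with:  sample=`netstat -an | grep -i udp`
sample="udp        0      0  *.2049                 *.*
udp        0      0  *.111                  *.*
udp        0      0  127.0.0.1.49152        *.*"

summary=`echo "$sample" | awk '{ n = split($4, a, "."); cnt[a[n]]++ }
                                END { for (p in cnt) print p, cnt[p] }'`
echo "$summary"
```

Port 2049 is the conventional NFS port, so a pile-up there would point back at the NFS server theory.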

Going back to the netstat -s socket overflow number: if you see it increasing gradually over time, then you might want to increase the number of nfsd daemons configured via the NUM_NFSD variable in /etc/rc.config.d/nfsconf. The default value of 4 is way too small for most environments; 64 is a good starting point if you are sharing NFS to clients.
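For reference, the change amounts to editing one variable and restarting the NFS server subsystem (paths as on a stock HP-UX 11.0 system; verify against your own /etc/rc.config.d before editing):

```shell
# In /etc/rc.config.d/nfsconf, raise the daemon count:
NUM_NFSD=64

# Then restart the NFS server daemons to pick up the change:
#   /sbin/init.d/nfs.server stop
#   /sbin/init.d/nfs.server start
```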

There might also be a disk bottleneck. Check the /etc/exports file to see which filesystems are exported, then compare disk performance in GLANCE or with sar -d 1 60.

I hope this helps you narrow down this problem,

-> Brian Hackley
Ask me about telecommuting!