11-16-2009 01:44 PM
nfs slowness
We ran various tests and determined that it is not a network bandwidth issue.
Case 1: The NFS file system is mounted on the /a/b/c/d mount point and is being accessed heavily by an application. It takes 4 min 36 s to copy the xyz file (~220 MB) from this NFS FS to /tmp.
Case 2: The same NFS file system is mounted on /test. It takes about 1 min 56 s to 2 min to copy the same file to /tmp.
I used glance/tusc for both copies. Glance showed the process blocked on SYSTM for case 1 and blocked on CACHE for case 2.
Tusc showed only read and write calls for both tests.
Doing a line count on the same file, case 1 took 14 min and case 2 only 3 min.
What could be causing the slowness in case 1? Which kernel parameters should we tune to improve performance?
System config: 11.11 / 16 CPU / 40 GB RAM / 2 GB buffer cache
Thanks.
11-16-2009 02:03 PM
Re: nfs slowness
Possible slowness factors:
1) Link speed.
2) NFS parameters; run nfsstat to see details.
3) Overuse of the NFS mount.
4) NFS patches may be required.
Tuning doc:
http://docs.hp.com/en/1435/NFSPerformanceTuninginHP-UX11.0and11iSystems.pdf
Dave Olker, the author is the absolute guru on this topic.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
11-16-2009 03:10 PM
Re: nfs slowness
So it looks like you have an application problem, or did I miss something? Two file systems, same size: one with the application mounted and running on it, the other with nothing to speak of.
So this would not be an application issue but contention for resources on the first file system.
If so, you need to use 'lsof' to see what is being written to. Start with the biggest consumers and lsof their PIDs.
UNIX95=1 ps -ef -o vsz,pid,ppid,state,wchan,args | sort -rn | head -15
UNIX95=1 ps -ef -o pcpu,pid,ppid,state,wchan,args | sort -rn | head -15
Note the PIDs, their state, and what they are waiting on (wchan). Then:
lsof -p pid
lsof -p ppid
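A sketch of the sweep described above, combining the two steps into one loop. The head count and the lsof filtering are illustrative choices, not from the thread:

```shell
# Sketch: list the top memory consumers, then show what files
# each one has open for writing.
UNIX95=1 ps -e -o vsz,pid,args | sort -rn | head -5 |
while read -r vsz pid args; do
    echo "== pid $pid (${vsz} KB) $args"
    # lsof column 4 is the file descriptor; a trailing 'w' or 'u'
    # means the file is open for writing
    lsof -p "$pid" 2>/dev/null | awk '$4 ~ /[0-9]+[wu]/ {print "   ", $NF}'
done
```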
11-16-2009 03:11 PM
Re: nfs slowness
So this would not be an NFS issue but contention for resources on the first file system.
11-17-2009 10:34 AM
Re: nfs slowness
The link is good, and nfsstat doesn't show anything abnormal.
We could install the latest NFS patch, but we are not far behind the current one.
I am leaning toward tuning the kernel according to Dave Olker's guide. There are a few parameters left at default values.
Thanks.
11-17-2009 10:39 AM
Re: nfs slowness
Thanks for the commands.
I could get the wait channel; it prints some hex numbers from the kernel, but I don't know how to interpret them.
You are correct, I feel the same way: there must be resource contention on the NFS FS.
Will update if kernel tuning helps.
We also just opened a case with HP.
Thanks.
11-17-2009 10:44 AM
Re: nfs slowness
The first thing I would look at if two NFS filesystems are behaving differently on the same client are the mount options.
While both filesystems are mounted, can you issue the command "nfsstat -m" and post the output here?
Thanks,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

11-17-2009 12:50 PM
Re: nfs slowness
Thanks for replying.
The mount options are the same.
/dataso/claims/1.0/shared from netapp31:/vol/wisp/shared (Addr 10.x.x.x)
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
/prdata from netapp31:/vol/wisp/shared (Addr 10.x.x.x)
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
# cd /prdata/prdata
# timex cp medven /tmp
real 3:13.17
user 0.01
sys 3.82
# cd /dataso/claims/1.0/shared/prdata
# timex cp medven /tmp
real 5:21.92
user 0.01
sys 3.27
# cd /prdata/prdata
# timex wc -l medven
559130 medven
real 2:27.18
user 2.87
sys 2.54
# cd /dataso/claims/1.0/shared/prdata
# timex wc -l medven
559133 medven
real 15:14.33
user 2.90
sys 3.46
Kernel values are attached.
Thanks.
11-17-2009 12:55 PM
Re: nfs slowness
Kernel values are not going to differ between the two file systems, and are 99% certainly not the issue.
11-17-2009 01:44 PM
Re: nfs slowness
Odds are your problem is arising from the bandwidth-delay product of your WAN link. This is the round-trip time of the link multiplied by the number of bytes per second it can send.
A long, fat pipe can result in very poor performance of large-volume TCP data transfer, which is what an NFS mount basically is, because of the size of the TCP transmit and receive windows.
In most HP-UX systems, the default window is 32k.
So, for example, say you have a T3 link at 44,736,000 bits per second or 5,592,000 bytes per second, with a round-trip time of 40 milliseconds between your NFS client and server.
This means that the link can transmit as much as 223,680 bytes (5,592,000 bytes * 0.040 seconds) before it's even physically possible for the local system to get the first acknowledgment from the remote system.
With a TCP transmit window set to 32k, only 32k is sent in 5.85 milliseconds, and the other 34.15 milliseconds are spent waiting for the first ACK packet to come back.
You may see this referred to as packets "in flight," and it's a good way to visualize the situation.
The key to improving performance in this situation is to adjust the TCP transmit and receive windows to fit the capabilities of the WAN link.
The default window sizes are set via /dev/tcp: tcp_xmit_hiwater_def and tcp_recv_hiwater_def.
In the above example, if I cranked the transmit and receive windows up to 256k:
ndd -set /dev/tcp tcp_xmit_hiwater_def 262144
ndd -set /dev/tcp tcp_recv_hiwater_def 262144
... then instead of sending 32k and waiting 34 milliseconds for the acknowledgment, it'll send 223,680 bytes in 40 milliseconds and then start getting acknowledgments back to drain out the buffer as it continues to transmit.
(I think there's also a case to be made for making the window size an even multiple of the maximum segment size, typically 1460 bytes on a 1500-byte MTU link, but HP doesn't do that, and memory is cheap.)
There are other parameters in /dev/tcp you'll notice: the _lfp (long fat pipe) and _lnp (long narrow pipe) variants, which allow the system to set different TCP windows for different types of links. As I recall, though, that applies only to local interfaces, so if your router, not the HP system, is handling the WAN link, it's not triggered. You may want to investigate that further if you don't want the larger window to apply to every single TCP connection the system makes; memory is cheap nowadays, anyway.
Now, if you bump the TCP windows up like this there's a couple more settings you will probably want to change:
tcp_sack_enable = 1
tcp_ts_enable = 1
The "SACK" feature is "Selective ACK" from RFC-2018 - if some packets are lost, but not all, the receiver can acknowledge all the packets it has received, rather than just the highest in the continuous sequence, allowing the sender to fill in the gap without re-sending all the packets past the gap which were successfully received.
The "TS" feature is "timestamps" - since sequence numbers only go to 64k, it may be possible for them to wrap around when larger windows are in use. The timestamp will differentiate two packets with the same sequence number.
Setting these two parameters to "1" means that the system will both offer and accept these features. "2" means that the system will accept, but not offer.
To preserve these settings across reboot, update the /etc/rc.config.d/nddconf file:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_xmit_hiwater_def
NDD_VALUE[0]=262144
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_recv_hiwater_def
NDD_VALUE[1]=262144
TRANSPORT_NAME[2]=tcp
NDD_NAME[2]=tcp_ts_enable
NDD_VALUE[2]=1
TRANSPORT_NAME[3]=tcp
NDD_NAME[3]=tcp_sack_enable
NDD_VALUE[3]=1
Let me know how fast your NFS mount goes after you do this. I think both your /test and your /a/b/c/d times will decrease noticeably, depending, of course, on your actual bandwidth/delay product.
11-17-2009 01:55 PM
Re: nfs slowness
This means that you'll need to unmount and remount the NFS filesystem to cause it to use the new settings, and this can sometimes be challenging to do without a reboot, as I'm sure you know.
You'll also want to change the TCP window size settings on the NFS server, I expect.
11-17-2009 01:55 PM
Re: nfs slowness
The ndd parameters, like the kernel parameters, would apply to both file systems.
11-17-2009 02:03 PM
Re: nfs slowness
Good point, though - both mountpoints would be sharing the same TCP connection to the NFS server, I reckon.
11-17-2009 02:46 PM
Re: nfs slowness
The application doesn't reside on the NFS FS; only the data does.
mypel, I can read the same file faster from the other mount using the same WAN link.
Thanks.
11-17-2009 03:41 PM
Re: nfs slowness
OK:
"...It is being accessed heavily by application. It takes 4 mins 36 secs to copy xyz file ..."
At this point, I no longer know.