NFS performance issues
02-11-2003 05:37 AM
I have a bunch of (replicated?) SAP R/3 servers (HP-UX 11i).
The catch with these SAP boxes is that R/3 imposes heavy data exchange between them over automounted NFS mounts (the so-called transports).
What tops this ludicrousness is that the production instance is an MC/SG clustered system with an additionally installed SAP toolkit that takes responsibility for these silly NFS exports during package state changes (I guess similar to the MC/SG NFS toolkit).
Sometimes the cluster NFS dependencies even reveal themselves as a curse from a sysadmin's point of view.
But now the users of R/3 complain about bad performance while transports are running.
To me, the client-side automounted NFS shares from the production cluster seem to be the culprit.
But how can I identify the hog, and once I have, what can I do to overcome this without necessarily breaking with the somewhat insane NFS transport philosophy of SAP?
I read somewhere that on the NFS clients I should observe timeouts, retrans, and badxids as displayed by the "nfsstat -rc" command.
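For reference, the kind of counters I mean would look roughly like this (a sketch from memory, not real output from our boxes; as I understand it, badxid close to timeout points at a slow server, while many timeouts with few badxids point at the network or dropped packets):
$ nfsstat -rc
Client rpc:
calls      badcalls   retrans    badxid     timeout
1260805    812        839        801        812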
Another thing comes to mind.
Since we are forced to use NFS mounts anyway, why introduce additional latency with each automount?
Wouldn't it be better to deactivate the automounter and use static NFS mounts instead?
Or would it be wiser to just increase the default mount intervals (-tm)?
Apart from that, would it make sense to decrease the read and write buffer sizes through the NFS mount options -rsize and -wsize?
02-11-2003 06:13 AM
For NFS performance tuning, have a look at this document by Dave Olker:
http://www.docs.hp.com/hpux/onlinedocs/1435/NFSPerformanceTuninginHP-UX11.0and11iSystems.pdf
Regards,
Jochen
02-11-2003 06:26 AM
Re: NFS performance issues
One thing that will bite you on transports (and especially Support Packages using SPAM) is the number of background processes configured.
Please ensure you have at least 3-4 background processes per Application Server, or alternatively apply the transport directly on the Database Server. Otherwise the system 'thrashes' a little, trying to get all background jobs done with only a few processes.
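If it helps your Unix guys talk to the Basis team: the knob in question is the background work process count in the SAP instance profile, roughly like this (parameter name quoted from memory, so please have them confirm it):
rdisp/wp_no_btc = 4     # number of background (batch) work processes on this instance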
Share and Enjoy! Ian
02-11-2003 06:28 AM
Re: NFS performance issues
Now, for the tuning...
I'd recommend that you change a few things, namely the rsize and wsize options. I don't remember the default buffer size off the top of my head (and I'm at a site where I can't get to an HP box).
Normally I greatly reduce the buffer size, which means that data transfers quickly with less dead space in packets.
Look at changing the mount options for the required file systems, reducing both the read and write buffer size to 8192.
Also, increasing the number of nfsd's and biod's on the server and client respectively can help. These values are changed in the /etc/rc.config.d/nfsconf file, as sketched below.
NOTE: back up any of the boot config files before making changes, just in case.
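A rough sketch of what I mean (the variable names are quoted from memory, so check the comments in your own nfsconf, and the counts are only examples):
$ vi /etc/rc.config.d/nfsconf
NUM_NFSD=32        # server side: number of nfsd daemons started by nfs.server
NUM_NFSIOD=16      # client side: number of biod (nfsiod) daemons started by nfs.client
Then restart the NFS server/client startup scripts under /sbin/init.d (or reboot) so the new counts take effect.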
Regards,
Shannon
02-11-2003 06:53 AM
Re: NFS performance issues
Many thanks for the link.
I think this document will shed more light on the NFS mysteries and improve my understanding of NFS.
Ian,
does your remark about the proper configuration of background processes relate to anything on the SAP side?
I have to confess I have no SAP knowledge, and I hope our SAP guys can make something of your hint.
Or did you mean something that can be adjusted from a Unix sysadmin's perspective?
02-11-2003 06:58 AM
Re: NFS performance issues
NFS performance usually is not that simple. You have to find out what the bottleneck is.
Is it the server? Is it responding too slowly? If so, what is causing this? Are there enough nfsd's? Are they really running and not swapped out? Check with
$ export UNIX95=1
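# (setting UNIX95 enables the XPG4 options of ps, such as -C)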
$ ps -C nfsd -o comm,flags,pcpu
COMMAND F %CPU
nfsd 0 0
nfsd 0 0
nfsd 0 0
nfsd 0 0
...
If the flag (F) is 0, then the nfsd's are currently not in memory! Then the server does not have enough memory / there's a memory leak / the database is sized too large / ...
Other useful commands are
$ netstat -s # on client and server
$ nfsstat -s # on server
$ nfsstat -c # on client
$ nfsstat -m # on client
But most of this should be covered in Dave Olker's document.
Regards,
Jochen
02-11-2003 07:01 AM
Re: NFS performance issues
Should these (rsize/wsize) be set in /etc/fstab on the server?
SERVER:
In my case, nfsstat -m shows that the few directories from other machines that I mount on the HP server (they come off a Linux machine and one other HP Visualize station running 11.11) are all rsize/wsize=8192.
CLIENT:
When I go to the client and run nfsstat -m, I see that the mounted dirs (in this case, all from the HP server) don't have matching rsize/wsize:
Directories mounted from the client-side portion of the /etc/auto.direct file are rsize/wsize=32768.
Directories mounted by the NIS-pushed portion of the auto.direct file are rsize/wsize=32768.
Any directory mounted from a NIS-pushed /etc/auto.home is rsize/wsize=8192.
All these directories come from the same HP server when viewed on the client.
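For reference, the nfsstat -m output I am reading these values from looks roughly like this (server name and numbers made up, and the exact field layout may vary with OS release and patch level):
/data/trans from hpserver:/export/trans (Addr 10.0.0.5)
 Flags:  vers=3,proto=udp,auth=unix,hard,intr,dynamic,rsize=32768,wsize=32768,retrans=5
 Reads:  srtt=7 (17ms), dev=4 (20ms), cur=2 (40ms)
 Writes: srtt=10 (25ms), dev=5 (25ms), cur=3 (60ms)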
--------------
Sorry if I'm out of place or if I muddied the waters here by jumping in with my issues... but this seemed like the place to ask.
Where exactly is the rsize/wsize controlled? Is it an NFS (server) setting or a VxFS setting at mount time?
Ian
02-11-2003 07:01 AM
Re: NFS performance issues
Jochen
02-11-2003 07:13 AM
Re: NFS performance issues
wsize and rsize are mount options, so client side. Normally you can change these options in the auto_master (or in the individual automounter maps). Just add the args
rsize=8192,wsize=8192
Similarly, the same args can be added to the mount options in the fstab file.
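For example, a direct-map entry and the equivalent fstab line might look something like this (server name and paths are made up here, adjust them to your own transport share).
In the auto.direct map:
/usr/sap/trans  -rsize=8192,wsize=8192,hard,intr  prodclu:/export/sap/trans
And the equivalent /etc/fstab entry:
prodclu:/export/sap/trans  /usr/sap/trans  nfs  rsize=8192,wsize=8192,hard,intr  0 0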
On pure file servers, I usually reduce the buffer size to 4096.
On servers, I usually run 32-64 nfsd's, depending on how many clients they have (normally I go 2 clients per nfsd). On the clients, I run 8-16 biod's depending on how much data is mounted. If I have data and apps, then 16 biod's. Just data, then 8. The default for both is 4 in HP-UX.
Regards,
Shannon