12-13-2007 12:45 AM
nfs slow
While make_net_recovery is running, the bdf command on the client is very slow to show the file system mounted from the Ignite server. There are also a lot of these messages: "vmunix: NFS server aaaaa not responding still trying". Consequently, the Ignite backup takes a very long time to complete.
Both server and client are 11.23. The server runs 64 nfsd, the client 100 biod.
nfsstat at client:
root@vinhp29:(root)> nfsstat -m
/var/opt/ignite/recovery/client_mnt from aaaaa:/var/opt/ignite/clients (Addr ......)
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
/var/opt/ignite/recovery/arch_mnt from aaaaa:/var/opt/ignite/recovery/archives/bbbbb (Addr .....)
Flags: vers=3,proto=tcp,auth=unix,hard,printed,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
Can anyone help?
Tags: NFS
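The key mount options can be pulled out of saved `nfsstat -m` output for a side-by-side comparison of the two mounts. A minimal sketch: the sample file below simply repeats the Flags lines shown above; on the live client you would redirect the real command output into the file instead.

```shell
# Save the Flags lines (here: the two lines quoted above) to a file.
cat > /tmp/nfsstat_m.out <<'EOF'
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
Flags: vers=3,proto=tcp,auth=unix,hard,printed,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
EOF
# Print only the options worth comparing (version, transport, transfer
# sizes, retransmit limit, and the "printed" marker).
awk -F'Flags: ' '/Flags:/ {
    n = split($2, opt, ",")
    out = ""
    for (i = 1; i <= n; i++)
        if (opt[i] ~ /^(vers|proto|rsize|wsize|retrans|printed)/)
            out = out opt[i] " "
    print out
}' /tmp/nfsstat_m.out
```

Note that the `printed` flag on the second mount means the "server not responding" message has already been logged for that file system.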
12-13-2007 01:14 AM
Re: nfs slow
Can you check the speed settings?
lanadmin -x
They have to be the same on the switch and on the system. Also make sure the HP server does not have auto-negotiation on unless it is 1000 FD.
Are you doing the Ignite backup over a firewall?
If you are not on the same subnet, then a firewall can slow you down.
Tags: lanadmin
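A quick way to run that check across every interface is a loop over the PPA numbers (a sketch for HP-UX only, run as root; it will not run elsewhere):

```shell
# `lanscan -p` lists the PPA numbers; `lanadmin -x <ppa>` reports the
# current speed, duplex, and autonegotiation state for that interface.
for ppa in $(lanscan -p); do
    echo "=== lan$ppa ==="
    lanadmin -x $ppa
done
```

Each reported value then needs to be checked against the corresponding switch port.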
12-13-2007 01:26 AM
Re: nfs slow
Speed = 100 Full-Duplex.
Autonegotiation = Off.
But I am not sure about the switch setting, because our network guys can't access the customer's switch.
Is there anything else that can be changed?
12-13-2007 01:30 AM
Re: nfs slow
Checklist:
- Check network duplex (lanscan, lanadmin).
- Check the NFS server logs.
- Check the network for congestion.
- Check the system patch level; have something current on the system.
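One way to put a number on "network congestion" from the NFS client's point of view is to compare RPC retransmits and timeouts against total calls. The sketch below parses a hypothetical fragment of `nfsstat -rc` output (field names vary by OS version); on the real client, redirect the actual command output into the file instead.

```shell
# Hypothetical saved client RPC counters (header row + value row).
cat > /tmp/nfs_rpc.out <<'EOF'
calls      badcalls   retrans    badxid     timeout    wait       newcred
125790     0          4318       0          4320       0          0
EOF
# Map header names to column positions, then print the interesting ones.
awk 'NR == 1 { for (i = 1; i <= NF; i++) col[$i] = i }
     NR == 2 { printf "calls=%s retrans=%s timeout=%s\n",
               $col["calls"], $col["retrans"], $col["timeout"] }' /tmp/nfs_rpc.out
```

A retrans count that is a noticeable fraction of calls (about 3% in this made-up sample) points at packet loss, which is exactly what a duplex mismatch produces.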
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
12-13-2007 03:43 AM
Re: nfs slow
ps -ef | grep biod
ps -ef | grep nfsd
If there are currently 16 processes, you can double this with:
/usr/sbin/nfsd 32
biod 16
and check if it goes better.
If so, also change /etc/rc.config.d/nfsconf.
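Put together, that suggestion looks something like the sketch below (HP-UX, run as root; the `nfsconf` variable names are from memory and worth verifying against the comments in the file itself):

```shell
# Count the daemons first; the [b]/[n] trick keeps grep from matching itself.
ps -ef | grep '[b]iod' | wc -l
ps -ef | grep '[n]fsd' | wc -l

# Start additional daemons for the current boot:
/usr/sbin/nfsd 16     # starts 16 more nfsd processes
/usr/sbin/biod 16     # starts 16 more biod processes

# To make the change persistent, edit /etc/rc.config.d/nfsconf, e.g.:
#   NUM_NFSD=32
#   NUM_NFSIOD=16
```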
12-13-2007 08:00 AM
Re: nfs slow
You positively MUST determine the corresponding ethernet switch port settings because having one end of the connection hard-set and the other end set to auto-negotiate is a sure way to cause mismatches.
If you cannot determine the switch port settings then I would assume auto-negotiate as that is the default for almost all switches.
BOTH ends MUST be set to auto-negotiate or BOTH ends MUST be set to the same hard-set values.
Surprisingly, mismatched settings will almost work: the problem may not even be apparent in a low-activity session such as telnet or rlogin, but performance will be terrible in an application like ftp or NFS.
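When the switch side cannot be inspected directly, a duplex mismatch usually still leaves fingerprints in the host's error counters: FCS/CRC errors on the full-duplex side, late collisions on the half-duplex side. A rough sketch of where to look on HP-UX (not runnable elsewhere):

```shell
# Interface packet/error/collision columns; steadily climbing Ierrs or
# Coll on an otherwise quiet link is suspicious.
netstat -i

# Driver-level statistics are available from the lanadmin menu
# (select "lan", then "display" for the chosen PPA).
lanadmin
```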
12-13-2007 06:40 PM
Re: nfs slow
Please confirm you are not using auto_mount?
12-13-2007 06:45 PM
Re: nfs slow
Yes, it looks like the speed setting needs to be changed. I will try that. Thank you all.
12-13-2007 07:03 PM
Re: nfs slow
Bill Hassell, sysadmin
12-14-2007 01:01 PM
Re: nfs slow
> both server & client are 11.23.
> At the server 64 nfsd, at client 100 biod.
You're using TCP for your NFS filesystems so the number of nfsds is irrelevant because those are only used for UDP requests. I would turn the number of nfsds on the server back to the default of 16 just in case there are any UDP clients still out there.
> at client 100 biod.
This is WWWWWWWWWWWAAAAAAAAAYYYYYYYYY too many biods to be running. I cannot imagine you have enough requests on the client to keep 100 biods busy - especially if all you have connecting the systems is a 100BT interface.
Also, there is a known problem of "thundering herds" in the 11.23 biod code path, so what I usually see is the more biods you run the worse your performance gets. For those of you unfamiliar with a thundering herd, it goes something like this:
1. Client application generates an NFS request
2. All 100 biods are woken
3. One of the biods is scheduled to do the request
4. The other 99 biods go back to sleep
Bottom line: with a thundering herd, the more contention there is the worse your performance gets.
Here is the set of kernel tunes I recommend to any NFS customer running 11.23:
_______________________________________
o nfs_async_read_avoidance_enabled
This tells the NFS client to issue READ calls even if all the biods are busy servicing WRITE calls
Default Setting: 0 (Disabled)
Recommended Setting: 1 (Enabled)
o nfs_fine_grain_fs_lock
By default (0), the NFS client code uses a global system-wide semaphore to control access to many routines and data structures. This use of a global semaphore leads to a lack of parallel activity through many of the main NFS client code paths. When set to 2, the client avoids all use of this global filesystem semaphore and uses finer grained locks to protect critical code paths and data structures. The result is a much higher performing NFS client.
Default Setting: 0 (Use FS Semaphore in all code paths)
Recommended Setting: 2 (Avoid FS Semaphore in all code paths)
o nfs_new_lock_code
By default (0) when an NFS client places a lock on a file we turn off the biods and buffer cache for this file, effectively making all access to the file synchronous. When enabled (1) the client will enable the biods and buffer cache on locked files if the entire file is locked.
Default Setting: 0 (Disabled)
Recommended Setting: 1 (Use biods and buffer cache on locked files)
o nfs_new_rnode_lock_code
This instructs the NFS client to allow processes waiting to lock an rnode (NFS version of an inode) on the NFS client to be interrupted by ^C. By default these processes sleep in the kernel at a non-interruptible state.
Default Setting: 0 (threads are not interruptible while waiting to lock an rnode)
Recommended Setting: 1 (threads are interruptible while waiting to lock an rnode)
o nfs_wakeup_one
There are a couple nasty thundering herd conditions in the NFS client code. By setting this tunable to 2 both of the thundering herd conditions are avoided and the CPU contention of the system is dramatically reduced as well as throughput increased.
Default Setting: 0 (both thundering herd conditions exist)
Recommended Setting: 2 (bypass both thundering herd conditions)
o nfs3_new_acache
By default (0) the NFS client uses a linear search when walking the list of credential structures associated with a given file or directory (i.e. all the users who want to look at a given NFS file or directory). When enabled (1), the NFS client uses a hashed algorithm which can greatly increase performance and reduce CPU overhead when many users attempt to access the same shared file or directory.
Default Setting: 0 (linear credential search)
Recommended Setting: 1 (hashed credential search)
_______________________________________
If you set the above tunables to the recommended values and decrease the biods back down to the default 16 I'd be curious if the performance improves.
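Applying those recommendations might look like the sketch below (HP-UX 11.23, run as root). Note an assumption: whether these NFS tunables are registered with kctune depends on the installed NFS patch level; if `kctune` reports one as unknown, fall back to the method described in the relevant patch documentation.

```shell
# Tunables whose recommended value is 1:
for t in nfs_async_read_avoidance_enabled nfs_new_lock_code \
         nfs_new_rnode_lock_code nfs3_new_acache; do
    kctune ${t}=1
done
# Tunables whose recommended value is 2:
kctune nfs_fine_grain_fs_lock=2
kctune nfs_wakeup_one=2
```

After that, stop the extra biods and bring the client back to the default count of 16 before re-running the Ignite backup.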
Regards,
Dave
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]