
nfs slow

Frequent Advisor

nfs slow

Hi all,
While make_net_recovery is running, the bdf command on the client is very slow to display the file system mounted from the Ignite server. There are also many of these messages: "vmunix: NFS server aaaaa not responding still trying". Consequently the Ignite backup takes a very long time to complete.

Both server and client are 11.23. The server runs 64 nfsd daemons; the client runs 100 biods.

nfsstat at client:

root@vinhp29:(root)> nfsstat -m
/var/opt/ignite/recovery/client_mnt from aaaaa:/var/opt/ignite/clients (Addr ......)
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)

/var/opt/ignite/recovery/arch_mnt from aaaaa:/var/opt/ignite/recovery/archives/bbbbb (Addr .....)
Flags: vers=3,proto=tcp,auth=unix,hard,printed,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)

Can anyone help?
Esteemed Contributor

Re: nfs slow

Can you check the speed settings?
lanadmin -x
They have to be the same on the switch and on the system. Also make sure the HP server does not have auto-negotiation on unless it is 1000 FD.
Are you running the Ignite backup across a firewall? If you are not on the same subnet, a firewall can slow you down.
Frequent Advisor

Re: nfs slow

root@aaaaaa:(root)> lanadmin -x 3
Speed = 100 Full-Duplex.
Autonegotiation = Off.

But I'm not sure about the switch settings because our network guys can't access the customer's switch.

Is there anything else that can be changed?
Exalted Contributor

Re: nfs slow



Check the network duplex settings (lanscan, lanadmin).
Check the NFS server logs.
Check the network for congestion.
System patches: have something current on the system.

Steven E Protter
Owner of ISN Corporation
Esteemed Contributor

Re: nfs slow

You can increase the number of biod/nfsd processes:
ps -ef | grep biod
ps -ef | grep nfs
If there are currently 16 processes, you can double that with:
/usr/sbin/nfsd 32
biod 16

Then check whether it goes better;
if so, also change /etc/rc.config.d/nfsconf so the setting survives a reboot.
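The boot-time daemon counts live in /etc/rc.config.d/nfsconf. A sketch of the relevant variables follows the doubling suggested above; the values here are illustrative, so check the comments in your own nfsconf before editing:

```shell
# /etc/rc.config.d/nfsconf (fragment) -- illustrative values only
NFS_SERVER=1        # start NFS server daemons at boot
NFS_CLIENT=1        # start NFS client daemons at boot
NUM_NFSD=32         # number of nfsd daemons to start (default 16)
NUM_NFSIOD=16       # number of biod daemons to start (default 16)
```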
Acclaimed Contributor

Re: nfs slow

Because you are running 11.23, your hardware is at least fairly modern, and that means you should probably not hard-set your network speed/duplex but rather leave it at auto-negotiate.

You positively MUST determine the corresponding ethernet switch port settings because having one end of the connection hard-set and the other end set to auto-negotiate is a sure way to cause mismatches.
If you cannot determine the switch port settings then I would assume auto-negotiate as that is the default for almost all switches.

BOTH ends MUST be set to auto-negotiate or BOTH ends MUST be set to the same hard-set values.

Surprisingly, mismatched settings will almost work: they may not even be apparent in a low-activity session such as telnet or rlogin, but they will perform terribly in an application like ftp or NFS.
If it ain't broke, I can fix that.
Valued Contributor

Re: nfs slow

I too would say to check the duplex settings; auto-negotiation would be a good thing.

Please confirm that you are not using automount?

Baldric, I have a plan so cunning you could pin a tail on it and call it a weasle.
Frequent Advisor

Re: nfs slow

autofs=0, because the file system is only mounted during Ignite, not after reboot.

Yes, it looks like the speed settings need to be changed. I will try that. Thank you all.
Honored Contributor

Re: nfs slow

You can see whether the duplex setting is causing the problem. Just run lanadmin and look at the error stats on the second page. FCS and collision errors indicate a mismatch, which will trash your connection.
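Those counters can also be pulled non-interactively; a sketch, using PPA 3 from the earlier lanadmin output in this thread (substitute your own PPA):

```shell
# Display the MIB/Ethernet statistics for PPA 3 and pick out the
# counters that reveal a duplex mismatch (non-zero FCS or collision
# errors on a full-duplex link are a strong hint).
lanadmin -g 3 | grep -i -e "fcs" -e "collision"
```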

Bill Hassell, sysadmin

Re: nfs slow

First off, you're running *way* too many daemons here:

> both server & client are 11.23.
> At the server 64 nfsd, at client 100 biod.

You're using TCP for your NFS filesystems so the number of nfsds is irrelevant because those are only used for UDP requests. I would turn the number of nfsds on the server back to the default of 16 just in case there are any UDP clients still out there.

> at client 100 biod.

This is WWWWWWWWWWWAAAAAAAAAYYYYYYYYY too many biods to be running. I cannot imagine you have enough requests on the client to keep 100 biods busy - especially if all you have connecting the systems is a 100BT interface.

Also, there is a known problem of "thundering herds" in the 11.23 biod code path, so what I usually see is the more biods you run the worse your performance gets. For those of you unfamiliar with a thundering herd, it goes something like this:

1. Client application generates an NFS request
2. All 100 biods are woken
3. One of the biods is scheduled to do the request
4. The other 99 biods go back to sleep

Bottom line: with a thundering herd, the more contention there is the worse your performance gets.

Here is the set of kernel tunes I recommend to any NFS customer running 11.23:


o nfs_async_read_avoidance_enabled
This tells the NFS client to issue READ calls even if all the biods are busy servicing WRITE calls

Default Setting: 0 (Disabled)
Recommended Setting: 1 (Enabled)

o nfs_fine_grain_fs_lock
By default (0), the NFS client code uses a global system-wide semaphore to control access to many routines and data structures. This use of a global semaphore leads to a lack of parallel activity through many of the main NFS client code paths. When set to 2, the client avoids all use of this global filesystem semaphore and uses finer grained locks to protect critical code paths and data structures. The result is a much higher performing NFS client.

Default Setting: 0 (Use FS Semaphore in all code paths)
Recommended Setting: 2 (Avoid FS Semaphore in all code paths)

o nfs_new_lock_code
By default (0) when an NFS client places a lock on a file we turn off the biods and buffer cache for this file, effectively making all access to the file synchronous. When enabled (1) the client will enable the biods and buffer cache on locked files if the entire file is locked.

Default Setting: 0 (Disabled)
Recommended Setting: 1 (Use biods and buffer cache on locked files)

o nfs_new_rnode_lock_code
This instructs the NFS client to allow processes waiting to lock an rnode (NFS version of an inode) on the NFS client to be interrupted by ^C. By default these processes sleep in the kernel at a non-interruptible state.

Default Setting: 0 (threads are not interruptible while waiting to lock an rnode)
Recommended Setting: 1 (threads are interruptible while waiting to lock an rnode)

o nfs_wakeup_one
There are a couple nasty thundering herd conditions in the NFS client code. By setting this tunable to 2 both of the thundering herd conditions are avoided and the CPU contention of the system is dramatically reduced as well as throughput increased.

Default Setting: 0 (both thundering herd conditions exist)
Recommended Setting: 2 (bypass both thundering herd conditions)

o nfs3_new_acache
By default (0) the NFS client uses a linear search when walking the list of credential structures associated with a given file or directory (i.e. all the users who want to look at a given NFS file or directory). When enabled (1), the NFS client uses a hashed algorithm which can greatly increase performance and reduce CPU overhead when many users attempt to access the same shared file or directory.

Default Setting: 0 (linear credential search)
Recommended Setting: 1 (hashed credential search)


If you set the above tunables to the recommended values and decrease the biods back down to the default 16 I'd be curious if the performance improves.
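Assuming the patches that deliver these tunables are installed (see the patch discussion later in this thread), the whole set above can be applied in one pass with kctune; a sketch, to be run as root on the NFS client:

```shell
# Apply the recommended 11.23 NFS client tunables listed above.
# These tunables only exist once the later NFS/ONC patches are on
# the system; kctune will reject names it does not know.
kctune nfs_async_read_avoidance_enabled=1 \
       nfs_fine_grain_fs_lock=2 \
       nfs_new_lock_code=1 \
       nfs_new_rnode_lock_code=1 \
       nfs_wakeup_one=2 \
       nfs3_new_acache=1

# Verify the new values took effect.
kctune | grep "^nfs"
```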



Frequent Advisor

Re: nfs slow

I tried this on a test server, but it is not working:

root@univac:/# kctune nfs_new_lock_code=1
ERROR: The tunable 'nfs_new_lock_code' is not known. (If you are
trying to create a user-defined tunable, specify the -u flag.)

Re: nfs slow

Are you sure the NFS server is 11.23? What NFS/ONC patches are installed on the server?

Frequent Advisor

Re: nfs slow

NFS B.11.23 ONC/NFS; Network-File System,Information Services,Utilities
PHCO_31546 1.0 quota(1) on an NFS client
SG-NFS-Tool A.11.23.02 MC/ServiceGuard NFS Script Templates

Re: nfs slow

It doesn't look like you have any NFS patches installed. Most of the kctune parameters were delivered in patches after 11.23 shipped. I'd recommend you install the following:

PHNE_36982 11.23 libnsl cumulative patch
PHNE_36979 11.23 Kernel RPC cumulative patch
PHNE_36638 11.23 NFS cumulative patch
PHNE_36639 11.23 RPC commands and daemons cumulative patch
PHNE_35512 11.23 Lock Manager cumulative patch
PHNE_34756 11.23 Core NFS cumulative patch
PHNE_33100 11.23 AutoFS cumulative patch
PHNE_32057 11.23 CacheFS cumulative patch

Along with any dependent patches. This will patch your system with the latest NFS/ONC code and make all the kctune parameters available to you.
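The install itself might look like the sketch below; the depot path is hypothetical, and since these are kernel patches the system will reboot as part of the installation:

```shell
# Hypothetical depot location -- adjust to wherever the patches
# were unpacked. Confirm the depot contents first:
swlist -l depot -s /var/depot/nfs_patches

# Install the NFS/ONC cumulative patches; kernel patches require
# a reboot, which autoreboot=true allows swinstall to perform.
swinstall -x autoreboot=true -s /var/depot/nfs_patches \
    PHNE_36982 PHNE_36979 PHNE_36638 PHNE_36639 \
    PHNE_35512 PHNE_34756 PHNE_33100 PHNE_32057
```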