
Stale NFS

 
SOLVED
Rodney Hills
Honored Contributor

Stale NFS

I have a H70 running HPUX 11.11 exporting a filesystem.
I have a K360 running HPUX 10.20 mounting through NFS above filesystem.

All of a sudden I am getting

pegasus # umount /mnt/pea.usrigi
umount: /mnt/pea.usrigi: Stale NFS file handle
pegasus # fuser -c /mnt/pea.usrigi
/mnt/pea.usrigi:
pegasus #

-----
fuser shows no users. I re-exported the filesystem on the H70 and still no results. With a production system, I am not too keen on rebooting in the middle of the day.

Question:
Is there any configuration change or upgrade I can do to eliminate these stale NFS file handles?

Is there a workaround (apart from rebooting) to "freshen up" NFS?

-- Rod "Frustrated with NFS" Hills
There be dragons...
5 REPLIES
Rodney Hills
Honored Contributor

Re: Stale NFS

PS - I did scan the forums on this topic and found no help...

-- Rod Hills
There be dragons...
S.K. Chan
Honored Contributor

Re: Stale NFS

From Knowledge Mine..

The following procedure may or may not clear your stale NFS mount problems. If the procedure fails, you will have to reboot your clients.

As an example, suppose that this system has mounted, at a local mount point named /here, a file system exported by a remote NFS server named remote.sub.dom.com:

# bdf
Filesystem             kbytes    used   avail %used Mounted on
/dev/vg00/lvol1         47829   25023   18023   58% /
remote.sub.dom.com:/   894630  584062  221105   73% /here

Now the remote NFS server goes down. After this happens, DO NOT use bdf; use mount -v to find out the name of the host and exported file system:

# mount -v
/dev/vg00/lvol1 on / type hfs defaults on {date}
remote.sub.dom.com:/ on /here type nfs rw,suid on {date}

The bdf and df commands will block in the kernel, which makes them unkillable. The mount -v command will not block on stale NFS mounts.

If you have already issued a bdf or df, try waiting several minutes before following the procedure below.

To see the processes using an NFS-mounted file system, specify the first argument that appears in the mount -v output:

# fuser remote.sub.dom.com:/
remote.sub.dom.com:/: 17858c 22566c

Specifying the local mount-point directory will not work if the NFS server is dead:
# fuser /here # does not work

Another method that will NOT work is to specify a partially-qualified hostname:

# fuser remote:/ # also does not work

remote:/: fuser: could not obtain file system ID for file remote:/

Specify the exact same argument as reported by mount -v.

To unmount an NFS-mounted file system after the server has died:

1. Do not execute any commands that access the NFS mount point, such as bdf, df, or commands that try to read or write to that file system.

2. Run mount -v to get the canonical name of the host and its exported file system (in the example above, "remote.sub.dom.com:/").

3. Run fuser -k, giving it the canonical name reported by mount -v:

# fuser -k remote.sub.dom.com:/

4. Run umount to unmount the remote file system:
# umount remote.sub.dom.com:/
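The four steps above can be sketched as a small shell fragment. This is my own illustration, not from HP documentation: the mount -v sample text is made up, and the fuser -k/umount commands are only echoed so nothing is killed by accident.

```shell
# Sketch: pull the canonical "host:/path" names of NFS mounts out of
# mount -v style output, since fuser -k and umount need that exact string.
# The sample output below is illustrative, not captured from a real system.
mount_v='/dev/vg00/lvol1 on / type hfs defaults on Mon Jan 1
remote.sub.dom.com:/ on /here type nfs rw,suid on Mon Jan 1'

# Field 5 is the filesystem type; field 1 is the canonical name.
nfs_names=$(printf '%s\n' "$mount_v" | awk '$5 == "nfs" { print $1 }')

for name in $nfs_names; do
    # Echo instead of executing, so you can review before running for real.
    echo "would run: fuser -k $name && umount $name"
done
```

Run the printed commands by hand once you have confirmed the names match your own mount -v output exactly.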


Bill Hassell
Honored Contributor

Re: Stale NFS

As you can see, using NFS can be detrimental to the health of any production server. NFS critically depends on reliable (and not overloaded) networking as well as the latest patches. Be sure to check on the health of your NFS connections with:

# netstat -p udp

If these numbers are not zero, you need to be careful. Socket overflows indicate that not enough daemons are running. Other errors likely mean an unhealthy network. Look at nfsstat as well.
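As a rough illustration of what to look for, here is a sketch that scans saved netstat output for a non-zero "socket overflows" counter. The sample text is my own assumption; real netstat -p udp output on HP-UX will differ in layout.

```shell
# Sketch: flag non-zero UDP socket overflows in saved netstat output.
# The sample below is made up for illustration.
netstat_out='udp:
        0 incomplete headers
        12 socket overflows'

overflows=$(printf '%s\n' "$netstat_out" | awk '/socket overflows/ { print $1 }')
if [ "${overflows:-0}" -gt 0 ]; then
    echo "WARNING: $overflows UDP socket overflows - consider running more nfsd/biod daemons"
fi
```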


Bill Hassell, sysadmin
Paula J Frazer-Campbell
Honored Contributor
Solution

Re: Stale NFS

Rodney

What version of NFS are you running on the 10.20 server?

HP-UX 11 uses NFS v3 by default, and 10.20 uses v2.

Because HP-UX 11.00 defaults to NFSv3, my recommendation is to install a patch on your 10.20 boxes so they also have NFS v3:

Load the patch PHNE_21108 (or a superseding patch); that's the NFS Kernel General Rel & Perf Patch, which includes NFS V3.
After installation you have to edit /etc/rc.config.d/nfsconf, where there are new variables which control the NFS behavior:
MOUNTD_VER=3
You can also enable the new automount feature with:
AUTOFS=1
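After patching and editing nfsconf, one way to confirm a client mount actually negotiated v3 is to look for vers=3 in nfsstat -m output. The sketch below only parses a sample line; the pegasus:/usrigi mount and the exact output layout shown are my own illustration, not taken from a real system.

```shell
# Sketch: extract the negotiated NFS version from nfsstat -m style output.
# Sample text is illustrative only.
nfsstat_m='/mnt/pea.usrigi from pegasus:/usrigi (Addr 10.0.0.1)
 Flags: vers=3,proto=udp,auth=unix,hard,intr'

vers=$(printf '%s\n' "$nfsstat_m" | sed -n 's/.*vers=\([0-9]*\).*/\1/p')
echo "NFS version negotiated: $vers"
```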


HTH

Paula
If you can spell SysAdmin then you is one - anon
Rodney Hills
Honored Contributor

Re: Stale NFS

Thanks for all your input. I ended up having to do a reboot this weekend anyway (to install v3 of NFS).

When I talked to HP tech support I asked specifically if an upgrade would help prevent this situation, but I was told "no".

The workaround they tried to get me started on was to use the "ifalias" command to map the IP address of the server machine to an alias on the client. I'm not sure how this would get around the problem, but I was told to look up document A5698647 (which I haven't found yet). I did not have the "ifalias" command installed at the time (it comes from a patch), so I was not able to try the workaround.

Now that I have the latest patch release for NFS, I will have to wait and see if the problem recurs.

Again thank you for all your input.

-- Rod Hills
There be dragons...