Stale NFS
01-18-2002 03:43 PM
I have a K360 running HP-UX 10.20 that NFS-mounts a filesystem from an H70 server. All of a sudden I am getting:
# umount /mnt/usrigi
umount: /mnt/usrigi: Stale NFS file handle
# fuser -c /mnt/usrigi
/mnt/pea.usrigi:
pegasus # umount /mnt/pea.usrigi
umount: /mnt/pea.usrigi: Stale NFS file handle
pegasus # fuser -c /mnt/pea.usrigi
/mnt/pea.usrigi:
#
-----
fuser shows no users. I re-exported the filesystem on the H70 and still no results. This is a production system, so I am not too keen on rebooting in the middle of the day.
Question:
Is there any configuration change or upgrade I can make to eliminate these stale NFS file handles?
Is there a workaround (apart from rebooting) to "freshen up" NFS?
-- Rod "Frustrated with NFS" Hills
Solved! Go to Solution.
01-18-2002 03:44 PM
Re: Stale NFS
-- Rod Hills
01-18-2002 04:13 PM
Re: Stale NFS
The following procedure may or may not clear your stale NFS mount problems. If it fails, you will have to reboot your clients.
As an example, suppose that this system has mounted the filesystem exported by a remote NFS server named remote.sub.dom.com on the local directory /here:
# bdf
Filesystem            kbytes    used   avail %used Mounted on
/dev/vg00/lvol1        47829   25023   18023   58% /
remote.sub.dom.com:/  894630  584062  221105   73% /here
Now the remote NFS server goes down. After this happens, DO NOT use bdf; use mount -v to find out the name of the host and exported file system:
# mount -v
/dev/vg00/lvol1 on / type hfs defaults on {date}
remote.sub.dom.com:/ on /here type nfs rw,suid on {date}
The bdf and df commands will block in the kernel, which makes them unkillable. The mount -v command will not block on stale NFS mounts.
If you have already issued a bdf or df, wait several minutes before following the procedure below.
To see the processes using an NFS-mounted file system, specify the first argument that appears in the mount -v output:
# fuser remote.sub.dom.com:/
remote.sub.dom.com:/: 17858c 22566c
Specifying the local mount-point directory will not work if the NFS server is dead:
# fuser /here # does not work
Another method that will NOT work is to specify a partially-qualified hostname:
# fuser remote:/ # also does not work
remote:/: fuser: could not obtain file system ID for file remote:/
Specify exactly the same argument as reported by mount -v.
To unmount an NFS-mounted file system after the server has died:
1. Do not execute any commands that access the NFS mount point, such as bdf, df, or commands that try to read or write to that file system.
2. Run mount -v to get the canonical name of the host and its exported file system (in the example above, "remote.sub.dom.com:/").
3. Run fuser -k, giving it the canonical name reported by mount -v:
# fuser -k remote.sub.dom.com:/
4. Run umount to unmount the remote file system:
# umount remote.sub.dom.com:/
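The four steps above can be sketched as a small shell fragment. Since exercising it for real requires an HP-UX client with a dead NFS server, the mount -v transcript below is a canned sample (mirroring the example output earlier in this post), and the fuser/umount step is only echoed rather than executed:

```shell
#!/bin/sh
# Sketch: recover the canonical "host:/path" spec for each NFS mount from
# mount -v style output, so it can be passed to fuser -k and umount.
# The transcript below is a hypothetical sample, not live data.
mount_v_output='/dev/vg00/lvol1 on / type hfs defaults on {date}
remote.sub.dom.com:/ on /here type nfs rw,suid on {date}'

# Field 5 of each mount -v line is the filesystem type; field 1 is the
# spec exactly as mounted (the form that fuser and umount need).
nfs_specs=$(printf '%s\n' "$mount_v_output" | awk '$5 == "nfs" {print $1}')

for spec in $nfs_specs; do
    # On a real system this would be: fuser -k "$spec" && umount "$spec"
    echo "would run: fuser -k $spec && umount $spec"
done
```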
01-18-2002 06:48 PM
Re: Stale NFS
# netstat -p udp
If these numbers are not zero, you need to be careful. Socket overflows indicate that not enough daemons are running. Other errors likely mean an unhealthy network. Look at nfsstat also.
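A minimal sketch of the check Bill describes, run against a canned netstat transcript (the counter names follow the usual BSD-style udp statistics section; the numbers are invented for illustration):

```shell
#!/bin/sh
# Sketch: flag non-zero UDP socket overflows in netstat statistics output.
# The transcript is a hypothetical sample, not live data.
netstat_out='udp:
        0 incomplete headers
        0 bad checksums
        12 socket overflows'

# The count is the first field on the "socket overflows" line.
overflows=$(printf '%s\n' "$netstat_out" | awk '/socket overflows/ {print $1}')

if [ "${overflows:-0}" -gt 0 ]; then
    echo "WARNING: $overflows UDP socket overflows - consider more nfsd/biod daemons"
fi
```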
Bill Hassell, sysadmin
01-19-2002 09:10 AM
Solution
What version of NFS are you running on the 10.20 server? HP-UX 11.00 uses NFS version 3 by default, while 10.20 defaults to version 2.
Because HP-UX 11.00 defaults to NFSv3, my recommendation is to install a patch on your 10.20 boxes so that they also have NFSv3:
Load the patch PHNE_21108 (or a patch that supersedes it); that is the NFS Kernel General Release & Performance Patch, which includes NFS V3.
After installation, edit /etc/rc.config.d/nfsconf, which contains new variables that control the NFS behavior:
MOUNTD_VER=3
You can also enable the new automount feature with:
AUTOFS=1
HTH
Paula
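A quick way to confirm the settings Paula mentions is to pull the variables out of the config file, which is plain VAR=value shell syntax on HP-UX. The sketch below parses a canned nfsconf-style fragment standing in for /etc/rc.config.d/nfsconf:

```shell
#!/bin/sh
# Sketch: read MOUNTD_VER and AUTOFS out of an nfsconf-style file.
# Canned sample contents standing in for /etc/rc.config.d/nfsconf.
nfsconf='NFS_CLIENT=1
NFS_SERVER=1
MOUNTD_VER=3
AUTOFS=1'

# Split each line on "=" and match the variable name exactly.
mountd_ver=$(printf '%s\n' "$nfsconf" | awk -F= '$1 == "MOUNTD_VER" {print $2}')
autofs=$(printf '%s\n' "$nfsconf" | awk -F= '$1 == "AUTOFS" {print $2}')

echo "MOUNTD_VER=$mountd_ver AUTOFS=$autofs"
```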
01-21-2002 07:37 AM
Re: Stale NFS
When I talked to HP tech support I asked specifically if an upgrade would help prevent this situation, but I was told "no".
The workaround they attempted to get me started on was to use the "ifalias" command to map the IP address of the server machine to an alias on the client. I'm not sure how this would get around the problem, but I was told to look up document A5698647 (which I haven't found yet). I did not have the "ifalias" command installed at the time (it's from a patch), so I was not able to try the workaround.
Now that I have the latest patch release for NFS, I will have to wait and see if the problem recurs.
Again thank you for all your input.
-- Rod Hills