How to flush stale NFS meta data?
07-30-2004 12:49 AM
This is indirectly related to my SAP NFS thread.
These SAP NFS exports and mounts are causing me some grief.
On a client I had to remount an export that the server reported as being exported to this client.
However, I wasn't able to mount the export on the client because it claimed that the device was busy.
This mess must all be due to stale NFS meta data.
In the end I only succeeded in remounting after doing an rmdir and a fresh mkdir of the mountpoint.
This is far from satisfactory.
Besides, I encounter similar oddities from long-expired exports and mounts.
For instance, showmount -d displays active mounts that no longer exist, and which even exportfs no longer reports as exported on the server (because the server inadvertently had to be rebooted without signalling the clients to release their mounts).
Is there a way to flush all the stale NFS meta data?
I looked at /etc/{xtab,rmtab}, but there were no longer any entries for the stale export and mounts in them on either the server or the client.
07-30-2004 01:45 AM - last edited on 06-30-2022 01:18 AM by Sunitha_Mod
Hi Ralph,
I'm not really sure what you mean by "expired" NFS metadata. However, I will say this - I would never base any NFS admin decisions on anything returned by either the "showmount -d" or "showmount -a" commands. Any commands that use the /etc/rmtab file are going to return bogus information nearly every time.
The /etc/rmtab file, as you know, holds information about which clients have mounted filesystems from the local NFS server. This file is notoriously inaccurate. For example, I picked one of my systems at random and looked at its /etc/rmtab file:
# cat /etc/rmtab
hpatcux4.rose.hp.com:/tmp
hpatcux5.rose.hp.com:/tmp
atc01.cup.hp.com:/tmp
(anon):/home/dolker
ros87252olk.rose.hp.com:/home/dolker
atc03.cup.hp.com:/home/dolker
All of these entries are bogus. This system hasn't had an NFS client mount its filesystems in several months, if not years. Some of these systems no longer exist in my environment; they were decommissioned long ago.
The problem with /etc/rmtab is that the entries are only cleaned up when an NFS client explicitly issues a umount command for the filesystem in question. Only then does rpc.mountd invalidate the entry in /etc/rmtab. If an NFS client goes away without formally unmounting then you'll get stale entries in /etc/rmtab. Don't even get me started on PC-NFS clients, since they go away faster than any UNIX clients I've ever seen.
The way I've always recommended cleaning up the /etc/rmtab file is:
On the NFS server:
1. Terminate the running rpc.mountd
2. rm /etc/rmtab
3. Restart rpc.mountd
Obviously you'd want to do this at a time when no NFS clients are trying to mount filesystems. However, you could issue all of these commands as a single command line, or put them in a small script, and it would run in under a second. The only NFS mounts that would be denied would be those that come in while rpc.mountd is down, and that shouldn't be very long in this case.
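For reference, a minimal sketch of those three steps as a single command line, assuming rpc.mountd lives at /usr/sbin/rpc.mountd (verify the path and the running process name on your HP-UX release before trying this on a production server):
# MOUNTD_PID=$(ps -ef | grep '[r]pc.mountd' | awk '{print $2}')
# kill $MOUNTD_PID && rm -f /etc/rmtab && /usr/sbin/rpc.mountd
The bracketed '[r]' in the grep pattern simply keeps the grep process itself out of the match.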
Hope this helps. If I've completely misunderstood your question then I apologize, but whenever I see statements like "I had to remount because the client was reported as being mounted" I get nervous that the old unreliable /etc/rmtab file is wreaking havoc again.
Regards,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-01-2004 05:59 PM
Re: How to flush stale NFS meta data?
Thanks for giving me a little lesson on NFS basics.
I know I should educate myself, and I even have excellent references lying around on my bookshelf (e.g. O'Reilly's NFS/NIS admin book).
The HP-UX "Installing and Administering NFS" manual isn't bad either.
But I lacked the time, and there was so much other reading with higher priority.
That is the reason for my wrong notion of "meta data": I had suspected a more sophisticated store of mount and export tables, hidden somewhere in the protocol.
I wouldn't have expected such a simple database as the flat files /etc/{x,rm}tab.
So it looks as though /etc/rmtab is only updated when rpc.mountd starts, or when a client manages to get its umount RPC through to the server.
Then I will do as you suggest, and keep the table updated by sending rpc.mountd on the server an appropriate signal.
Speaking of signals, I've come across a partly undocumented feature (at least as far as the manpages are concerned) in the troubleshooting section of the above-mentioned HP-UX NFS manual.
It says there that when the rpc.mountd process receives a SIGUSR2 it will go into logging mode.
So I sent it this signal on the server.
Now I'm wondering whether this will degrade the already poor performance even further, which is what I'd expect.
08-01-2004 06:02 PM
Re: How to flush stale NFS meta data?
So this will happen this week, when I will also have a chance to talk to the SAP consultant.
08-02-2004 02:11 AM - last edited on 06-30-2022 01:17 AM by Sunitha_Mod
Re: How to flush stale NFS meta data?
Hi Ralph,
Many of HP's ONC/RPC daemons support the SIGUSR2 signal to toggle debug logging on and off. The list includes:
rpc.lockd
rpc.statd
rpc.mountd
automountd
This logging facility is incredibly important to know about, as it allows the administrator to turn on debug logging without having to terminate/restart the daemons. Thus, when a problem occurs, the admin can toggle logging on, reproduce the problem, and toggle logging back off (using another SIGUSR2), and end up with a log file containing only the data relevant to the problem they've encountered. This aids in troubleshooting these daemons enormously. I document this debug logging facility throughout my NFS performance white paper and Optimizing NFS Performance book.
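As a reference, a minimal sketch of that toggle-on / reproduce / toggle-off workflow for rpc.mountd; the same pattern should apply to the other daemons listed above, and the PID lookup shown here is just one way to find the running process:
# MOUNTD_PID=$(ps -ef | grep '[r]pc.mountd' | awk '{print $2}')
# kill -USR2 $MOUNTD_PID      (first SIGUSR2: debug logging on)
... reproduce the NFS problem ...
# kill -USR2 $MOUNTD_PID      (second SIGUSR2: debug logging off)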
When you say you'll keep the /etc/rmtab table updated by sending the appropriate signal, do you mean you'll terminate the daemon, wipe out the file, and restart the daemon, or is there some other signal you're referring to? The SIGUSR2 signal has no effect on the /etc/rmtab file contents. Just checking...
As for the debug logging mechanism's effect on rpc.mountd performance, yes, there will be some impact. However, I've seen systems with debug rpc.mountd logging enabled that are still able to process dozens of MOUNT requests per second.
Keep in mind that rpc.mountd only gets involved at certain times - MOUNT, UNMOUNT, or when someone issues the showmount command against this system. Unless you're talking about an extremely busy NFS server that is getting tons of requests to rpc.mountd, I doubt you'll see a negative impact from the debug logging.
However, if you're not currently seeing a problem with your NFS server, why would you leave debug rpc.mountd logging enabled? Just curious.
One other thing to remember - debug logging from rpc.mountd is extremely verbose, so you need to make sure you have enough disk space available in /var/adm to handle the growing log file.
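For example, a quick way to keep an eye on the free space in that filesystem on HP-UX (the exact debug log file name depends on your release, so check where your rpc.mountd actually writes):
# bdf /var/adm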
Best regards,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
