rmtab contents
12-13-2005 10:39 AM
My /etc/rmtab file contains many entries like this:
#rgpc06n:/usr/sap/xfer/PRD/scales
They are all exactly the same. I know the client mounting the filesystem is argpc06n, but why is the first character a "#", and why are there so many of them?
12-13-2005 11:01 AM - last edited on 06-30-2022 01:53 AM by Sunitha_Mod
Re: rmtab contents
Hi Matthew,
The /etc/rmtab file is used by rpc.mountd and the showmount command. It is notoriously inaccurate, but the rpc.mountd daemon makes every effort to keep the contents up to date.
The "#" character indicates that the client that mounted that filesystem sent in a UMOUNT request for it. rpc.mountd changes the first character of the entry to "#" to effectively disable it so that it won't show up in future showmount -a output.
As for why there are multiple commented-out entries, that's probably a bug in the pattern matching algorithm used by rpc.mountd.
What OS version are you running? What ONC patch level are you running? There may be an ONC patch for rpc.mountd that fixes your problem.
However, since this file is all but useless (the information in it should *never* be relied on as accurate, for many reasons), the best course of action may be to just wipe out the file periodically and let rpc.mountd start with a fresh file.
To do this you would terminate the running rpc.mountd daemon, remove the file, and re-start rpc.mountd. You can do this all on one command line so that rpc.mountd is only stopped for a second or so and your NFS clients shouldn't notice any outage.
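As a sketch, that one-liner could look something like this (the /usr/sbin/rpc.mountd path is the usual HP-UX location, but verify it on your system, and test the PID lookup before relying on it):
# kill `ps -e | awk '/rpc.mountd/ {print $1}'` ; rm -f /etc/rmtab ; /usr/sbin/rpc.mountd
If you want to trim the file regularly, the same line could go into root's crontab on whatever schedule suits you.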
I hope this helps,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

12-13-2005 11:17 AM
Re: rmtab contents
Interestingly, 4 days ago I removed the rmtab file and rebooted the server. The rmtab at that time only had about 50 entries and was rather static in size. At least once in the past 4 days I have seen it go from about 30 entries to 0 entries, then climb back to the current 117 entries.
12-13-2005 01:32 PM
Solution
You wrote:
=======================================
Interestingly, 4 days ago I removed the rmtab file and rebooted the server. The rmtab at that time only had about 50 entries and was rather static in size. At least once in the past 4 days I have seen it go from about 30 entries to 0 entries, then climb back to the current 117 entries.
=======================================
I'm confused as to how it went from 30 entries to 0 entries without some kind of user intervention. How were you measuring the size of the file? Were you looking at the raw /etc/rmtab file with an editor like vi, or were you using the showmount -a command?
If you were using showmount -a then I could understand how the number could fluctuate, as that command filters out the "#" entries. So if a showmount -a at time X gave a different list than a showmount -a at time Y, the difference may have been that some of those clients sent in UMOUNT requests, causing their entries to be disabled with a "#".
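A quick way to see this effect on the raw file (a sketch; showmount may print a header line, so treat the counts as approximate):
# wc -l /etc/rmtab         <- all entries, active and disabled
# grep -c '^#' /etc/rmtab  <- entries disabled by UMOUNT requests
# showmount -a | wc -l     <- active entries only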
As for ways of determining which clients are mounting your filesystems, there really isn't a sure-fire method of capturing this short of running a packet sniffer on the server and filtering for NFS traffic.
In a future release of HP-UX we will be introducing an enhanced NFS server logging mechanism that will allow you to log NFS transactions in the kernel and know which clients are accessing which files. That mechanism will help in cases like these.
You could enable debug logging of the rpc.mountd daemon by sending the running mountd a SIGUSR2 (kill -17 on HP-UX). The debug log will show you which clients are sending MOUNT and UMOUNT requests.
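A sketch of sending that signal (on HP-UX, SIGUSR2 is signal 17, so kill -17 and kill -USR2 are equivalent; this assumes a single rpc.mountd process):
# kill -USR2 `ps -e | awk '/rpc.mountd/ {print $1}'`
Sending the same signal again typically toggles the logging back off, but check the rpc.mountd man page on your release to confirm.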
However, just knowing who is sending in MOUNT requests won't help you if there are clients in your network that have previously mounted the filesystem from the server. Once they have the root filehandle of the exported filesystem they no longer need to talk to the rpc.mountd daemon, and can simply send NFS requests for those filesystems directly to the nfsd daemons/threads.
If your server is using NFS/TCP then one way to possibly track which clients are accessing the server would be to periodically use the netstat command looking for clients connected to port 2049 on your server. For example:
# netstat -an | grep 2049
tcp        0      0  *.2049                 *.*                    LISTEN
tcp        0      0  15.43.209.141.2049     15.43.214.58.1021      ESTABLISHED
This output shows a client with IP address 15.43.214.58 is connected to my server at port 2049 (nfs). This trick only works for TCP clients, not UDP clients.
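To boil that down to a list of client addresses, something along these lines should work (a sketch; the field positions match the netstat output above, and the sed strips the client's port number):
# netstat -an | awk '$6 == "ESTABLISHED" && $4 ~ /\.2049$/ {print $5}' | sed 's/\.[0-9]*$//' | sort -u
Run it periodically (from cron, say) and merge the results to build up a picture of your TCP clients over time.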
One final suggestion is to monitor the contents of the /var/statmon/sm directory on the server. This directory maintains a list of clients that perform NFS file locking with the server. So, this directory would only contain the names of systems that are not only mounting filesystems and sending NFS requests, but also locking files via these NFS filesystems. It's not very likely that this directory will tell you about clients that you couldn't already know about from using debug rpc.mountd logging, netstat commands, or a packet sniffer, but I thought I'd mention it anyway.
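A simple way to watch that directory for new client names (a sketch using standard tools; comm -13 prints only the names that are new since the snapshot):
# ls /var/statmon/sm | sort > /tmp/sm.snapshot
...wait a while...
# ls /var/statmon/sm | sort | comm -13 /tmp/sm.snapshot -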
As I said, there is no one foolproof method (aside from a packet sniffer) to definitively know which clients are hitting your NFS server, but hopefully these suggestions will help you figure out most of them.
Regards,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

12-14-2005 04:14 AM
Re: rmtab contents
I gave you an extra 8 points for your very detailed 2nd answer. Thank you.
I was checking the raw /etc/rmtab file by doing a cat on it. Earlier in the day I am sure it had about 30 entries in it; later it was empty. I did not touch it, and since I am the only sysadmin, no one else did either, so I am puzzled by that, and also by why the file was quite stable for 2 years yet has been growing ever since I deleted the original; it is up to 180 entries now. I will do as you suggested, i.e. trim it every so often to keep the size down. Thanks again for your help.