Enhanced AutoFS mapfiles
10-04-2004 02:38 AM
I have the enhanced automountd running.
# UNIX95= ps -x -C automountd
PID TTY TIME CMD
821 ? 04:46 /usr/lib/netsvc/fs/enh_autofs/automountd
From the documentation and other threads here I gathered that the enhanced AutoFS automounter is capable of RPC over TCP, and should therefore respect the proto=tcp mount option.
I tried this as a global setting for all direct maps in the auto_master file.
But it looks as if I still have some syntactic twist in it
# cat /etc/auto_master
/net -hosts -nosuid,soft,nobrowse
/- /etc/auto_direct -proto=tcp
#/nfs /etc/auto_ansic
because when I tell automount to reread the maps it complains
# /usr/sbin/automount
automount: mount /sapmnt/Z01: Invalid argument
Huh, the mentioned direct mount used to work before
# grep sapmnt /etc/auto_direct
/sapmnt/Z01 lena:/export/sapmnt/Z01
The share is also exported from lena to us, and is in fact still mounted
# showmount -e lena|grep sapmnt|grep -c $(uname -n)
1
# bdf -t nfs
Filesystem kbytes used avail %used Mounted on
lena:/export/sapmnt/Z01
1048576 603632 441504 58% /sapmnt/Z01
"Now to something completely different",
I wonder whether I should use direct or indirect maps for performance reasons, wherever the latter is possible (viz. where they don't cover up directory branches)?
Rgds.
Ralph
Solved!
10-04-2004 03:45 AM
Re: Enhanced AutoFS mapfiles
cat /etc/auto_master
#/net -hosts -nosuid,soft
/- /etc/auto.direct proto=tcp
Remove the - in front of proto
Rgds...Geoff
10-04-2004 04:33 AM
Solution
The syntax of your /etc/auto_master map is correct. The reason you got the "Invalid argument" error is that the filesystem in question is currently mounted, and AutoFS is trying to change the mount options of a currently mounted filesystem. This is not possible. You need to wait until AutoFS unmounts the filesystem before changing its mount options. Once /sapmnt/Z01 is unmounted you should be able to add the "-proto=tcp" option back to the master map and issue the /usr/sbin/automount -v command again with success.
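For reference, the recovery sequence described above might look like this on the client (a sketch only; /sapmnt/Z01 is the mount from this thread, and the manual umount assumes no process is still using the share):

```shell
# Check whether anything is still holding the mount
fuser -cu /sapmnt/Z01

# If it is idle, unmount it manually instead of waiting for AutoFS
umount /sapmnt/Z01

# Now reread the maps with the -proto=tcp option in place
/usr/sbin/automount -v
```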
One question about using TCP - since Enhanced AutoFS only runs on 11i and higher, and since TCP is the default protocol used by 11i and higher clients, why do you feel the need to specify this option to get TCP semantics? You should be getting an NFS/TCP mount without having to specify "proto=tcp".
The way the NFS client is designed, it should try TCP first and if it is unavailable from the server it will try UDP. If you specify "proto=tcp" and the server doesn't support TCP your NFS mount will fail and you won't have an NFS mounted filesystem with UDP. Is this what you want, or are you trying to force TCP semantics from a server that doesn't support TCP?
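The difference can be illustrated with a manual mount (a sketch, not from the original post; the server name lena is reused from this thread):

```shell
# Default behavior: the client negotiates TCP first and silently
# falls back to UDP if the server lacks NFS over TCP
mount -F nfs lena:/export/sapmnt/Z01 /sapmnt/Z01

# Forced TCP: the mount fails outright if the server has no NFS/TCP,
# leaving you with no mounted filesystem at all
mount -F nfs -o proto=tcp lena:/export/sapmnt/Z01 /sapmnt/Z01
```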
Regarding direct vs. indirect maps, there used to be a performance difference with the old AutoFS, but I don't know of any differences between map types with the Enhanced AutoFS. If there is some specific concern with direct vs. indirect, please let me know.
Regards,
Dave
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

10-04-2004 11:52 PM
Re: Enhanced AutoFS mapfiles
You were right.
Once the automounter had to remount the share it did so using TCP, at least judging from nfsstat's mount dump
# nfsstat -m|sed 's/Addr.*//'
/sapmnt/Z01 from lena:/export/sapmnt/Z01 (
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
/audit from lena:/export/audit (
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
Yet, I cannot identify the protocol from the mnttab entry alone
# grep nfs /etc/mnttab
lena:/export/sapmnt/Z01 /sapmnt/Z01 nfs nodevs,rsize=32768,wsize=32768,NFSv3,dev=5f000005 0 0 1096045941
lena:/export/audit /audit nfs nodevs,rsize=32768,wsize=32768,NFSv3,dev=5f000048 0 0 1096901358
But there is a TCP connection established to the NFS server's nfsd port
# netstat -anfinet|awk '$5~/\.2049/{print$1,$NF}'
tcp ESTABLISHED
I haven't yet been able to boot an enhanced autofs enabled kernel on the nfs server for the lack of allowed downtime.
Thus, how can I verify that the running nfs server is capable of tcp transmissions
(well, it surely must be, otherwise the client's nfsstat would lie)?
I also wasn't aware that tcp is the preferred mode of transport in ENHAUTOFS.
Under this premise you're right that stating the proto=tcp mount option could even be detrimental if TCP transport weren't available on the server side, as you correctly stress.
Thus I had better omit it altogether.
Although the client where I was already able to install and run ENHAUTOFS has only been lightly loaded, I can see from nfsstat that fewer retransmissions and badxids have accumulated since I last reset the counters.
# nfsstat -cr
Client rpc:
Connection oriented:
calls badcalls badxids
240463 3 3
timeouts newcreds badverfs
3 0 0
timers cantconn nomem
0 0 0
interrupts
0
Connectionless oriented:
calls badcalls retrans
0 0 0
badxids timeouts waits
0 0 0
newcreds badverfs timers
0 0 0
toobig nomem cantsend
0 0 0
bufulocks
0
This looks very promising to me.
It looks as if TCP, even with its protocol overhead, outperforms UDP thanks to fewer packet losses and thus fewer retransmissions.
I think I will be upgrading all NFS clients and the clustered NFS servers to ENHAUTOFS.
On the latter I will only have to take care that the nfsconf file reads this entry
AUTOMOUNTD_OPTIONS=-L
if I have understood the docs and your contributions correctly.
Apart from these measures I will definitely have to install the latest QualityPack patch bundle.
The last installed dates back to 2002.
I also will add the latest cumulative NFS and STREAMS patches.
Are there others you'd suggest?
As for the usage of direct vs. indirect maps,
I think I read in the ENHAUTOFS docs that indirect maps allow a client to stat (e.g. ls) mounts without actually mounting them (this can be toggled off with the mount option nobrowse).
Should there be further need for NFS performance tuning, I will check against the recommendations you gave in your guide (e.g. kernel tunables, NFS mount options).
As for the required number of the various NFS auxiliary daemons you also mention in your guide, I'm still a bit puzzled.
On the one hand you say that there are situations where the number of the clients' biods needs to be raised; on the other hand you reason that there could also be cases where the number should be reduced, even to zero.
I guess this, as with certain mount options, requires thorough testing?
10-05-2004 03:02 AM
Re: Enhanced AutoFS mapfiles
You wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# nfsstat -m|sed 's/Addr.*//'
/sapmnt/Z01 from lena:/export/sapmnt/Z01 (
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
/audit from lena:/export/audit (
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,rsize=32768,wsize=32768,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)
Yet, I cannot identify the protocol from the mnttab entry alone
# grep nfs /etc/mnttab
lena:/export/sapmnt/Z01 /sapmnt/Z01 nfs nodevs,rsize=32768,wsize=32768,NFSv3,dev=5f000005 0 0 1096045941
lena:/export/audit /audit nfs nodevs,rsize=32768,wsize=32768,NFSv3,dev=5f000048 0 0 1096901358
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
On HP-UX, we don't put every single mount option in the /etc/mnttab entry for the mounted filesystem. The best indication of which protocol a specific NFS filesystem is using is the nfsstat -m output you collected earlier.
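As a small, self-contained illustration of pulling the transport out of an nfsstat -m dump with standard tools (the sample text below is invented, but mirrors the dump quoted above; the awk pipeline itself is plain POSIX shell):

```shell
# Hypothetical nfsstat -m output: two mounts, one tcp, one udp
sample='/sapmnt/Z01 from lena:/export/sapmnt/Z01
 Flags: vers=3,proto=tcp,auth=unix,hard,intr
/audit from lena:/export/audit
 Flags: vers=3,proto=udp,auth=unix,hard,intr'

# Print "mountpoint protocol" for every mount in the dump
result=$(printf '%s\n' "$sample" | awk '
  /^\// { mnt = $1 }                      # line naming the mount point
  /proto=/ {                              # Flags line: pull out proto=...
    match($0, /proto=[a-z]+/)
    print mnt, substr($0, RSTART + 6, RLENGTH - 6)
  }')
printf '%s\n' "$result"
```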
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
I haven't yet been able to boot an enhanced autofs enabled kernel on the nfs server for the lack of allowed downtime.
Thus, how can I verify that the running nfs server is capable of tcp transmissions
(well, it surely must be, otherwise the client's nfsstat would lie)?
I also wasn't aware that tcp is the preferred mode of transport in ENHAUTOFS.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Enhanced AutoFS has absolutely nothing to do with whether the client uses UDP or TCP for a mount point. It is the NFS client itself that determines which protocol is used by default, unless you specifically override the defaults by using the "proto=" option. By default, an 11i client will always request to use TCP first and UDP second - regardless of whether the NFS filesystem is mounted manually or via AutoFS.
The only exception to this rule was when the OLD automounter was used (not the ONC 1.2 AutoFS, the legacy automount daemon). That old automounter only supported UDP and NFS PV2, so if you used that old automounter that is all you would get - PV2/UDP. Once AutoFS was added to HP-UX, you had the ability to use whatever protocols and versions the NFS client supported.
Of course, in order to get a TCP mount, both client and server would have to support it. HP added support for NFS/TCP via patches in 11.0 and integrated support for NFS/TCP in 11i, so it has been around for a long time (4.5 years).
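Regarding the earlier question of verifying a server's TCP capability without downtime, one way is to query its portmapper from the client (a sketch; lena is the server from this thread):

```shell
# List the transports the server's NFS service (program 100003) is
# registered on; a "tcp" row means NFS over TCP is available
rpcinfo -p lena | grep 100003

# Or make an actual RPC null call to the nfs program over TCP
rpcinfo -t lena 100003
```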
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Under this premise you're right that stating the proto=tcp mount option could even be detrimental if TCP transport weren't available on the server side, as you correctly stress.
Thus I had better omit it altogether.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I agree with this. If both client and server support TCP then you will get a TCP mount, even if you don't specify "proto=tcp" in your maps. If the server doesn't support TCP, you will end up with a UDP mount, which I believe most customers would find more desirable than a failing mount.
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Although the client where I was already able to install and run ENHAUTOFS has only been lightly loaded, I can see from nfsstat that fewer retransmissions and badxids have accumulated since I last reset the counters.
# nfsstat -cr
Client rpc:
Connection oriented:
calls badcalls badxids
240463 3 3
timeouts newcreds badverfs
3 0 0
timers cantconn nomem
0 0 0
interrupts
0
Connectionless oriented:
calls badcalls retrans
0 0 0
badxids timeouts waits
0 0 0
newcreds badverfs timers
0 0 0
toobig nomem cantsend
0 0 0
bufulocks
0
This looks very promising to me.
It looks as if TCP, even with its protocol overhead, outperforms UDP thanks to fewer packet losses and thus fewer retransmissions.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
That is clearly a benefit of TCP over UDP - less chance of retransmitting full NFS requests due to temporary packet loss.
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
I think I will be upgrading all NFS clients and the clustered NFS servers to ENHAUTOFS.
On the latter I will only have to take care that the nfsconf file reads this entry
AUTOMOUNTD_OPTIONS=-L
if I have understood the docs and your contributions correctly.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Correct. That would be my recommendation on the cluster servers.
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Apart from these measures I will definitely have to install the latest QualityPack patch bundle.
The last installed dates back to 2002.
I also will add the latest cumulative NFS and STREAMS patches.
Are there others you'd suggest?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I'd recommend getting the latest ONC, STREAMS, Transport, LAN Common, and whatever network interface patch (GiG, 100BT, etc.). If these are all contained in a quality pack bundle then that would be a good way to go.
You also wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
As for the usage of direct vs. indirect maps,
I think I read in the ENHAUTOFS docs that indirect maps allow a client to stat (e.g. ls) mounts without actually mounting them (this can be toggled off with the mount option nobrowse).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Correct. The browsability feature is specific to indirect maps. This would be one benefit of indirect over direct, provided you use browsability properly (i.e. only look as far as the parent directory - once you cd into the actual indirect mount point the mount will occur).
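Hypothetical map fragments illustrating this (the /nfs path, map name, and server entries are invented for the example):

```shell
# /etc/auto_master: an indirect map mounted under /nfs,
# with browsability turned off via -nobrowse
/nfs    /etc/auto_nfs   -nobrowse

# /etc/auto_nfs: each key becomes a subdirectory of /nfs.
# With browsability on (no -nobrowse), "ls /nfs" would list the
# keys without mounting them; cd'ing into one triggers the mount.
apps    lena:/export/apps
docs    lena:/export/docs
```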
Finally, :) You wrote:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
Should there be further need for NFS performance tuning, I will check against the recommendations you gave in your guide (e.g. kernel tunables, NFS mount options).
As for the required number of the various NFS auxiliary daemons you also mention in your guide, I'm still a bit puzzled.
On the one hand you say that there are situations where the number of the clients' biods needs to be raised; on the other hand you reason that there could also be cases where the number should be reduced, even to zero.
I guess this, as with certain mount options, requires thorough testing?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Every recommendation I make in my paper and book is merely a suggested starting point for users to do further testing. The number of biods is a perfect example: in some environments 4 will be the right number, in others 16, in others 0. The only way to know for sure is to test with different values and see which one works best for your applications.
Let me know if you have any other NFS questions.
Regards,
Dave