NFS problem (?) with CATIA V4
09-23-2004 02:54 AM
We have several HP-UX workstations (11.0 and 11.11) accessing an HP-UX 11.11 NFS file server (rp7410).
Sometimes a save operation from CATIA V4 fails (from any workstation). The error message is: Device full.
No local or remote volume is full (70% at most).
The problem appears randomly. When we retry several times, the save eventually succeeds.
Has anyone ever met such a situation?
Is there a solution?
Thanks,
EM
Solved!
09-23-2004 03:00 AM
Re: NFS problem (?) with CATIA V4
How much space do you have on the device where /tmp resides?
Does the save then work without any changes?
Or do you change, for example, the file name or the path where you want to save?
CATIA has problems with paths that are too long...
No more ideas at the moment.
Volkmar
09-23-2004 03:10 AM
Re: NFS problem (?) with CATIA V4
Do you have any disk quotas set on any of the local or remote filesystems for the user running Catia? Although exceeding a disk quota would likely return a different error than "Device full", the application can certainly report the error any way it desires.
Figured it was worth checking before diving into NFS debugging.
Regards,
Dave
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

09-23-2004 03:38 AM
Re: NFS problem (?) with CATIA V4
There are no quotas set on the filesystems.
/tmp and /var/tmp are only 2% used. They are large enough to hold ten times the largest file we use.
We suspect the error message displayed by CATIA comes from CATIA being unable to store the file (a lock problem?).
We have checked all logs (syslog, rpc.lockd, nettl) on the server and the clients. There are no errors.
lanadmin reports 0 errors on the server and the clients,
and so do the switches.
That's why we suspect NFS, but we are not really convinced!
EM
09-23-2004 03:42 AM
Re: NFS problem (?) with CATIA V4
If you suspect file locking is the problem, have you tried using a program that simply does file locking to ensure that files can be locked successfully from the client to the NFS filesystem? If you need a file locking program I can provide one.
By the way, looking in the /var/adm/rpc.lockd.log and /var/adm/rpc.statd.log will probably not show you anything unless debug logging is enabled for these daemons or a "hard" error occurs.
If you determine that NFS file locking is failing, I can help you troubleshoot that.
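For illustration, such a lock test can be sketched as follows. This is a hypothetical Python version, not the program Dave offers in the thread; it attempts a non-blocking exclusive record lock on a path you supply (e.g. a file on the NFS mount) and reports the errno if the lock fails:

```python
import errno
import fcntl
import os
import sys

def try_lock(path):
    """Attempt an exclusive record lock on 'path' and report the outcome.

    Returns 0 on success, otherwise the errno from the failed lock call.
    """
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Non-blocking exclusive lock over the whole file (fcntl-style record lock).
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        print("lock acquired on %s" % path)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return 0
    except OSError as e:
        # On HP-UX, errno 46 (ENOLCK) means the system record lock table is full.
        print("lock failed: errno=%d (%s)" % (e.errno, os.strerror(e.errno)))
        return e.errno
    finally:
        os.close(fd)

if __name__ == "__main__":
    sys.exit(try_lock(sys.argv[1] if len(sys.argv) > 1 else "/tmp/locktest.dat"))
```

Running it against a file on the NFS filesystem from each client would confirm whether locks can be granted at all, independently of CATIA.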
Regards,
Dave
09-23-2004 03:56 AM
Re: NFS problem (?) with CATIA V4
I have activated debug mode for rpc.lockd (kill -17). I am not used to reading that kind of output.
How do I spot a problem?
I really appreciate your help in troubleshooting this problem!
So, what should I do now?
EM
09-23-2004 04:00 AM
Re: NFS problem (?) with CATIA V4
My suggestion would be to enable debug logging on both rpc.lockd and rpc.statd on both the client and the server via the kill -17 procedure and then reproduce the problem. Once you've reproduced it, send the rpc.lockd and rpc.statd on both systems another kill -17 to turn logging off.
You should have a /var/adm/rpc.lockd.log and /var/adm/rpc.statd.log file from both the client and the server. If you're not familiar with reading them, you can either post them to this thread or email them to me and I will tell you what they show.
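The toggle itself can be scripted. A hypothetical Python sketch, assuming the kill -17 procedure described above (the daemon pids would come from something like ps -ef | grep rpc.lockd):

```python
import os

# Per the procedure above: on HP-UX, sending signal 17 to rpc.lockd or
# rpc.statd toggles their debug logging on or off.
DEBUG_TOGGLE_SIG = 17

def toggle_debug_logging(pids):
    """Send the debug-toggle signal to each daemon pid (rpc.lockd, rpc.statd)."""
    for pid in pids:
        os.kill(pid, DEBUG_TOGGLE_SIG)
```

Call it once with the four pids (lockd and statd on client and server) to turn logging on, reproduce the problem, then call it again to turn logging off.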
My email address is in my forum profile.
Regards,
Dave
09-23-2004 08:56 PM
Re: NFS problem (?) with CATIA V4
I activated the debug traces just before a workstation named cao74 got the error message. Our server is named zeus.
Attached are the four trace log files.
Thanks again for your help,
EM
09-24-2004 04:01 AM
Solution
I've looked at your log files and I believe I know what the problem is.
First of all, *and this is not the problem*, the rpc.statd log files from both systems look normal. The server's rpc.statd log file shows that there are several systems it is unable to communicate with, so it ends up waking up every 15 seconds to try to communicate with them, which is a waste of effort if these systems no longer exist.
By looking at the log file you can see the systems rpc.statd is trying to contact every 15 seconds by looking for entries such as these:
statd_call_statd notifying system rs23b about zeus
statd_call_statd notifying system rs42b about zeus
statd_call_statd notifying system rs43b about zeus
statd_call_statd notifying system rs40b about zeus
statd_call_statd notifying system rs25b about zeus
statd_call_statd notifying system rs26b about zeus
statd_call_statd notifying system rs28b about zeus
statd_call_statd notifying system cao26 about zeus
statd_call_statd notifying system cao19 about zeus
statd_call_statd notifying system ltspc0672 about zeus
You should check to ensure these systems are available, and if they are obsolete systems that are no longer in your environment you should schedule a time to remove the entries from /var/statmon/sm.bak on the server. Before removing these entries you'll need to stop rpc.lockd and rpc.statd. Be sure to never remove entries from /var/statmon/sm!
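A quick way to spot candidates for that cleanup (a hypothetical Python sketch; rpc.statd keeps one file per monitored peer host under /var/statmon/sm.bak):

```python
import os

def stale_statmon_entries(sm_bak_dir, live_hosts):
    """List sm.bak entries for hosts that are no longer in the environment.

    Each file in sm.bak is named after a peer host rpc.statd is trying to
    notify; any name not in the set of known live hosts is a candidate for
    removal (with rpc.lockd/rpc.statd stopped first, and never touching
    the sm directory itself).
    """
    return sorted(h for h in os.listdir(sm_bak_dir) if h not in live_hosts)
```

Comparing that list against your inventory tells you which entries (rs23b, rs42b, etc.) are safe to schedule for removal.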
Ok, now on to your specific problem.
The rpc.lockd log file from client cao74 shows it is holding many locks with zeus and obtaining new locks during your reproduction, so clearly these two systems can communicate without a problem and file locking works between them.
The problem comes at this point in the log file:
09.24 10:46:37 cao74 pid=894 rpc.lockd
NLM_PROG+++ version 4 proc 12
09.24 10:46:37 cao74 pid=894 rpc.lockd
/usr/sbin/rpc.lockd: msg reply(2) to procedure(12)
09.24 10:46:37 cao74 pid=894 rpc.lockd
nlm_res_routine(400b9e78)
09.24 10:46:37 cao74 pid=894 rpc.lockd
enter cont_lock
09.24 10:46:37 cao74 pid=894 rpc.lockd
klm_reply: stat=2
09.24 10:46:37 cao74 pid=894 rpc.lockd
release_le: pre_le not free yet
09.24 10:46:37 cao74 pid=894 rpc.lockd
release_le: pre_fe not free yet
This shows the client is receiving a reply from the server to an earlier lock request and the server is denying the lock "stat=2". Looking at the server's log file for the same lock request, we see this:
09.24 10:46:44 zeus pid=1878 rpc.lockd
NLM_PROG+++ version 4 proc 7
09.24 10:46:44 zeus pid=1878 rpc.lockd
/usr/sbin/rpc.lockd: range(0, 0)
09.24 10:46:44 zeus pid=1878 rpc.lockd
enter proc_nlm_lock_msg(4011d148)
09.24 10:46:44 zeus pid=1878 rpc.lockd
enter local_lock
09.24 10:46:44 zeus pid=1878 rpc.lockd
/usr/sbin/rpc.lockd: fcntl (local_lock) : errno = 46!
09.24 10:46:44 zeus pid=1878 rpc.lockd
nlm4_reply: (cao74, 12), result = 2
09.24 10:46:44 zeus pid=1878 rpc.lockd
call_udp[cao74, 100021, 4, 12] returns 0
The critical line is:
/usr/sbin/rpc.lockd: fcntl (local_lock) : errno = 46!
This indicates that rpc.lockd issued an fcntl() call on the file, in order to place the requested lock on the file, and the fcntl() call returned an error 46. Looking at /usr/include/sys/errno.h for 46 we see:
#define ENOLCK 46 /* System record lock table was full */
The server's file lock table is full, which explains why the error would happen at different times and on different clients, as the problem would likely only occur when multiple clients are locking files on the server at the same time.
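As an aside, those telltale lines are easy to pull out of a debug log mechanically. A hypothetical Python sketch, keyed to the message formats shown above:

```python
import re

def summarize_lockd_log(text):
    """Pull fcntl errnos and peak lock-table usage out of rpc.lockd debug output."""
    # Lines like: /usr/sbin/rpc.lockd: fcntl (local_lock) : errno = 46!
    errnos = [int(m.group(1))
              for m in re.finditer(r"fcntl \(local_lock\) : errno = (\d+)", text)]
    # Lines like: used_le=188, used_fe=112, used_me=53
    usage = [int(m.group(1)) for m in re.finditer(r"used_le=(\d+)", text)]
    return {"fcntl_errnos": errnos, "peak_used_le": max(usage) if usage else 0}
```

Feeding it the server's rpc.lockd.log would surface every errno-46 failure and the peak number of lock entries in use, which is the figure to compare against nflocks.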
Looking at the number of file locks in the server's queue at the time of the error we see:
used_le=188, used_fe=112, used_me=53
This means the server was already holding 188 locks when it tried to obtain the new lock and failed.
My guess is the server's "nflocks" kernel parameter is sized too small for your environment. On HP-UX 11.11 this parameter defaults to 200, which is way too small for a typical NFS server. In fact, in HP-UX 11i v2 (i.e. 11.23) we raised the default to 4096.
What is your server's nflocks variable set to today? If it is set to the default of 200, I recommend increasing it to a value like 4096. You will need to rebuild your kernel and reboot the server for the new value to take effect. (On HP-UX 11i v2 systems, this variable can be tuned dynamically without requiring a reboot.)
Since you will be rebooting your server to increase nflocks, you might want to take this opportunity to clean out any obsolete files from your /var/statmon/sm.bak directory so that rpc.statd will no longer try to contact these systems every 15 seconds.
Let me know if you have any questions about my findings in the log files or my recommendations.
Regards,
Dave
09-26-2004 06:52 PM
Re: NFS problem (?) with CATIA V4
The nflocks parameter is set to the default (200) on our server.
We plan to restart the server tonight and change nflocks to 4096 as you recommend.
Once again, thank you very much for your explanations.
Best regards,
EM