
File table almost full

SOLVED
Gary Hines
Advisor

File table almost full

I was getting a periodic "File table almost full" warning from glance. When I run sar -v (now after the fact), I'm showing about 4000-5500/7659 in the file-sz column. Is there a way to see what files are open or get an idea of what is happening? I know I need to increase nfile, but this has never happened before, and I can't think of anything out of the ordinary that would cause more open files lately. It would be nice if I could get a clue as to what is using up the file table entries. Thanks for any help.
4 REPLIES
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: File table almost full

You can download and install lsof from:

http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/lsof-4.77/

lsof beats the standard utility, fuser, hands down.
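For example, a quick way to see which processes hold the most open file descriptors (just a sketch; run it as root to see every process, and note that lsof also counts memory-mapped files, so the totals are approximate):

    # Top consumers: command name and PID, busiest first
    lsof | awk '{print $1, $2}' | sort | uniq -c | sort -rn | head -20

    # Everything a single suspect process has open (replace <PID>)
    lsof -p <PID>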
If it ain't broke, I can fix that.
James R. Ferguson
Acclaimed Contributor

Re: File table almost full

Hi Gary:

Drilling down into various processes with 'glance' or with 'lsof' may offer some insight.

However, depending upon the nature of your application, the number of users, etc., it is not unlikely that you have simply reached the 'nfile' ceiling.

You don't describe your operating system release or your environment. The kernel parameter 'nfile' controls the number of slots available in the system-wide open file table. Each entry consumes very little memory, so inflating 'nfile' is neither costly nor detrimental to performance.

Ignore any formulae associated with 'nfile' and simply set it to a higher value. Be generous; on systems with more than 1GB of memory, the default is 65536.
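If it helps, here is a rough sketch of checking and raising 'nfile' on the 11.x releases that use kmtune (SAM walks you through the same steps, later releases use kctune instead, and the 20000 below is only an illustrative value). Since 'nfile' is a static tunable on these releases, a kernel rebuild and reboot are required:

    # Check the configured value and current usage first
    kmtune -q nfile              # configured maximum
    sar -v 5 3                   # file-sz column shows used/configured

    # Raise it and rebuild the kernel
    kmtune -s nfile=20000        # illustrative value only
    mk_kernel                    # build the new kernel under /stand/build
    kmupdate                     # install it at the next boot
    shutdown -ry 0               # reboot to activate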

Regards!

...JRF...
Bill Hassell
Honored Contributor

Re: File table almost full

And as a note, nfile is not just for disk files; it covers all open files, including device files as well as network port connections. Glance or lsof will identify the files currently open for each process, but unless you wrote the code, there is not much to tell you whether this is normal or unusual. If you really need to track which programs are contributing to the use of file handles, you could run sar -v along with ps -e repeatedly for several days (careful about the impact on production), then analyze the actual usage versus the processes that are running.
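A minimal sampling loop along those lines (just a sketch; the log path is arbitrary, and you would normally start it with nohup or from cron and prune the log afterwards):

    # Record file-table usage plus the process list every 10 minutes so a
    # jump in the file-sz column can be matched to whatever was running.
    while true
    do
        date                  >> /var/tmp/filetable.log
        sar -v 1 1 | tail -1  >> /var/tmp/filetable.log   # last line is the sample
        ps -ef                >> /var/tmp/filetable.log
        sleep 600
    done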

Or simply make nfile MUCH larger, perhaps 15000 or even 20000 (it's easier). There is no practical limit to the size of nfile (values in the millions have been reported on production systems). Systems always seem to grow, so expect that nproc, nfile, nflocks, and perhaps other items like semaphore and shared memory parameters may need to be adjusted. If you have some runaway process(es), you'll likely see the load with other tools.


Bill Hassell, sysadmin
Gary Hines
Advisor

Re: File table almost full

Thanks for all the advice. I am running HP-UX 11.11 with 2 GB of RAM, but we are upgrading soon, so the advice on the parameters is greatly appreciated.

I've also downloaded lsof, and I'm going to play with it a bit to see whether it turns up anything.

Thanks again for all the help.