
very large nfile setting

Tim Rotunda
Frequent Advisor

very large nfile setting

Wondering who has nfile set above 65k? Anyone with nfile set to 2M+? Why?
Thanks,
Tim
8 REPLIES
Steven E. Protter
Exalted Contributor

Re: very large nfile setting

Shalom Tim,

I have not done that. There are probably limits.

Setting nfile to 2 million means that up to 2 million file handles could be open at once. It would have to be a very large, multi-user system with many, many users, or lots of database instances running, to need such a high setting.

Any reason why you are considering such a setting?

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
James R. Ferguson
Acclaimed Contributor

Re: very large nfile setting

Tim:

The value you need would be entirely dependent upon your specific environment. For systems with more than 1GB of memory, the default is 64K. The maximum value is constrained to a 32-bit integer. The memory needed by the kernel to support the 'nfile' table is quite small, so the overhead is also very small.

The value of 'nfile' must be equal to or greater than twice the value of 'maxfiles_lim' (the per-process open file limit).
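As a quick sanity check, that relationship can be scripted. The values below are hypothetical placeholders; on HP-UX you would substitute the output of `kctune nfile maxfiles_lim`:

```shell
#!/bin/sh
# Hypothetical values -- on a real system, read them with: kctune nfile maxfiles_lim
nfile=65536
maxfiles_lim=4096

# The rule: nfile >= 2 * maxfiles_lim
min_nfile=$((2 * maxfiles_lim))
if [ "$nfile" -ge "$min_nfile" ]; then
    echo "OK: nfile ($nfile) >= 2 * maxfiles_lim ($min_nfile)"
else
    echo "TOO LOW: nfile ($nfile) < 2 * maxfiles_lim ($min_nfile)"
fi
```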

Regards!

...JRF...
Alzhy
Honored Contributor

Re: very large nfile setting

Ours is set to 100,000: a 16-CPU SuperDome nPar with 80GB of memory running a single instance of Oracle serving about 2000 users. Remember, nearly everything in UNIX is accessed through a file; that's why the need can be so large.

I can only surmise that environments with nfile set to 2M+ are probably one of the following:

1.) An even bigger DB server doing OLTP
2.) A really big server acting as a fileserver (NFS or SAMBA)
3.) A huge web or application server serving many, many users and connections at the same time.

Hakuna Matata.
Bill Hassell
Honored Contributor
Solution

Re: very large nfile setting

I know of a couple of large systems that have nfile set at 1.5 and 2 million. For a 64-bit system, there is no real limit except perhaps the maximum kernel data area (but reaching that might take an nfile of dozens of millions). Why? Some very, very bad legacy software that tried to avoid using a real database and instead created hundreds of millions of small files, then wrote several hundred programs that open tens of thousands of files at the same time.

The fact that HP-UX allows millions of files to be open at the same time does NOT mean it is a good idea. There are two questions when these extreme numbers are encountered: are the millions of open files expected behavior, or the result of badly coded runaway programs? And if they are expected, is the price of the design worth the administrative costs (including program maintenance/debugging, slow backup speeds due to millions of small files, etc.)?


Bill Hassell, sysadmin
Tim Rotunda
Frequent Advisor

Re: very large nfile setting

This is an rx4640 with 2 MX2 CPUs and 64GB of RAM.

The load is 3 PostgreSQL database environments/clusters, each with ~180 server processes, and each set of 180 processes serves ~2000 relations represented by ~2000 files.

Certainly OLTP, with a max user count of 900 across all three environments, 300 for each.
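Rough worst-case arithmetic for this workload (a hypothetical upper bound that assumes every server process holds every file open at once):

```shell
#!/bin/sh
# Worst-case file-table demand for the workload described above
clusters=3     # PostgreSQL environments/clusters
procs=180      # server processes per cluster
files=2000     # files (relations) potentially open per process

worst_case=$((clusters * procs * files))
echo "Worst-case simultaneous open files: $worst_case"
```

In practice the backends will not keep every relation open at the same time, so this only bounds the requirement from above.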

Comments?
Thanks,
Tim
Alzhy
Honored Contributor

Re: very large nfile setting

Setting nfile to a very large value does not really impact memory immediately or at boot time. Its impact is the potential for your system to run amok (along with max procs, etc.).

The reason we set kernel parameters governing how many processes can run, the number of file handles, file locks, etc., is to lessen the potential for a system to go wild, or if it ever does go wild, to increase the chance that an admin can still get into the system and look around.

I suggest you use "lsof" to figure out and study what your normal nfile setting should be.
Hakuna Matata.
Carlos Roberto Schimidt
Regular Advisor

Re: very large nfile setting

Hi,

By monitoring your system with sar, it is possible to know how much of nfile is being used.

If you have HP-UX 11.23, try using kcusage to monitor your nfile parameter.

Remember that nfile is used in the formulas for several other kernel parameters.

Don't set the nfile parameter to a high value that you will never use.

Schimidt
Bill Hassell
Honored Contributor

Re: very large nfile setting

With the large number of server processes and user processes, nfile in the 1 million range may be reasonable. But that assumes that all 2000 files will be opened at the same time in each process. Without detailed code information, you could certainly set nfile to 100K to 500K and start monitoring with Glance (sar -v will work too but on a system this big, Glance is mandatory). As you see nfile usage nearing 70%-80% of maximum, look for a maintenance window to bump it up.
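That 70%-80% trigger can be expressed as a small check. The used/maximum figures below are hypothetical stand-ins for whatever Glance or `sar -v` actually reports:

```shell
#!/bin/sh
# Hypothetical readings -- substitute the file-table figures from Glance or sar -v
nfile_max=500000
nfile_used=360000

pct=$((100 * nfile_used / nfile_max))
echo "nfile utilization: ${pct}%"
if [ "$pct" -ge 70 ]; then
    echo "Approaching the limit: schedule a maintenance window to raise nfile"
fi
```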

Other kernel params: maxfiles should probably be set to 2048 and maxfiles_lim to 4096 in anticipation of lots of open files per process. maxuprc may need to be bumped up to several thousand if all the processes are owned by one user.


Bill Hassell, sysadmin