Operating System - HP-UX > very large nfile setting
03-14-2006 12:16 AM
Thanks,
Tim
Solved! Go to Solution.
03-14-2006 12:24 AM
Re: very large nfile setting
I have not done that. There are probably limits.
Setting nfile to 2 million means that up to 2 million file handles could be open at once. It would have to be a very large, multi-user system with many, many users, or with lots of database instances running, to need such a high setting.
Any reason why you are considering such a setting?
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-14-2006 12:26 AM
Re: very large nfile setting
The value you need is entirely dependent upon your specific environment. For systems with more than 1GB of memory, the default is 64K. The maximum value is constrained to a 32-bit integer. The memory needed by the kernel to support the 'nfile' table is quite small, so the overhead is also very small.
The value of 'nfile' must be equal to or greater than twice the value of 'maxfiles_lim' (the per-process open file limit).
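That constraint is easy to sanity-check with a tiny sketch (the values below are just illustrative: the 64K default mentioned above against a 4096 hard limit):

```shell
# Check the documented constraint: nfile >= 2 * maxfiles_lim.
check_nfile() {
    # $1 = nfile, $2 = maxfiles_lim
    if [ "$1" -ge $((2 * $2)) ]; then
        echo OK
    else
        echo TOO_SMALL
    fi
}

check_nfile 65536 4096      # 64K default vs. a 4096 hard limit -> OK
```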
Regards!
...JRF...
03-14-2006 12:34 AM
Re: very large nfile setting
I can only surmise that environments with nfile set to 2M+ will probably be one of the following:
1.) An even bigger DB server doing OLTP
2.) A really big server acting as a fileserver (NFS or Samba)
3.) A huge web or application server capable of serving many, many users and connections at the same time.
03-14-2006 03:40 AM
The fact that HP-UX allows millions of files to be open at the same time does NOT mean it is a good idea. There are two questions when these extreme numbers are encountered: are the millions of open files expected behavior, or the result of badly coded runaway programs? And if the millions of files are expected, is the price of the design worth the administrative costs (including program maintenance/debugging, slow backup speeds due to millions of small files, etc.)?
Bill Hassell, sysadmin
03-14-2006 04:48 AM
Re: very large nfile setting
The load is 3 PostgreSQL database environments/clusters, each with ~180 server processes, each set of 180 processes serving ~2000 relations represented by ~2000 files.
Certainly OLTP, with a max user count of 900 across all three environments, 300 for each.
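For a rough worst-case bound on this load, suppose every one of the ~540 server processes eventually holds a descriptor for each of its cluster's ~2000 relation files (pessimistic, since PostgreSQL backends cap cached descriptors per process):

```shell
# Worst case: 3 clusters x ~180 backends x ~2000 relation files each.
echo $((3 * 180 * 2000))
```

That is on the order of 1 million handles, which would explain why a 2M nfile was under consideration; actual concurrent usage is typically far lower.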
Comments?
Thanks,
Tim
03-14-2006 04:53 AM
Re: very large nfile setting
The reason we set kernel parameters governing how many processes can run, the number of file handles, file locks, etc. is to lessen the potential for a system to go wild - or, if it ever does go wild, to increase the chance that an admin can still get into the system and look around.
I suggest you use "lsof" to measure and study what your normal nfile usage is, and size the setting from that.
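A minimal sampling sketch for the lsof approach (assumes lsof is installed; note lsof prints roughly one line per open descriptor plus a header, and shared descriptors appear once per process, so the count is an approximation, not the exact file-table occupancy):

```shell
# Sample the system-wide open-file count to help size nfile.
# COUNT_CMD is overridable so the sketch can be tested without lsof.
COUNT_CMD=${COUNT_CMD:-"lsof 2>/dev/null | tail -n +2 | wc -l"}

sample_open_files() {
    eval "$COUNT_CMD"
}

# During peak load, log a sample every 60 seconds, e.g.:
#   while :; do printf '%s %s\n' "$(date)" "$(sample_open_files)"; sleep 60; done
sample_open_files
```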
03-14-2006 07:39 AM
Re: very large nfile setting
Monitoring your system with sar makes it possible to know how much of nfile is being used.
If you have HP-UX 11.23, try using kcusage to monitor your nfile parameter.
Remember that nfile is used in the formulas for several other kernel parameters.
Don't set nfile to a high value that you will never use.
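On HP-UX, `sar -v` reports the file table as a used/size pair in its file-sz column. A small awk sketch to turn such a pair into a utilization percentage (the sample pair below is illustrative only; real sar output varies by release):

```shell
# Convert a `sar -v` file-sz pair of the form "used/size" to a percentage.
file_table_pct() {
    # $1 is a used/size pair, e.g. "1234/65536"
    echo "$1" | awk -F/ '{ printf "%d\n", ($1 * 100) / $2 }'
}

file_table_pct 32768/65536   # half-full table -> 50
```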
Schimidt
03-14-2006 08:12 AM
Re: very large nfile setting
Other kernel params: maxfiles should probably be set to 2048 and maxfiles_lim to 4096 in anticipation of lots of open files per process. maxuprc may need to be bumped up to several thousand if all the processes are owned by one user.
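Assuming these are applied with kctune (HP-UX 11i v2 and later; older releases use kmtune instead), the suggested values might be set as below. This is a config sketch, guarded so it is a no-op on non-HP-UX hosts; the maxuprc value of 3000 is an assumption standing in for "several thousand":

```shell
# Apply only on a real HP-UX host with kctune; no-op elsewhere.
if command -v kctune >/dev/null 2>&1; then
    kctune maxfiles=2048       # default (soft) per-process open-file limit
    kctune maxfiles_lim=4096   # hard per-process open-file limit
    kctune maxuprc=3000        # per-user process limit (assumed value)
    # nfile itself: keep >= 2 * maxfiles_lim, sized from observed usage.
fi
```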
Bill Hassell, sysadmin