how to monitor maxfiles on a server over a period of time.
07-18-2003 05:52 AM
I want to monitor the kernel parameter maxfiles over the next 7 days, to see whether it exceeds its set limit. We have had an error in the past saying "too many open files" (errno 24, EMFILE, which means too many open files per process). maxfiles is set to 2048 right now.
Is there a way I can do that, possibly with a shell script?
07-18-2003 05:55 AM
Re: how to monitor maxfiles on a server over a period of time.
07-18-2003 05:56 AM
Re: how to monitor maxfiles on a server over a period of time.
Pete
07-18-2003 06:01 AM
Re: how to monitor maxfiles on a server over a period of time.
Glance is best.
sar -v can also help (see man sar):
-v    Report status of text, process, inode, and file tables.
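For example, to log those table figures hourly for a week (a sketch; the interval, sample count, and log path are arbitrary choices):
$ nohup sar -v 3600 168 > /var/tmp/sar_v.log 2>&1 &   # 168 hourly samples = 7 days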
Steve Steel
07-18-2003 07:15 AM
Re: how to monitor maxfiles on a server over a period of time.
The System Table report in Glance does not report maxfiles, though it does report nfiles.
sar -v also does not report maxfiles.
Any other possibilities?
07-18-2003 07:27 AM
Re: how to monitor maxfiles on a server over a period of time.
will show you the present limit and the tuned limit. If they are close, you can increase the nfiles parameter.
07-18-2003 07:33 AM
Re: how to monitor maxfiles on a server over a period of time.
sar -v reports nfiles, but I want maxfiles to be reported.
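(For reference, the configured limits themselves can be queried; a sketch, assuming the HP-UX 11.x kmtune tool. Watching actual per-process usage needs the approaches below.)
$ /usr/sbin/kmtune -q maxfiles
$ /usr/sbin/kmtune -q maxfiles_lim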
07-19-2003 11:19 AM
Re: how to monitor maxfiles on a server over a period of time.
What you are asking for is a way to identify how many files each process opens, but this isn't necessary. The program that crashes with errno 24 is the culprit. The first question is:
Why is this program trying to open 2,049 files? (It seems like a programming error or poor design to me.)
Now, if this program can justifiably be allowed to open 3,000 files at the same time, then you need to change two kernel parameters:
maxfiles = 100
maxfiles_lim = 3000
Reducing maxfiles to 100 sets a safety limit for bad scripts and programs, which should never open more than 100 files at the same time. The second limit (maxfiles_lim) is the hard limit for any program. maxfiles is a default value, but it can be programmatically increased with the setrlimit system call, or increased with the ulimit command (POSIX and csh shells only, not ksh). So for the truly strange program that needs 3,000 files open at the same time, change the current environment with ulimit and then run the program, as in:
$ ulimit -Sn 3000
$ /opt/application/bin/open_lots_of_files
I recommend keeping maxfiles at the default and not letting every user and program on the system mess around with massive numbers of files. The reason is that if all the file descriptors in the system are used, NO ONE can log in, including root! This has been a denial-of-service attack on Unix in the past. The -S option makes this a soft limit, so for the current shell the value may be lowered and raised as needed. Without -S, the value becomes the permanent upper limit for this shell. Note that if 20 copies of this program are run at the same time, you'll need nfile set to more than 60,000! And you'll also see a huge increase in system overhead while thousands of files are located or created in each process.
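To illustrate the soft/hard distinction (a sketch; the values are examples only):
$ ulimit -Hn          # show the hard limit (maxfiles_lim); a non-root shell cannot raise it
$ ulimit -Sn          # show the soft limit, which starts at maxfiles
$ ulimit -Sn 3000     # raise the soft limit for this shell, up to the hard limit
$ ulimit -Sn 100      # a soft limit can be lowered and later raised again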
Bill Hassell, sysadmin
07-21-2003 01:07 AM
Re: how to monitor maxfiles on a server over a period of time.
By default the attached program displays the top 20 processes every 20 seconds. You can change the number of processes reported and the interval with command-line arguments.
To compile:
$ cc -o filecounts filecounts.c
And to report the top 2 processes every 10 seconds (the output columns are the number of open files, the process ID, and the command):
$ ./filecounts 2 10
================= Mon Jul 21 10:02:54 2003 =================
105 23717 /opt/bea/jdk131/jre/bin/../bin/PA_RISC2.0/native_threads/java -
47 1631 pmd -SOV_EVENT;t
================= Mon Jul 21 10:03:04 2003 =================
105 23717 /opt/bea/jdk131/jre/bin/../bin/PA_RISC2.0/native_threads/java -
47 1631 pmd -SOV_EVENT;t
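(The filecounts.c source itself was attached to the post and is not reproduced here. As a rough stand-in, a shell loop over lsof output gives a similar per-process count, assuming lsof is installed; TOP and SLEEP are hypothetical names:)
TOP=${1:-20}; SLEEP=${2:-20}
while :
do
    echo "================= $(date) ================="
    # lsof prints one line per open file; field 1 is the command, field 2 the PID
    lsof 2>/dev/null | awk 'NR > 1 { n[$2]++; cmd[$2] = $1 }
        END { for (p in n) print n[p], p, cmd[p] }' |
        sort -rn | head -n "$TOP"
    sleep "$SLEEP"
done
Note that lsof counts every open file, including the current directory and text segments, so the figures run a little higher than raw descriptor counts.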
Regards,
Steve
07-21-2003 11:58 AM
Solution
Modify it to watch the offending process by name, and run as follows:
glance -j 60 -syntax
- This script will update every 60 seconds, and will not end until you stop it.
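To let the adviser run unattended for the week, something like this might work (a sketch; filecount.syntax and the log path are hypothetical names, and -adviser_only suppresses the interactive display):
$ nohup glance -j 60 -adviser_only -syntax filecount.syntax > /var/tmp/filecount.log 2>&1 &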
I've run into the same situation, with MeasureWare's ttd process needing more than our configured maxfiles value, and am looking at a fix using Bill's ulimit solution above (as I don't want to give every process the ability to exceed maxfiles).
06-21-2004 04:53 AM
Re: how to monitor maxfiles on a server over a period of time.
cc -D_PSTAT64 -o countfiles countfiles.c
If you don't use that flag, it will compile without complaint, but the resulting binary produces only timestamps. With the above compile flag it works as Steve stated.
P.S. Thanks to Steve for the source.