
maxfiles_lim > 2048

 
vittal_1
New Member

maxfiles_lim > 2048

Hi all,

I would like to understand the meaning of setting a high value (> 2048) for maxfiles_lim. From the definition of maxfiles_lim, I understood that it is the absolute hard limit for a process, and that a process can be allowed a maximum of only 2048 open files, so how would setting a value > 2048 help?
I would also like to understand the impact of the following configuration:
maxfiles_lim = 2048
maxfiles = 10240
Will this cause a process to stop with a "too many open files" error, because maxfiles is set higher than maxfiles_lim?

Thanks in advance
Regards
Vittal
3 REPLIES
RAC_1
Honored Contributor

Re: maxfiles_lim > 2048

The lower value wins.
The idea here is to set the hard limit (maxfiles_lim) greater than the soft limit (maxfiles).
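To see which limits a process actually runs under, a minimal C sketch like the one below should work (this is an illustration, not from the original post; it assumes a POSIX system where RLIMIT_NOFILE corresponds to maxfiles / maxfiles_lim, as it does on HP-UX):

/* Print the soft and hard file-descriptor limits of the current process. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    /* rlim_cur is the soft limit (maxfiles); rlim_max is the hard
     * limit (maxfiles_lim).  The soft limit is the one enforced. */
    printf("soft limit (maxfiles):     %ld\n", (long)rl.rlim_cur);
    printf("hard limit (maxfiles_lim): %ld\n", (long)rl.rlim_max);
    return 0;
}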
There is no substitute to HARDWORK
Kent Ostby
Honored Contributor

Re: maxfiles_lim > 2048

vittal --

The system checks both limits, so when either limit is exceeded, it will generate an error message.

Best regards,

Oz
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
Bill Hassell
Honored Contributor

Re: maxfiles_lim > 2048

These are bad-programming limits; that is, when a badly behaved program goes crazy, these limits cause it to abort when it tries to open more files than allowed. It makes no sense at all to raise maxfiles higher than maxfiles_lim. maxfiles is the typical value for the majority of programs; most programs open fewer than a dozen or so files at the same time. maxfiles is only a starting point: it can be overridden with ulimit -n (only available in the HP POSIX shell), or a program can raise the limit for itself by calling setrlimit(). The maximum number of open files in a single process cannot be increased beyond maxfiles_lim.
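A well-written application raises its own soft limit rather than demanding a huge system-wide maxfiles. A minimal sketch of that setrlimit() call (an illustration, not Bill's code; it assumes RLIMIT_NOFILE, as on HP-UX) might look like this:

/* Raise this process's soft file-descriptor limit (maxfiles) up to the
 * hard limit (maxfiles_lim). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    rl.rlim_cur = rl.rlim_max;   /* the soft limit may be raised only
                                    as far as the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("now allowed up to %ld open files\n", (long)rl.rlim_cur);
    return 0;
}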

There is nothing magic about 2048 except that it has been the default limit for 20 years. Several application vendors make a blind recommendation to set maxfiles = maxfiles_lim = 2048, thus defeating the whole purpose of the dual limit. If a special program needs to open 3500 files at the same time, I would first question the sanity of such a design, but then I would increase maxfiles_lim only slightly beyond that, perhaps to 3600, and leave maxfiles at 60 or so.

If the application fails, then the programmer lacks some very important training about Unix programming techniques--I'd find another vendor.

Now as to your specific error message:

"too many openfiles"

This can be due to maxfiles (where the programmer did not call setrlimit() to raise the value), or it could be that nfile (the limit on open files across the entire system) has been exceeded. Use sar -v 1 to monitor the open file count. If you have 10 programs that each open 2,000 files, nfile must be bigger than 20,000. nfile might have to be set to 50,000, maybe 250,000; it all depends on your application requirements.
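The two cases report through different errno values, which a quick hypothetical test program (my sketch, not part of the original reply) can distinguish: EMFILE means the per-process limit (maxfiles) was hit, while ENFILE means the system-wide file table (nfile) is full.

/* Open files until the call fails, then report which limit was exhausted. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int count = 0;

    /* keep opening the same file; each open() consumes a descriptor */
    while (open("/dev/null", O_RDONLY) >= 0)
        count++;

    if (errno == EMFILE)
        printf("per-process limit hit after %d opens: %s\n", count, strerror(errno));
    else if (errno == ENFILE)
        printf("system file table full after %d opens: %s\n", count, strerror(errno));
    else
        printf("open failed after %d opens: %s\n", count, strerror(errno));

    return 0;
}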

If nfile is large enough, then use ulimit -n to raise the limit in the program's startup environment (a crutch for the programmer), but leave maxfiles at a low value.


Bill Hassell, sysadmin