
Maaz
Valued Contributor

/proc/sys/fs/file-max - is there a need to increase ?

OS: SLES 10 SP2
kernel: 2.6.16.60-0.21-smp

# cat /etc/security/limits.conf

* - nofile 40960

# ulimit -n
40960

# cat /proc/sys/fs/file-max
202800

But as per http://prefetch.net/blog/index.php/2009/07/31/increasing-the-number-of-available-file-descriptors-on-centos-and-fedora-linux-servers/


Increasing the maximum number of file descriptors is a two-part process. First, you need to make sure the kernel's file-max tunable is set sufficiently high. This value controls the number of files that can be open system-wide, and is controlled through the file-max proc setting:

$ cat /proc/sys/fs/file-max
366207

$ echo 512000 > /proc/sys/fs/file-max

$ cat /proc/sys/fs/file-max
512000



So do I also need to increase the value of '/proc/sys/fs/file-max' from its default (202800)?

Regards
Maaz
3 REPLIES
Matti_Kurkela
Honored Contributor
Solution

Re: /proc/sys/fs/file-max - is there a need to increase ?

Remember that the limits set by "ulimit" are per process (or per user, when applicable), but the /proc/sys/fs/file-max is the maximum total number of open files for all users.

If you are increasing the ulimit values because there is just one user (which may be an application, like "oracle" or "weblogic") which requires a lot of open files, you don't need to increase /proc/sys/fs/file-max now.

202800 / 40960 = about 4.95 so your system can currently handle 4 processes simultaneously opening the maximum number of files allowed to them. If a 5th process attempts to do the same, the system-wide limit will be reached before the per-process limit stops the process.

When the system-wide limit is reached, no process on the system can open any more files. This may prevent important system daemons like syslogd from working. It *certainly* prevents any new network logins until some files are closed. Unless the resource-hungry processes can be stopped, it may be very difficult for the sysadmin to access the system to fix the problem without intentionally crashing and rebooting it. In short, reaching the system-wide resource limit can be a big problem.
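
(If you want to see how close the system currently is to that limit, /proc/sys/fs/file-nr shows the number of allocated file handles, the number of free handles, and the maximum; the figures below are only an illustration, not taken from your system:)

# cat /proc/sys/fs/file-nr
3616    0       202800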

So, if there are only one or two users/processes that actually need the larger nofile limit, you're probably safe. But modern systems usually have multiple gigabytes of memory, and increasing the file-max value is "cheap". Increasing the file-max to 2x or 4x of its present value should consume a negligible amount of memory on a modern system. So you may wish to increase it anyway, just to be safe.

file-max > (max number of users * nofile)
=> you're guaranteed to be safe.
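For example (taking a purely illustrative case of 100 users, each allowed nofile = 40960), the formula would call for file-max above 100 * 40960 = 4096000.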

But in some systems, if you use the formula above, the required file-max can be infeasibly large. In that case, you should consider giving larger nofile limits only to those users that really need them.

MK
Steven E. Protter
Exalted Contributor

Re: /proc/sys/fs/file-max - is there a need to increase ?

Shalom,

Temporarily (until the next boot), just change it with an echo command.

To change it permanently, set the parameter in /etc/sysctl.conf.
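For example (the value 405600 below is only an illustration, roughly twice your current setting, not a recommendation), the temporary change would be:

# echo 405600 > /proc/sys/fs/file-max

and the permanent one a line in /etc/sysctl.conf, applied with "sysctl -p":

fs.file-max = 405600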

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Maaz
Valued Contributor

Re: /proc/sys/fs/file-max - is there a need to increase ?

Hi, thanks Matti Kurkela for the very helpful explanation.

>..../proc/sys/fs/file-max is the maximum
>total number of open files for all users.

>If you are increasing the ulimit values
>because there is just one user (which may be
>an application, like "oracle" or "weblogic")
>which requires a lot of open files, you don't
>need to increase /proc/sys/fs/file-max now.

OK, understood.

By the way, what's the recommended 'ulimit -n' value for Oracle 11g and 10g R2?

Regards
Maaz