Operating System - HP-UX

increasing maxfiles, what should be considered ?

 
SOLVED
De Gucht Helga
Advisor

increasing maxfiles, what should be considered ?

When increasing maxfiles, what should be taken into consideration? For instance, are there any other parameters (apart from maxfiles_lim and nfile) that should be adjusted along with it?
How is it possible that a process (an export, which opens 'virtual' pages) hangs when maxfiles is set to 60 (the default), works fine when maxfiles is set to 200, and hangs again when maxfiles is increased to 1024? What should I look at? Please advise.
James R. Ferguson
Acclaimed Contributor

Re: increasing maxfiles, what should be considered ?

Hi:

The 'maxfiles' kernel parameter is the maximum number of files a process can open. It is referred to as a "soft limit". A process can override the kernel value, but only up to the "hard limit" of 'maxfiles_lim'. 'maxfiles_lim' has a maximum value equal to 'nfile' (the number of files *system-wide* that can be open simultaneously). Setting 'maxfiles_lim' higher than 'ninode' is also meaningless, since 'ninode' governs the maximum number of open inodes that can be in memory at one time.

Thus, I would make sure that the relationships among the values of all four of these parameters are appropriate. If they are not, that would explain your observation.
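As a quick illustration of the ordering described above, here is a minimal POSIX-shell sanity check using hard-coded illustrative values; on a live system you would pull the real values from kmtune rather than hard-coding them.

```shell
#!/bin/sh
# Illustrative values only -- substitute the output of kmtune on a real system.
maxfiles=1024
maxfiles_lim=2048
nfile=2853
ninode=1620

ok=yes
[ "$maxfiles" -le "$maxfiles_lim" ] || { echo "maxfiles exceeds maxfiles_lim"; ok=no; }
[ "$maxfiles_lim" -le "$nfile" ] || { echo "maxfiles_lim exceeds nfile"; ok=no; }
# maxfiles_lim above ninode is meaningless: the excess can never be used.
[ "$maxfiles_lim" -le "$ninode" ] || { echo "maxfiles_lim exceeds ninode (excess is wasted)"; ok=no; }
echo "check: $ok"
```

With these sample values the script flags maxfiles_lim (2048) as exceeding ninode (1620).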

Regards!

...JRF...


Rita C Workman
Honored Contributor
Solution

Re: increasing maxfiles, what should be considered ?

Well, the one parm you missed is nflocks.
Remember that when you increase how many files a process can open, you are using up a small amount of additional memory. Also, some apps require simultaneous access to the same file, and at certain points the file is locked to avoid data corruption.

So when you increase how many files can be open, but don't increase how many can be locked, the system can hang waiting for permission for a process to lock a file.

I don't know that this is what is happening in your case...but your question was "what should be considered?"
...and I thought you might want to consider this point.

I am also noting two links I highly recommend you read: the first for maxfiles, and the second explains (far better than I could) what I mentioned.

Maxfiles:
http://docs.hp.com//hpux/onlinedocs/os/KCparam.Maxfiles.html

READ THIS:
http://docs.hp.com//hpux/onlinedocs/os/KCparamTut.OpenLockedFiles.html

Rgrds,
Rit
harry d brown jr
Honored Contributor

Re: increasing maxfiles, what should be considered ?

What are your other kernel parameters set to, like nfile and ninode?

Also, what kind of process is serving up your "virtual pages" via NFS?

live free or die
harry
De Gucht Helga
Advisor

Re: increasing maxfiles, what should be considered ?

from the kmtune :

maxfiles = 1024
maxfiles_lim = 2048
nfile (calculated) = 2853
ninode (calculated) = 1620
nflocks = 200

conclusion for now: increase nflocks (to 4096)
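A sketch of how such a change might be staged with kmtune on HP-UX 11.x follows; the exact command names and options vary by release, so verify them against your system's documentation before running anything (a kernel rebuild and reboot are required for a static tunable like nflocks).

```shell
# Sketch only -- run as root on HP-UX 11.x; check syntax for your release.
kmtune -s nflocks=4096      # stage the new value for the next kernel build
mk_kernel                   # build a new kernel with the staged changes
kmupdate                    # arrange for the new kernel to be used at boot
shutdown -r now             # reboot to activate it
```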
James R. Ferguson
Acclaimed Contributor

Re: increasing maxfiles, what should be considered ?

Hi:

I'd suggest running 'glance' and looking at the system table metrics (the 't' screen). You can actually see the dynamic values and high-water marks of 'nfile' and 'nflocks'. 'sar -v' is also useful since it reports the current and maximum size of the inode table.

Regards!

...JRF...
MANOJ SRIVASTAVA
Honored Contributor

Re: increasing maxfiles, what should be considered ?

Hi De

That is essentially what is to be done. Run 'sar -v' for some period, at least covering the times when the system is busiest; fire it from cron and save the output to a log file. It will give you a feel for the saturation of these three parameters.
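The cron-driven logging suggested above might look like the following crontab fragment; the sampling interval, schedule, and log path are assumptions to be adapted to your environment.

```shell
# Hypothetical crontab entry: every 15 minutes on weekdays during business
# hours, append a 12-sample, 5-second 'sar -v' run to a log file.
0,15,30,45 8-18 * * 1-5  /usr/sbin/sar -v 5 12 >> /var/adm/sarv.log 2>&1
```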


Manoj Srivastava
De Gucht Helga
Advisor

Re: increasing maxfiles, what should be considered ?

We have increased nflocks to 1024. HP has said that any more would have a
serious impact on performance.

We still have the error, even on the simple export, i.e. shared workflow
without the 'Add Related' button being used, i.e. 3 forms and 8 Active Links.

Given that the problem still persists after all of the kernel tuning and the
reduction of list and fast server threads, I am still convinced that there is
something more fundamentally wrong here.

Before MPSO we had no problem exporting shared workflow. With MPSO we are
having a problem exporting shared workflow.

The process having the problem is the arserverd process, which represents the main part of ARS (Remedy). It handles all interactions between clients and the database, making all access to the system dependent on this process.

MPSO is an option to ARS, the multithreaded server. It is designed to distribute the load so that all server functions are performed by multiple threads, each handling specific functions.
All threads within the process share the network resources.

The arserverd process is now hanging (it shows as running but is in fact hung) upon starting the export command. As far as I have understood, the export opens 'virtual' pages while it runs.

This problem does not occur without the MPSO option.

I am looking to find out how this multithread option can be causing this trouble.

Rita C Workman
Honored Contributor

Re: increasing maxfiles, what should be considered ?

Did some more checking on your issue and found some 'fascinating' reading.

Basically, your issue involves more than just the parms we were discussing; it may also be affected by the shared memory segment size and how much memory you have. The purpose of the shared cache is to have all servers share the memory used to cache information from the database, thus enabling the users to share the one cache. But the article mentions that when an administrative change is made to the structure of the database, a new cache must be created and updated, and a new copy of the cache re-loaded. So basically, if a few admin changes were done, you could have a few copies of the cache hanging out there waiting for the latest copy to be created and re-loaded. Whew...!!
So the question becomes: should this scenario happen (and maybe that is what's happening), do you have enough memory (and consider that memory segment size) to account for all this?

Like I said...reading this url was enlightening...I think it may help you.

http://www.remedy.com/customers/rxpress/articles/mpso_shared_cache.htm

Rgrds,
Rit
De Gucht Helga
Advisor

Re: increasing maxfiles, what should be considered ?