Operating System - HP-UX

Re: Kernel params too high?

 
SOLVED
Bob Davis
Occasional Advisor

Kernel params too high?

These params were suggested to resolve a file locking issue, but they look very high. What is the adverse effect of them being too high?

ninode 24576
nfile 24576
nproc 4096
nflocks 8192

Thanks for any advice...
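
For reference, here is how I am checking the current settings (kmtune on 11.x; on older releases the values are in /stand/system and visible with sysdef -- adjust paths for your release):

# HP-UX 11.x: query the four tunables in question
/usr/sbin/kmtune | egrep 'ninode|nfile|nproc|nflocks'
# Older releases: look at the system file and the running kernel's values
egrep 'ninode|nfile|nproc|nflocks' /stand/system
sysdef | egrep 'ninode|nfile|nproc|nflocks'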
MANOJ SRIVASTAVA
Honored Contributor
Solution

Re: Kernel params too high?

Hi Bob

These parameters are upper limits and do not add any overhead to the OS by themselves -- like the credit limit on a credit card. They are generally derived from the maximum number of users defined on the system, and they can be increased without any performance degradation.


Manoj Srivastava
Rusty Sapper
Frequent Advisor

Re: Kernel params too high?

Hi,
There is a small increase in the amount of memory the kernel uses as you increase these parameters, but other than that there isn't any real problem.
However, raising the kernel parameters may sometimes just be curing a symptom of a deeper problem within an application. If the problem keeps recurring, this may be the case.

HTH

Rusty
S.K. Chan
Honored Contributor

Re: Kernel params too high?

Let's go through them one by one:
ninode 24576
==> The rule of thumb I was given is to increase this parameter in increments of about 10% of the current value at a time, since its utilization is quite difficult to track. Setting it excessively high can cause network timeouts if you have an HA cluster setup. Other than that you should be fine.
nfile 24576
==> No problem here.
nproc 4096
==> Typically you would set this about 20% higher than the maximum number of processes you normally observe, to allow room for expansion. Increasing maxusers will automatically increase this parameter. No effect on the system.
nflocks 8192
==> Very minimal memory is needed here, so you're OK.
Sandip Ghosh
Honored Contributor

Re: Kernel params too high?

You may face some difficulties if you raise ninode to that maximum value, because setting it high will load your CPU. Actually, ninode is used only for HFS filesystems, not for VxFS, and you probably have only one HFS filesystem on your system.

Look at the output of sar -v 2 20, check the inode consumption on your system, and set the parameter accordingly.

I would suggest increasing the parameters as advised by the vendor, then watching for a week with Glance or sar to see what the practical requirement is during peak hours, and setting the parameters so that the observed peak is about 80% of the configured value.
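
For example, a minimal way to do that with sar (the paths and cron entry below are the usual HP-UX ones -- adjust for your system):

# Quick snapshot: 20 samples, 2 seconds apart; watch the inod-sz, proc-sz
# and file-sz columns and their "ov" (overflow) counters
sar -v 2 20
# For a week-long view, run the standard sa1 collector from cron, e.g.:
#   0,20,40 * * * * /usr/lbin/sa/sa1
# and then read back a given day's data:
sar -v -f /var/adm/sa/sa`date +%d`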

Sandip
Good Luck!!!
Ross Martin
Trusted Contributor

Re: Kernel params too high?

Bob,

ninode 24576
min = 14
max = memory limited
default = nproc + 48 + maxusers + (2 * npty) + (server_node * 18 * num_cnodes)
ninode defines the maximum number of open inodes that can be held in core -- it is the number of slots in the inode table. That table is used as a cache, so whatever value you set, the system will tend to max it out, because a cache runs more efficiently when 100% full. You may want to play with this value if the system complains about not having enough inodes. (I typically set ninode and nfile to about the same value for basic functionality.)


nfile 24576
min 14
max memory limited
default (16 * (nproc + 16 + maxusers)) / 10 + 32 + (2 * npty)
nfile defines the max number of open files at any one time -- be generous with this number, as the cost is low. Your current value may be a bit high, but the system will error if there are not enough (usually with file table overflow messages).


nproc 4096
min 10
max memory limited
default 20+(8*maxusers)+ngcsp
specifies the max total number of processes that can exist at the same time (too low a value will produce "proc: table is full" or "no more processes" errors).
Make sure maxuprc <= (nproc-4).

nflocks 8192
min 2
max memory limited
default 200

gives the possible number of file/record locks in the system. Note that one file may have several locks and databases may need an exceptionally large number of locks.

At HP we don't recommend kernel parameter values for performance, but for basic functionality, to get rid of any messages caused by application resource needs. If the system is complaining about file locks, I would adjust only nflocks first to remove the error. If another error pops up, I would address that next.
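
As a rough sketch, this is the command-line equivalent of what SAM's kernel configuration screens do on 11.x, using the value suggested above:

# Check the current setting
/usr/sbin/kmtune -q nflocks
# Put the new value in /stand/system, then rebuild and stage the kernel
/usr/sbin/kmtune -s nflocks=8192
/usr/sbin/mk_kernel -s /stand/system
/usr/sbin/kmupdate
# The new value takes effect after the next reboot
shutdown -ry 0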

Your application vendor or developer should be the expert on what to set these kernel values to for optimum performance within an HP environment.

Hope that helps,

Ross Martin
HP Response Center
Bill Hassell
Honored Contributor

Re: Kernel params too high?

Just a clarification: ninode at 24k is way, way too high, and it became that large because of a very obsolete formula. ninode is a cache of currently and recently opened files, but ONLY for HFS filesystems. Since, by default, /stand is the only HFS filesystem on all systems running 10.20 and higher, the cache needs to be only a few hundred entries, perhaps a maximum of 1000.

So 24,000 is a big waste of kernel RAM. These parameters (nfile, nproc, ninode, etc.) are not fences or limits like the maxdsiz or maxuprc values; they control the table size for a specific task in the kernel. A ninode of 24,000 (versus 1,000) will waste a lot of kilobytes in the kernel area. Change ninode to a fixed value of around 1000.
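
One quick sanity check before shrinking it -- confirm that /stand really is the only HFS filesystem on the box (it almost always is on 10.20 and later):

# List mounted HFS filesystems; expect to see only /stand
bdf -t hfs
# And check what is configured to mount as HFS
awk '$3 == "hfs"' /etc/fstab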


Bill Hassell, sysadmin
Todd McDaniel_1
Honored Contributor

Re: Kernel params too high?

I know this is a very old message, I am off topic, and these parameters are probably different now, but my question goes to what Bill said about memory and ninode...

One of my boxes, an N-class, has 32 GB of memory, and ninode is set at 18468.

My question is this: what exactly is your recommendation of 1000 based on? I'm just curious about your reasoning. I know each ninode you allow takes memory. What is the detriment of having mine set the way it is?


In addition, my Superdome has 71 GB of memory and ninode is set at 37k, with 48 CPUs at 750 MHz. This is a rather large system, so does the ninode parameter really matter in relation to memory when I have this much memory?

I know I am probably asking you to define the universe, but can you offer any insight?

Thanks, Todd.
Unix, the other white meat.
Bill Hassell
Honored Contributor

Re: Kernel params too high?

ninode is a table of currently opened and recently opened HFS files. Indirectly, its size influences the DNLC and VxFS inode caches too (Dave Olker's book on NFS for HP-UX has a lot more detail). By default, the value of ninode comes from a formula based on very old assumptions left over from the days before VxFS filesystems, and it will be very large if you adjust maxusers (a non-kernel parameter that appears in several formulas in SAM).

Since /stand is the only HFS filesystem and has fewer than 100 files in it, ninode at 1000 is just fine. The reason it seems to fill up (in sar -v or Glance) is that old file inodes, perhaps from weeks ago, are still in the cache, and the kernel has no metric to report the number of in-use versus reusable entries. So while there may be only 1 or 2 files actually open (and those must have an inode entry), there will likely be a massive number of old inodes waiting to be reused. This table can consume a fair amount of kernel memory (megabytes) when set very large (tens of thousands of entries).
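
To put a very rough number on that (the per-entry size varies by release, and the ~400 bytes used here is only an assumption for illustration): at something like 400 bytes per in-core inode entry,

ninode = 37,000  ->  37,000 x ~400 bytes  =  roughly 14 MB of kernel memory
ninode =  1,000  ->   1,000 x ~400 bytes  =  roughly 0.4 MB

and that table space is tied up whether or not the cached entries are ever reused.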

The purpose of the inode cache is to bypass a directory search for a file that is already open or has been recently opened. Rather than search for the file, the inode cache is used to provide the location of the file from memory and bypass any disk activity. This inode information is in turn passed to the process that needs the file.


Bill Hassell, sysadmin