Operating System - HP-UX

kernel params set too large

 
Jakes Louw_1
Frequent Advisor

kernel params set too large

OK, clever guys....
I need details on the negative impact of tuning certain kernel params too LARGE, especially the impact on syscalls that need to scan all these frigging kernel tables. The params I'm specifically looking at reducing are:
NPROC
NFILE
SHMMNI
MSGMNI
SEMMNI
NFLOCKS
NPTY/NSTRPTY
NINODE
MSGMAP
SEMMAP

The current settings were "thumb-sucked" by a local HP SE who might have been a little intimidated by the SuperDome. My theory is that certain syscalls like READ, WRITE, OPEN, etc., are spending cumulative time raking through these tables. At several gazillion I/Os per day, this all adds up....
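For anyone who wants to compare, here is a rough sketch for dumping the current values (kmtune is the tool on 11.x/11i v1; on 11i v2 and later substitute kctune):

#!/usr/bin/sh
# Sketch: print the current value of each tunable I'm looking at reducing.
# kmtune -q queries a single parameter; on newer releases use "kctune <name>" instead.
for p in nproc nfile shmmni msgmni semmni nflocks npty nstrpty ninode msgmap semmap
do
    /usr/sbin/kmtune -q $p
done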
7 REPLIES
James R. Ferguson
Acclaimed Contributor

Re: kernel params set too large

Hi:

I'd have a look at the Tunable Kernel Parameters for 11i documentation. It gives some of the best guidance I've seen on appropriate values. Specifically, see the Tunable Reference Manpages section:

http://docs.hp.com/hpux/onlinedocs/TKP-90202/TKP-90202.html

Regards!

...JRF...
Jakes Louw_1
Frequent Advisor

Re: kernel params set too large

Thanks, James. You deserve a coupla points for the link!
Dietmar Konermann
Honored Contributor

Re: kernel params set too large

Most of the tables you are referring to are either directly addressed or accessed using some kind of hashing. So don't expect too much from your tuning... OK, you save memory.

Two prominent exceptions are ninode (unless you restrict the DNLC using the ncsize tunable) and dbc_max_pct (important, although not listed by you).
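To see what you currently have configured for those, something like this should do (again kmtune on 11i v1, kctune on 11i v2 and later):

# Sketch: check the HFS inode cache, DNLC and dynamic buffer cache settings.
/usr/sbin/kmtune -q ninode
/usr/sbin/kmtune -q ncsize
/usr/sbin/kmtune -q dbc_max_pct
/usr/sbin/kmtune -q dbc_min_pct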

Best regards...
Dietmar.
"Logic is the beginning of wisdom; not the end." -- Spock (Star Trek VI: The Undiscovered Country)
Shannon Petry
Honored Contributor

Re: kernel params set too large

Your theory that these items generate I/O is false. The kernel tables sit in memory and are not used unless they are referenced. While some kernel params can have a negative impact, most do nothing but make the kernel a little larger.

Think about your logic for a minute or two. How can increasing the ninode param create open(), read(), and write() syscalls? It just means that when a file is created, ninode is looked at to ensure the max is not reached.

If the ninode param is not reached, I/O will then occur to create the inode (and other activity). This occurs whether you have a large or a small ninode parameter set.

NPTY is another easy example. This controls how many open connections you could possibly have. As long as the limit is not reached, connections can be established. The I/O for each session will occur regardless of what your NPTY param is set to.

Remember that a lot (or most) of the kernel parameters are there as safety precautions. Example: ninode controls how many files you can have in a directory, hopefully catching a bug where code loops and blows out file systems. Do you really think HP-UX cares if you have a million files or one? Not really.
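If you want to see how close you actually come to those limits, sar keeps table high-water marks (a sketch; the exact column set varies a little between releases):

# Sample the process, inode and file table usage 12 times, 5 seconds apart.
# The proc-sz, inod-sz and file-sz columns show entries-in-use/table-size,
# and the ov columns show overflows.
sar -v 5 12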


Regards,
Shannon
Microsoft. When do you want a virus today?
Dietmar Konermann
Honored Contributor

Re: kernel params set too large

Shannon,

reading the original question, I don't think we are talking about additional I/Os. I think Jakes is talking about additional kernel overhead while processing syscalls.

BTW, ninode tunes the size of the HFS(!) inode cache (and indirectly the size of the DNLC, the directory name lookup cache). It does not limit the number of files in a directory or similar.

Best regards...
Dietmar.
"Logic is the beginning of wisdom; not the end." -- Spock (Star Trek VI: The Undiscovered Country)
Bill Hassell
Honored Contributor

Re: kernel params set too large

As mentioned, most of these parameters are table sizes and the tables are not searched serially. In fact, setting nfile to 5 million does not affect the system's throughput! All it does is reserve space for 5 million entries. And if 4 million files are opened at the same time, the file table still performs its job at full speed. Only tools like lsof may run slowly, since they have to look for specific files associated with various processes.

And once a file is open, the file control block is kept local to the application so that read/write tasks do not need the kernel structures.

You'll need to look further for reasons that the system seems to be running slowly. Start with the load: is the system overhead in the 5-15% range or is it in the 50-75% range? High system overhead is a sign that applications are asking the opsystem to do things inefficiently.

Is the compute time excessive (80-100%) on just one processor? In that case, a simple (but bad) shell script can saturate one processor. Or an application can do the same thing because it is not threaded (or configured for multi-processors).

Improving performance requires an understanding of what the applications are doing and whether you have any control over these tasks. Rarely can you see a dramatic improvement in performance by tweaking the kernel. It's better to look at the apps and the calls that they make to the opsystem first.
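A quick sketch for getting that user/system breakdown:

# Overall %usr / %sys / %wio / %idle, sampled every 5 seconds for a minute:
sar -u 5 12
# Roughly the same picture from vmstat (us/sy/id columns at the right):
vmstat 5 12
# top will also show a per-processor breakdown, which helps spot one saturated CPU.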


Bill Hassell, sysadmin
Shannon Petry
Honored Contributor

Re: kernel params set too large

Dietmar,

I probably did not do a good job explaining, but in essence most of the kernel params listed are hard limits which have no impact on performance or kernel size.

The I/O does not come from checking the limits; it comes from the processes that get past the limit check and succeed.

Does that make better sense?

Shannon
Microsoft. When do you want a virus today?