
sar report on inode.

 
brian_31
Super Advisor

sar report on inode.

Hi All:

I am doing a sar -v on my production box and it shows the following
proc-sz     ov   inod-sz     ov   file-sz      ov
1083/4420   0    9114/9114   0    8127/20010   0

After some time the inod-sz got reduced to 5000. Should this be of concern? The machine has 20 GB of memory with dbc_max_pct set to 20 and dbc_min_pct set to 5. We are going to change them to 3 and 2 respectively. Any suggestions are welcome.
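For reference, the inod-sz column reads as used/total; a small sketch (not HP-UX-specific, using the sample line from the output above) turns it into a utilization percentage:

```shell
# Compute inode table utilization from a captured "sar -v" data line.
line='1083/4420 0 9114/9114 0 8127/20010 0'
util=$(echo "$line" | awk '{ split($3, a, "/"); printf "%d", a[1] * 100 / a[2] }')
echo "inod-sz utilization: ${util}%"   # 9114/9114 -> 100%
```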

Thanks
Brian.
17 REPLIES
MANOJ SRIVASTAVA
Honored Contributor

Re: sar report on inode.

Hi Brian


Yes, definitely. That would saturate ninode, and if there are no free inodes no one will be able to log in. So increase it by at least 50% and monitor it so that you can fine-tune it. More at:

http://docs.hp.com/hpux/onlinedocs/os/KCparams.OverviewAll.html
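The 50% bump suggested above could be worked out like this (a sketch; 9114 is the inod-sz total from the question, and the kmtune step in the comment is only illustrative):

```shell
# Propose a ninode value 50% above the current table size.
current=9114                                # inod-sz total from sar -v
new=$(awk -v c="$current" 'BEGIN { printf "%d", c * 1.5 }')
echo "proposed ninode: $new"
# On HP-UX 11.x you would then apply it with something like:
#   kmtune -s ninode=$new   (followed by a kernel rebuild and reboot)
```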


Manoj Srivastava
Uday_S_Ankolekar
Honored Contributor

Re: sar report on inode.

Hello

It depends. I would change dbc_max_pct to 10 and monitor the performance. Regarding the sar output, the size is normal; it only needs attention when it overflows or rises above 75%.
Also check the maxusers, nfile and nproc parameters.


-USA..

Good Luck..
brian_31
Super Advisor

Re: sar report on inode.

Hi:

inod-sz at one point reached 100%, although after some time it came back down to 60%. I am confused here.

Brian.
A. Clay Stephenson
Acclaimed Contributor

Re: sar report on inode.

Hi Brian:

I suspect that your ninode is WAY TOO BIG!! I'm really not insane. I am willing to bet that you could set it between 700 and 1000 and still be a happy camper. You see, this value refers to cached open (or recently opened) HFS files. If the only HFS filesystem you have is /stand and the others are VxFS, you can radically reduce ninode and save memory.

My other comment is that if you are only going to allow dynamic buffer cache to float between 2 & 3%, why not use bufpages to hard-set the buffer cache (leave nbuf at 0) and avoid that overhead as well?


If it ain't broke, I can fix that.
Uday_S_Ankolekar
Honored Contributor

Re: sar report on inode.

Then you need to increase nfile. In order to increase nfile I would increase the maxusers parameter, since the nfile formula depends on maxusers.


-USA..
Good Luck..
S.K. Chan
Honored Contributor

Re: sar report on inode.

The doc talks about "sar -v" output and it also talks about how inode table works on 10.x vs 11.x. Hope it helps a little ..

http://us-support3.external.hp.com/emse/bin/doc.pl/sid=03f49a091297db87f6/distrib_redir=2+1022705323|*?rn=100&searchcategory=ALL&todo=search&x=13&searchtype=SEARCH_TECH_DOCS&y=13&searchcriteria=allwords&searchtext=UCMDKBRC00009024&presort=rank
MANOJ SRIVASTAVA
Honored Contributor

Re: sar report on inode.

Brian

This is a dynamic value; it will increase and decrease depending on the current utilization
of the free inodes. So it is changing. A better way is to run sar -v 2 10 at different times, store the output in a log file, and then change the parameter.
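The periodic sampling described above could be automated with cron; a hypothetical crontab entry (the log path and sar location are illustrative, not from the thread):

```shell
# Take 10 two-second sar -v samples at the top of every hour and
# append them to a log for later comparison.
0 * * * * /usr/sbin/sar -v 2 10 >> /var/adm/sar_v.log 2>&1
```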

Manoj Srivastava
Sandip Ghosh
Honored Contributor

Re: sar report on inode.

Hi Brian,

The inode table size, which is shown by sar and set in the kernel, covers the inodes used for HFS filesystems. On your system I believe the only HFS filesystem is /stand, which doesn't need more than 5000 inodes. The setting on your server is too high, as described by Mr. Clay. Otherwise you can keep the present setting; it will not harm you. Inode allocation on VxFS filesystems changes dynamically.

Sandip
Good Luck!!!
brian_31
Super Advisor

Re: sar report on inode.

Hi:

Thanks for the response. It is true I have only /stand as HFS. Also, when I do a sar -b the rcache is 100 and the wcache is 40 (on average). Maybe that's because of the 20% dbc_max and 5% dbc_min. Now, with 20 GB of memory, can someone explain how the dbc would come into play, i.e. will the max value or the min value take effect first? Also, going by Clay's recommendation, what would be good values for nbuf and bufpages? Thanks once again for all your time.
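To put the percentages in absolute terms, here is a small sketch converting the current (20/5) and proposed (3/2) dbc percentages into megabytes on a 20 GB box (simple arithmetic, not measured values):

```shell
# Buffer cache size in MB for each dbc percentage on 20 GB of RAM.
mem_mb=20480   # 20 GB
for pct in 20 5 3 2; do
  mb=$(awk -v m="$mem_mb" -v p="$pct" 'BEGIN { printf "%d", m * p / 100 }')
  echo "dbc at ${pct}% of ${mem_mb} MB = ${mb} MB"
done
```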

Regards
Brian.
brian_31
Super Advisor

Re: sar report on inode.

Hi All:

Forgot to mention that we use an X emulator to launch the X/Motif applications (ten of them) through PCs. Environment: 11.0, 64-bit, on an N-Class.

Thanks
Brian
James R. Ferguson
Acclaimed Contributor

Re: sar report on inode.

Hi Brian:

How much buffer cache you want to allocate is highly dependent upon your environment. First, if you are running a database like Oracle which handles its own file buffers, a large file system buffer cache is imposing additional "double-buffering" and should be avoided.

I prefer to enable dynamic buffer caching by setting values for 'dbc_min_pct' and 'dbc_max_pct' and setting both 'nbuf' and 'bufpages' to zero. This allows the buffer cache to oscillate, based on memory pressure, between the 'dbc_min_pct' and 'dbc_max_pct' boundaries.

You might start with min and max values of 2% and 5% given the large amount of memory you have.
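Applied with kmtune, that starting point would look roughly like this (a sketch only; a kernel rebuild and reboot are required before the new values take effect):

```shell
# Enable dynamic buffer cache: nbuf and bufpages both zero,
# cache floating between 2% and 5% of RAM.
kmtune -s nbuf=0
kmtune -s bufpages=0
kmtune -s dbc_min_pct=2
kmtune -s dbc_max_pct=5
kmtune -q dbc_max_pct    # query the parameter to verify
```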

Use 'glance' to monitor your system. The I/O metrics (comparing physical vs. logical I/O) are a good guide to the benefit you are or are not seeing by tinkering with the buffer cache.

For a bit more information on the kernel parameters discussed, see:

http://docs.hp.com/hpux/onlinedocs/os/KCparams.OverviewAll.html

Regards!

...JRF...
Thierry Poels_1
Honored Contributor

Re: sar report on inode.

Hi,

check out Bill Hassell's comment on the ninode parameter:

http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x2767dfe5920fd5118fef0090279cd0f9,00.html


BTW: a dbc_max_pct of 20% with 20G of memory seems awfully big.

regards,
Thierry
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Bill Hassell
Honored Contributor

Re: sar report on inode.

Leave nbuf and bufpages at zero. If you read the SAM help-on-context pages for these parameters, you'll get a headache trying to figure out what it all means. By leaving them both zero, the system will use the Dynamic Buffer Cache algorithm. Note that the efficiency of the DBC has dramatically improved at 11.11 to the point where the DBC size can be reduced from max to min in just a second or two.

With sar -b, read cache hit rates over 90% are very good, but with a dbc_max of 20% (about 4000 MB of your 20 GB of RAM), there will be a lot of extra overhead in managing such a large cache (read: system overhead). It's better to keep the maximum cache size at about 200 to 700 MB (200 for slow machines with heavy write loads, 700 for fast machines with more reads than writes), i.e. dbc_max_pct = 2% to 3%.
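Working that 200 to 700 MB target back into percentages on this 20 GB machine (simple arithmetic, for illustration):

```shell
# What percentage of 20 GB does each target cache size represent?
mem_mb=20480   # 20 GB
for target_mb in 200 700; do
  pct=$(awk -v m="$mem_mb" -v t="$target_mb" 'BEGIN { printf "%.1f", t * 100 / m }')
  echo "${target_mb} MB is about ${pct}% of RAM"
done
```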


Bill Hassell, sysadmin
brian_31
Super Advisor

Re: sar report on inode.

Hi:

Okay, I am almost done. What I do not understand is how the max and min values come into play: does the system allocate the max value up front and then toggle between the max and min values? My plan is to leave bufpages and nbuf at zero and set the max at 3% and the min at 1%. Is that okay?

Thanks
Brian.
Thierry Poels_1
Honored Contributor

Re: sar report on inode.


dbc_min_pct and dbc_max_pct define the minimum and maximum buffer cache sizes. The system will gradually shrink the buffer cache when it runs short of memory, and will reassign memory back to the buffer cache if enough free memory is, and stays, available.

regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Thierry Poels_1
Honored Contributor

Re: sar report on inode.

1 & 3% for the dynamic buffer cache seem much more realistic with 20G of RAM.

regards,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Carlos Fernandez Riera
Honored Contributor

Re: sar report on inode.


The configuration of ninode depends on the OS version. While in 10.20 ninode should not be set to a high value due to deadlocks, in 11.00, if you are using HFS filesystems, it can be configured as high as 90000, for example.

I read this info in some performance docs on the HP web site.

Visit http://h21007.www2.hp.com/dspp/topic/topic_TopicDetailPage_IDX/0,1711,10313,00.html


and read under the section `technical papers`.


I apologize for typos; my keyboard is beginning to die (my fingers have been very mad at it for a long time).


About dbc_max_pct: if I had 20 GB I would set both to 1% (200 MB).

Carlos