
sar -v and ninode results

Frequent Advisor

sar -v and ninode results

Question about how 'sar -v' is reporting 'ninode'. Any concerns for numbers like 9284/9284 and 5184/5184? On five out of sixteen servers the maximum has been reached.

I ask because, of the 16 HP servers, the others display values like 788/9284, 3400/2100910, 3245/6188, etc., which implies that the number of inodes currently in use never reaches the maximum.

Our Sun servers, on the other hand, display values (also for vxfs file systems) like 2187/2187, 1998/1998, 2002/2002, and so on.

That is the kind of dynamic-value reporting I'm used to (ninode is dynamic in vxfs), and I'm trying to verify that the HP-UX versions of 'sar -v' are reporting a misleading value for ninode that can be ignored.

Comments? Concerns?

In the long run I'm trying to track down the reason for resource waits at the disk level, tcp disconnects and web server hangs.

Thanks in advance.

Yours, Mine and Yours
Brian Bergstrand
Honored Contributor

Re: sar -v and ninode results

First of all, sar -v on UX only reports on HFS filesystems. So if you have all VXFS (excepting /stand of course), then the numbers will be really low.

ninode on UX is just the # of entries in the kernel for HFS inode caching. So if it's maxed, that means that the max # of HFS inodes have been cached. This can happen very quickly on a server that has any HFS fs. For the servers that aren't maxed, I would guess they are all VXFS or the HFS fs's are not used much.

Bottom line is that there is nothing to worry about with those numbers.


BTW, vx_ninodes (on 11i +patch) is what controls the # of cache entries for VXFS fs's. But, sar does not report on this AFAIK.
T G Manikandan
Honored Contributor

Re: sar -v and ninode results

You should not worry about that.
The inode values shown in sar -v output are not hard limits like the file-sz and proc-sz values; for those, when the limit is reached a new file or process cannot be handled on the server.

The inode value is shown as, for example, 9284/9284.

The value to the left of the "/" is the number of open inodes in the inode cache table; the value on the right is the total number of inodes that can be open in the inode cache table. This is configured using the kernel parameter ninode.
It is recommended that this value be at the maximum so that there is good performance.
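To make the left/right split concrete, here is a minimal sketch (the 9284/9284 figure is taken from the question; the parsing is just a plain awk field split, nothing sar-specific):

```shell
# Split a sar -v style inod-sz value ("used/max") and report utilization.
inod_sz="9284/9284"

echo "$inod_sz" | awk -F/ '{
    printf "used=%d max=%d pct=%.0f%%\n", $1, $2, ($1 / $2) * 100
}'
# -> used=9284 max=9284 pct=100%
```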

Frequent Advisor

Re: sar -v and ninode results


You said, "...it's recommended that the open ninode value be at max for good performance...".

If ninode for vxfs is dynamic and not static, then won't good performance always be the case? I can only see poor performance as an issue if static numbers like 600/201455 are being reported; in that example only 3% of the ninode table is being used. But vxfs is not a fixed table. It's dynamic.

So good performance is always present. No?
Yours, Mine and Yours
Brian Bergstrand
Honored Contributor

Re: sar -v and ninode results

The vxfs cache is dynamic by default, but this is a very bad setting on big memory systems. It can lead to 50% or more of your memory being used for cache entries. This can cause major performance problems. vx_ninode should be manually set to about 90% of ninode.
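The 90% rule of thumb above is simple integer arithmetic; a quick sketch (the ninode value here is a placeholder taken from the question — read the real one on your own system):

```shell
# Suggested manual vx_ninode, ~90% of ninode (9284 is a placeholder value).
ninode=9284
vx_ninode=$(( ninode * 90 / 100 ))   # integer math, rounds down
echo "suggested vx_ninode: $vx_ninode"
# -> suggested vx_ninode: 8355
```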

Also, as I said before, sar does not report on VXFS (vx_ninode), only HFS (ninode). So those numbers are only useful if your system is mostly HFS.

Frequent Advisor

Re: sar -v and ninode results


Like most, I've only got one HFS file system, and I can't believe /stand is using 9284 out of 9284 inodes. So just how reliable is this sar -v number? Are some vxfs values being added in?

If you are saying that sar -v is only accurate for servers with HFS file systems and no VXFS file systems then I would agree. But this is 11.00 and not 9.0 or earlier.

Ultimately I'm looking for a relationship between our problems and a maxed-out ninode report, and it appears that there is none.
Yours, Mine and Yours
Frequent Advisor

Re: sar -v and ninode results

Still looking for that 10-pointer reply. No takers? Oh well.
Yours, Mine and Yours
Steven E. Protter
Exalted Contributor

Re: sar -v and ninode results

A rabbit? Not sure.

You probably want to collect more data. I'm attaching a script that does that and lets you set the time period for collection. It runs in the background.

You should know that leaving the kernel parameter vx_ninode at 0 (zero) can vastly distort your system. This lets the system decide what level to set it at and change it whenever it wants. This is a vast waste of resources and can cause nasty results.

I'm pasting in a link that helps with basic performance tuning. It covers this and other topics. It's a good read, and I know the author; he's a whiz.
Doc ID: UPERFKBAN00000726

Steven E Protter
Owner of ISN Corporation
Bryan D. Quinn
Respected Contributor

Re: sar -v and ninode results

I don't expect this to get a bunny, but I thought I might throw out a similar experience we are encountering. We have a couple of servers that are all application servers for our Oracle/SAP system. In the past couple of weeks they have started running pretty thin on memory, and in some situations have had minor performance problems on one or two of the boxes. I checked kernel parameters between the boxes and they look pretty much the same across the board, which is what I would expect since they are all doing the same thing and are pretty much identical. Anyway, I read that there can be an issue with not setting vx_ninode and letting the vxfs inode cache change dynamically. I also read about setting vx_noifree to 1 (turning it ON, so to speak). Apparently, instead of allowing entries in the cache to be freed up, this lets the cache table fill up and then reuses the entries as needed, which supposedly hinders fragmentation.
So, to test this I took our TestPRD box (which was having some similar issues), changed our vx_ninode setting to 90% of our ninode value, and set vx_noifree to 1. It has been running about a week with these settings and my memory utilization has stabilized. Before I made these changes, memory utilization was higher and it did not grow and shrink; it only grew. It acted like a memory leak. It would eventually get so bad that the box would start swapping, and then of course vhand kicked in and ate up CPU. So I think that is what is happening on our app servers, but they get rebooted every weekend and don't have a chance to get that bad.
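For reference, the changes described above would look roughly like this with kmtune on HP-UX 11.x. This is a hedged sketch only: the 8355 value is a hypothetical 90%-of-ninode figure, and a kernel rebuild and reboot are needed before new values take effect.

```shell
# Sketch only: verify parameter names and syntax on your HP-UX release.
kmtune -q ninode           # check the current ninode value first
kmtune -s vx_ninode=8355   # ~90% of ninode (8355 is a placeholder)
kmtune -s vx_noifree=1     # reuse cache entries instead of freeing them
```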
Anyway, I just thought I would drop this in your ear. Maybe you will see some similarities to your situation.

Hope this helps!
Sridhar Bhaskarla
Honored Contributor

Re: sar -v and ninode results


As pointed out before, ninode is only for the HFS filesystem, and there is absolutely no need to have it higher than 500-1000 if you have no HFS filesystem other than /stand. You can simply delete the formula and hardcode the number there. The formula is no longer valid since the introduction of vxfs's vx_ninode.

You are right that sar is reporting a wrong value, as it cannot keep track of closed files whose inodes would become usable again.

On the other hand, vx_ninode is dynamic if set to 0 for vxfs filesystems, and its value is not reflected in sar -v output. You can get it by using

echo vxfs_ninode/D | adb -k /stand/vmunix /dev/mem


You may be disappointed if you fail, but you are doomed if you don't try
Dietmar Konermann
Honored Contributor

Re: sar -v and ninode results


Let me try to shed some light on this.

The inod-sz metric that sar -v shows is somewhat inconsistent.

The maximum value is taken from pstat_getstatic's pst_max_ninode metric. This is equal to ninode, which is clearly the size of the HFS-only inode cache.

The current value is pstat_getdynamic's psd_activeinodes metric. This is the number of active inodes for BOTH HFS and VxFS! The printed value is restricted to be no larger than pst_max_ninode. :) So it is clear why you see a current==max situation on most of the systems.
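The clamp can be sketched numerically with placeholder values (the 31250 active count is hypothetical; 9284 is the ninode figure from the question):

```shell
# sar -v prints min(psd_activeinodes, pst_max_ninode), so VxFS activity
# beyond the HFS-only ninode ceiling is hidden.
active=31250   # psd_activeinodes: active inodes, HFS and VxFS combined
max=9284       # pst_max_ninode: ninode, the HFS-only cache size

if [ "$active" -lt "$max" ]; then shown=$active; else shown=$max; fi
echo "inod-sz as printed: $shown/$max"
# -> inod-sz as printed: 9284/9284
```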

In summary, you can ignore this sar report entirely. OK, it may be useful if you want to check ninode. :)

Since only /stand is usually HFS, there's no need to increase it at all. I usually set it to a fixed value of, say, 1024.

The VxFS inode cache size is dynamic. However, the maximum size can be restricted using the vx_ninode tunable. By default this is 0, which means the maximum is derived from physmem at boot time. For large-memory systems it is often a good idea to restrict vx_ninode to, say, 30000.

Best regards...

"Logic is the beginning of wisdom; not the end." -- Spock (Star Trek VI: The Undiscovered Country)