Workload and Resource Management

ninode usage on 11.0

Occasional Advisor

ninode usage on 11.0


Could somebody confirm whether a high NINODE usage is indicative of problems? I have opinions from others here that high usage is linked to high CPU utilisation, and that NINODE should be increased. My understanding (and also that of some local HP consultants) is that NINODE is simply a cache that fills and frees as it goes. Are there any recommendations out there on how NINODE *should* be configured?

thanking you!
Those that rely on technology are as lost as those who create it.
Acclaimed Contributor

Re: ninode usage on 11.0


In and of itself, high ninode table usage is not a problem. 'ninode' represents the number of slots in the inode table, and thus the maximum number of open inodes that can be in memory at any given time. The table is used as a cache, with the most recently opened inodes kept in memory. Since each unique open file has an open inode associated with it, the larger the number of unique open files, the larger ninode should be.
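
A quick way to watch inode-table occupancy is `sar -v`, whose `inod-sz` column shows used/maximum slots. A minimal sketch; the sample output below is illustrative (canned, not from a real system) so the parsing step can be shown end to end:

```shell
# On a live system: sar -v 5 3   (three 5-second samples)
# Here we parse a canned sample line so the example is self-contained.
sample='12:00:01 text-sz  ov  proc-sz  ov  inod-sz  ov  file-sz  ov
12:00:06   N/A   N/A  120/276   0  472/476   0  615/790   0'

# Extract used and total inode slots from the inod-sz field
# and report percentage occupancy.
echo "$sample" | awk 'NR==2 {
    split($6, a, "/")          # inod-sz field, e.g. 472/476
    printf "inode slots: %d of %d (%.0f%% full)\n", a[1], a[2], 100*a[1]/a[2]
}'
```

Remember that a near-100% figure here is expected on a healthy system, since the table is a cache that retains recently closed inodes.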

See this document for more related information:

Honored Contributor

Re: ninode usage on 11.0

As you sort of figured out yourself, ninode is nothing more than a cache of inode information in memory. High usage does not mean a cpu problem. There are a variety of benefits and drawbacks associated with changing it though. If you could give us a brief desciption of your environment, we could be a lot more help on this.
Honored Contributor

Re: ninode usage on 11.0

Unfortunately, there is no way to determine how many of the inode entries in RAM are reusable. As mentioned, it is a cache of current *and* recently opened inodes, and its purpose is to eliminate directory searches for these files. Therefore, it will appear to be 100% full a few minutes after reboot, but this is normal. Only when the ninode cache is actually full of unique open files will the message "vmunix: inode: table is full" appear on the console.
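
Since that console message is also written to syslog, a genuine overflow (as opposed to the normal 100%-cached state) can be checked for after the fact. A minimal sketch, using a canned log excerpt rather than a live /var/adm/syslog/syslog.log so it stands alone:

```shell
# On a live system you would grep the syslog file, e.g.:
#   grep "inode: table is full" /var/adm/syslog/syslog.log
# The excerpt below is illustrative.
log='Mar  4 09:12:33 hpux1 vmunix: inode: table is full
Mar  4 09:12:35 hpux1 vmunix: file: table is full'

if echo "$log" | grep -q "inode: table is full"; then
    echo "ninode overflow seen: consider raising ninode"
else
    echo "no inode-table overflow logged"
fi
```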

However, ninode applies to HFS filesystems only, and with 10.xx and now 11.0, VxFS is the filesystem of choice, with /stand left as the only (required) HFS filesystem. So ninode only needs to be large enough to accommodate all the unique files that might be opened at the same time in /stand, plus NFS files and directories. It has nothing to do with CPU utilization.

Now if you have not changed SAM's formula for ninode, it is likely *way* too large (10,000 to 20,000). You can safely reduce this to a fixed value, from perhaps 500 (for a system with no NFS and all VxFS filesystems except /stand) to perhaps 4,000 for a busy NFS system. ninode is configured in the kernel parameters section.
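
For reference, ninode can also be inspected and changed from the command line with kmtune rather than through SAM (flag names here are from memory of the 11.x kmtune(1M) man page; verify locally). ninode is a static tunable on 11.0, so a kernel rebuild and reboot are required for a change to take effect:

```shell
# Query the current ninode setting:
kmtune -q ninode

# Record a new value for the next kernel build
# (2048 is an illustrative value, not a recommendation):
kmtune -s ninode=2048

# Then rebuild the kernel (see mk_kernel(1M)) and reboot
# for the new value to take effect.
```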
Honored Contributor

Re: ninode usage on 11.0

In my experience, the ninode parameter must be set to high values when using HFS filesystems, on both 10.20 and 11.00.

If sar -v reports full utilization of the inode table, system performance is affected. The problem is obvious when you run top or swapinfo and response time is very poor, sometimes minutes. In this situation, CPU utilization in user mode is abnormally high.

Configuring a higher ninode value resolves this poor performance.

The DNLC (directory name lookup cache) reported by gpm appears to be sized as a function of ninode.