Use of HFS inode cache
08-28-2005 04:57 PM
08-28-2005 05:26 PM
Re: Use of HFS inode cache
I don't think there is a big payoff in lowering that inode figure. If you lower it and an inode the system needs is not cached, it must be read from disk.
This is an excellent performance tuning document, though a more modern version may be hiding out there:
http://www1.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&docId=200000077186712
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
08-28-2005 05:34 PM
Re: Use of HFS inode cache
The ninode tunable used to configure the HFS inode cache size can also impact the size of the Directory Name Lookup Cache (DNLC).
The dependency on the ninode tunable is reduced with the introduction of 2 tunables:
ncsize - Introduced with PHKL_18335 on 10.20. Determines the size of the Directory Name Lookup Cache independent of ninode.
vx_ncsize - Introduced in 11.0. Used with ncsize to determine the overall size of the Directory Name Lookup Cache.
While you can tune ncsize independently of ninode, the default value is still dependent on ninode and is calculated as follows:
(NINODE + VX_NCSIZE) + (8 * DNLC_HASH_LOCKS)
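As a quick sketch of that formula, the snippet below plugs in sample values and prints the resulting default ncsize. The tunable values used here (ninode=4880, vx_ncsize=1280, dnlc_hash_locks=152) are illustrative assumptions, not official defaults; substitute the values your own system reports (e.g. via kmtune) before drawing any conclusions.

```shell
# Compute the default DNLC size from the formula above.
# All three input values below are assumed sample values, not HP-UX defaults.
ninode=4880
vx_ncsize=1280
dnlc_hash_locks=152

ncsize=$(( (ninode + vx_ncsize) + (8 * dnlc_hash_locks) ))
echo "default ncsize = $ncsize"    # prints: default ncsize = 7376
```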
Beginning with JFS 3.5 on 11.11, the DNLC entries for JFS files are maintained in a separate JFS DNLC.
Have a look at the following doc for details:
http://docs.hp.com/en/5580/Misconfigured_Resources.pdf
Sudeesh
08-29-2005 12:14 AM
Re: Use of HFS inode cache
The formula provided by SAM templates is way too high for most systems. Even for NFS, you can leave ninode between 1000 and 4000. It does not need to be larger unless the majority of your filesystems are HFS rather than VxFS. To find the filesystem type:
mount -p | awk '{print $3"\t",$2}'
Note that sar and glance will report the cache as full or nearly full most of the time. This is not meaningful, because the report does not indicate how many entries can be reused, and there is no metric to determine that number. So leave ninode at a fixed value, certainly not 10000 or 40000 as the formula might specify; an extra-large value wastes RAM inside the kernel.
Bill Hassell, sysadmin
08-29-2005 02:12 AM
Re: Use of HFS inode cache
08-30-2005 03:07 AM
So if you set ninode = 8000 on a typical 11.11 system, it won't get fully used (unless you create a bunch of extra vmunix files).
As a practical matter, I would set it to 1000 and forget it unless you need to tune NFS--then refer to the NFS for HP-UX book by Dave Olker.
Bill Hassell, sysadmin
08-30-2005 12:18 PM
Re: Use of HFS inode cache
sar -v shows the system-wide inode cache, which includes both HFS and VxFS inode entries.
I believe the reason is that the ninode parameter still comes into the picture for both HFS and VxFS in the DNLC calculation.
Thus the DNLC is common to HFS and VxFS, even though each has its own inode cache.
In JFS 3.5, a separate DNLC is maintained for VxFS files. I suspect that even then the sar -v report still shows system-wide inode-cache figures.
You can check the current number of inodes in the jfs inode cache using
For JFS 3.3
adb -k /stand/vmunix /dev/mem
> vx_cur_inodes/D
For JFS 3.5
vxfsstat -v / | grep curino
08-30-2005 04:57 PM
Re: Use of HFS inode cache
Bill, if you are right that actual inode usage is greater than what bdf reports, that would explain the discrepancy. But a search did not turn up anything to back up your assertion. I know that large files exceed the capacity of their initial inode and must use continuation inodes, but I did not think another inode was needed for every 8KB. Your formula would give rather uniform inode usage, whereas my observation has been that far fewer inodes are needed for a small number of large files.
08-31-2005 12:52 AM
Re: Use of HFS inode cache
At the same time, the filesystem block size (not to be confused with the LVM block size) is established. The default is 8K; the range is 4K through 64K. If a file is smaller than 8K, inode pointers are conserved by assigning it a fragment of a block (typically 1K).
Inside the inode there are 15 pointers. The first 12 point directly to blocks of the file. The 13th is an indirect pointer: it points to a pointer-only block containing 2048 pointers (assuming all HFS defaults, specifically an 8K block size). Once a large file has used up that pointer block, the 14th slot comes into play as a double indirect pointer: it points to a pointer block whose 2048 pointers each point to another pointer block, giving 2048*2048 data-block pointers. And for really big files, the 15th slot is a triple indirect pointer: it points to a pointer block whose 2048 pointers each point to a second-level pointer block, whose 2048 pointers each point to a third-level block of 2048 pointers to 8K data blocks, or 2048*2048*2048 pointers in all.
So the largest possible HFS file is FS_BLOCKSIZE * (12 + 2048 + 2048^2 + 2048^3). That theoretical maximum file size is larger than the largest supported HFS filesystem.
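Working that formula through with the default 8K block size (each 8K pointer block holds 2048 4-byte pointers) gives the theoretical maximum, as this small sketch shows:

```shell
# Maximum theoretical HFS file size with the default 8K block size.
# 12 direct blocks, plus single, double, and triple indirect pointer trees.
bs=8192      # filesystem block size in bytes
ptrs=2048    # 4-byte pointers per 8K pointer block

max_bytes=$(( bs * (12 + ptrs + ptrs*ptrs + ptrs*ptrs*ptrs) ))
echo "$max_bytes bytes"    # 70403120791552 bytes, roughly 64 TB
```

This is why the theoretical per-file maximum dwarfs the largest supported HFS filesystem, as noted above.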
Here's a really useful document from 10 years ago: http://uwsg.ucs.indiana.edu/usail/peripherals/disks/adding/HPfs.html
So technically, there are fewer than 100 active inodes in /stand. I believe (but don't have any docs handy) that the pointer blocks are also loaded into the cache as each file is accessed, which fills the cache when backing up /stand. The inode and the pointers remain in the cache (even if the file is no longer open) until a new file is opened and the entries are reused. The purpose of the inode cache is to avoid reading the disk to discover the address of each block of a file.
Bill Hassell, sysadmin