
bdf -i

 
SOLVED
derek b smith_1
Regular Advisor

bdf -i

my output of bdf -i is making me worry:

svoieprd1:{root}>:/usr/local/bin/scripts>bdf -i
Filesystem kbytes used avail %used iused ifree %iuse Mounted on
/dev/vg00/lvol3 204800 175664 29040 86% 13586 910 94% /
/dev/vg00/lvol1 298928 60864 208168 23% 67 32701 0% /stand
/dev/vg00/lvol8 3170304 2387080 778848 75% 33781 24459 58% /var
/dev/vg00/lvol7 3145728 1829072 1306392 58% 42458 41126 51% /usr
/dev/vg00/lvol6 409600 50752 356232 12% 337 11183 3% /tmp
/dev/vg00/lvol5 4194304 2460192 1721416 59% 52433 54191 49% /opt
/dev/vg00/lvol4 512000 59608 449072 12% 2271 14113 14% /home
/dev/vg02/lvhome1 512000 11470 469401 2% 1211 125129 1% /home1
/dev/vg02/lvhci 16777216 8036771 8218158 49% 80166 2185110 4% /hci

as you can see / is at 94% for %iuse
my vx_ninode is set to 30,000
what could be the problem, and do I need to bump ninode up from 1000?

thx...
derek
vinod_25
Valued Contributor

Re: bdf -i

hi derek

have you tried sar -v 5 5?
do you get something like this:

09:14:29 text-sz ov proc-sz ov inod-sz ov file-sz ov
09:14:34 N/A N/A 345/1500 0 10000/10000 0 4014/28010 0
09:14:39 N/A N/A 343/1500 0 10000/10000 0 4012/28010 0
09:14:44 N/A N/A 343/1500 0 10000/10000 0 4012/28010 0
09:14:49 N/A N/A 343/1500 0 10000/10000 0 4013/28010 0
09:14:54 N/A N/A 343/1500 0 10000/10000 0 4015/28010 0

It is not uncommon to see what appears to be a 'maxed out' value in the inod-sz column. The number to the left of the / is the number of inodes open in the inode table cache, and the number on the right is the maximum number of inodes that can be open in the inode table cache, determined from the value of ninode in the running kernel.

The HP-UX OS actually tries to keep this value at the maximum for performance reasons. As more inodes are cached, the inode retrievals (on average) will be faster. Seeing this value in inode-sz to be equal to your ninode value is not something to be overly concerned about. The system will maintain the cache and add/delete inode entries as needed.

This is unlike the proc-sz and file-sz columns from the sar output, which show hard limits. When those limits are reached, new processes cannot be started and additional files cannot be opened. The inod-sz column refers to a cached table, and having this value 'maxed out' should not prevent users on the system from extracting inode information for inodes not in the cache. That said, tuning ninode to a smaller or larger value to allow for a smaller or larger inode cache table can have a negligible effect on performance in some environments.
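For illustration, the cache utilisation can be pulled out of a `sar -v` line with a little awk. This is only a sketch: the here-string stands in for one line of real `sar -v 5 5` output, and the field position of inod-sz assumes the column layout shown above.

```shell
# One sample line of `sar -v` output (from the thread above); on a real
# HP-UX box you would pipe `sar -v 5 5` in instead of this string.
sar_line='09:14:34 N/A N/A 345/1500 0 10000/10000 0 4014/28010 0'

# Field 6 is inod-sz ("used/max"): split on "/" and print percent used.
echo "$sar_line" | awk '{
    split($6, a, "/");
    printf "inode cache: %d of %d (%d%%)\n", a[1], a[2], 100 * a[1] / a[2];
}'
```

With the sample line above this reports the cache as 100% used, which, per the explanation, is expected behaviour rather than a fault.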

regards

Vinod K
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: bdf -i

I doubt you have a real problem. Unless you specifically told it to create a filesystem with a fixed number of inodes (not the default), the inodes are allocated dynamically. Run this command for each filesystem in question.

mkfs -F vxfs -m /dev/vg02/rlvhome1

The -m will simply display the arguments used to create the filesystem, but make sure that you specify -m or you will create a new filesystem. You can do this with the filesystem mounted. See man mkfs_vxfs for details.
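The output of `mkfs -m` is a single option string, so the ninode setting can be picked out with standard tools. A sketch, using a sample line from later in this thread rather than live `mkfs` output:

```shell
# Sample output of `mkfs -F vxfs -m /dev/vg00/lvol3` (as posted below);
# on a live system you would capture the command's output instead.
opts='mkfs -F vxfs -o ninode=unlimited,bsize=8192,version=4,inosize=256,logsize=256,nolargefiles /dev/vg00/lvol3 204800'

# Extract just the ninode= value from the comma-separated -o option list.
echo "$opts" | sed 's/.*ninode=\([^,]*\).*/\1/'
```

Seeing `unlimited` here is what tells you the filesystem allocates inodes dynamically.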
If it ain't broke, I can fix that.
Bharat Katkar
Honored Contributor

Re: bdf -i

Hi derek,
as Clay pointed out, inodes are assigned dynamically, and my output for root says something like this:
mkfs -F vxfs -o ninode=unlimited,bsize=8192,version=5,inosize=256,logsize=2048,nolargefiles /dev/vg00/lvol3 5218304

and bdf -i shows:
/dev/vg00/lvol3 5218304 262248 4917392 5% 3929 154855 2% /

Hope that helps.
Regards,
You need to know a lot to actually know how little you know
Bharat Katkar
Honored Contributor

Re: bdf -i

And the values of ninode and vx_ninode are:

ninode 4880 Default
vx_ninode 0 Default

Regards,
You need to know a lot to actually know how little you know
Florian Heigl (new acc)
Honored Contributor

Re: bdf -i

You shouldn't have a problem; vxfs usually is able to add inode tables on the fly as long as there is space in the filesystem (when it's full You won't need extra inodes anyhow :)

the unlimited-inode thing is version dependent, but as long as You're running at least vxfs version 3, everything should be fine.
yesterday I stood at the edge. Today I'm one step ahead.
derek b smith_1
Regular Advisor

Re: bdf -i

ok so when lvol3 or / hits 100% in bdf -i the system will not panic? I do use sar -v a lot but was not aware of the mkfs -m option. I deduce that the output from mkfs -m proves that I am ok?

again my ninode is 1000 and my vx_ninode is 30000

thanks
A. Clay Stephenson
Acclaimed Contributor

Re: bdf -i

I think you are fine. I am reserving judgment until you specifically state that / is a vxfs filesystem and that you saw "unlimited" when you ran mkfs -F vxfs -m.

If your only hfs filesystem is /stand then ninode = 1000 is fine.
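One quick way to confirm which filesystems are hfs is to check /etc/fstab. A sketch against a sample fstab fragment (on the live box you would read the real file, and the entries shown here are assumed for illustration):

```shell
# Sample /etc/fstab fragment; on a real system read /etc/fstab itself.
fstab='/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand hfs defaults 0 1'

# List the mount points whose filesystem type (field 3) is hfs.
echo "$fstab" | awk '$3 == "hfs" { print $2 }'
```

If the only mount point printed is /stand, then per the advice above the small ninode value is not a concern.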
If it ain't broke, I can fix that.
Florian Heigl (new acc)
Honored Contributor

Re: bdf -i

vx_ninode is a kernel inode cache, if I remember correctly, and ninode is the number of inodes a process can open.

standard values HP defined for our env:
vx_ninode 30000
ninode (8*NPROC+2048) # yes, You should really consider raising it. I don't know if You can kmtune it at runtime. Try it; if it doesn't work, You'll have to regen a kernel.
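The formula above is easy to evaluate. A quick sketch; NPROC=1500 is only an example taken from the proc-sz limit in the sar output earlier in the thread, so substitute your own kernel's NPROC:

```shell
# Evaluate the HP default formula ninode = 8*NPROC + 2048.
# NPROC=1500 is an assumed example value, not derek's actual kernel setting.
NPROC=1500
ninode=$((8 * NPROC + 2048))
echo "ninode = $ninode"
```

With NPROC=1500 this suggests ninode around 14048, well above the 1000 in question.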

I can't tell if HP-UX will panic when / has no more inodes available - but I can tell it won't panic if / is full (space usage) otherwise, at least in my experience.
yesterday I stood at the edge. Today I'm one step ahead.
derek b smith_1
Regular Advisor

Re: bdf -i

yes / is indeed a vxfs filesystem

svoieprd1:{root}>:/>mkfs -F vxfs -m /dev/vg00/lvol3
mkfs -F vxfs -o ninode=unlimited,bsize=8192,version=4,inosize=256,logsize=256,nolargefiles /dev/vg00/lvol3 204800

I have run ninode at 476 before with vx_ninode at 30000 and never had any issues, but bdf -i never reported 94% for iuse. I think 100% is fine.