
Inodes !!!!!!

 
SOLVED
KapilRaj
Honored Contributor

Inodes !!!!!!

In a filesystem (Online JFS), I had 20,000 files, and later some of the files got housekept (deleted). Would df -i still show 20,000 inodes, or would the number of allocated inodes go down?

Kaps
Nothing is impossible
8 REPLIES
Robert-Jan Goossens
Honored Contributor

Re: Inodes !!!!!!

Hi Kaps,

In my opinion it should go down.
I just did a test on an 11.0 box and removed 50 logfiles.

bdf -i /home
Filesystem          kbytes    used   avail %used  iused  ifree %iuse Mounted on
/dev/vg01/lvol10   4374528 4067213  296998   93%  13850  76826   15% /home

bdf -i /home
Filesystem          kbytes    used   avail %used  iused  ifree %iuse Mounted on
/dev/vg01/lvol10   4374528 4067161  297060   93%  13798  76838   15% /home

Hope it helps,
Robert-Jan
Steven E. Protter
Exalted Contributor

Re: Inodes !!!!!!

Inodes are created dynamically when they are needed. They might not get deallocated right away.

Inode usage is not a big deal because the OS will create as many as it needs; there is no fixed limit to how many inodes can be created.

Answer 1: don't worry about it.
Answer 2: reboot the system and see if an adjustment is made.
Answer 3: back up the data on the filesystem, newfs -F vxfs /dev/vg00/rlvol1 the fs, and restore the data.
Answer 4: if you have Online JFS, fsadm the fs to defragment it and see if that helps (a sketch follows this list).
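As a rough sketch of Answer 4 (assuming the Online JFS product is installed; /home here is just a placeholder mount point), the fsadm reorganizer can be run against the mounted filesystem like this:

fsadm -F vxfs -E /home    # report extent fragmentation
fsadm -F vxfs -e /home    # reorganize (defragment) extents
fsadm -F vxfs -D /home    # report directory fragmentation
fsadm -F vxfs -d /home    # reorganize (compact) directories

Being able to do this while the filesystem stays mounted is the whole point of the Online JFS license.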

Have a nice weekend.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
KapilRaj
Honored Contributor

Re: Inodes !!!!!!

OK....

Thanks for that. Does anybody else have a different opinion? I am pissed off after transferring 1,369,374 files from one fs to a different filesystem, and I got confused because the destination filesystem still shows a lower percentage of inodes used!

Kaps
Nothing is impossible
Bruno Ganino
Honored Contributor

Re: Inodes !!!!!!

#stat -i
It should give the number of inodes.
Bruno
Torino (Turin) +2H
KapilRaj
Honored Contributor

Re: Inodes !!!!!!

Steven,

I am sorry, I cannot agree with the point that inodes are allocated dynamically and have no limit! There is a limit even for a JFS; it is the "Number Of Bytes Per Inode" setting which decides it.

Theory says inodes are allocated on demand and have no limit... but in reality, no!

Let's wait for others' comments as well.

Thanks for your suggestion... (Can't reboot this bloody box as it affects business. Can you hear a sysadmin crying?!)


Kaps........
Nothing is impossible
Bill Hassell
Honored Contributor
Solution

Re: Inodes !!!!!!

When you copy files from one filesystem to another you will almost *ALWAYS* see a different amount of space occupied as well as a different number of inodes used. The reasons are twofold: directories and sparse files. Let's start with directories:

- Directories are just special files that hold information about filenames and inode numbers. When a directory is first created, its size is 96 bytes. After creating a bunch of files inside the directory, it will grow to accommodate the additional entries. However, if you create 10,000 files in the directory and then remove all but 1 file, the directory will still be several kilobytes in size. The reason is that there are now a bunch of empty slots in the directory, but the overhead needed to compress the directory after each file is removed would be enormous, so the directory is left as-is with lots of empty slots waiting to be reused. Now if you copy this directory using cp -r, or use tar or cpio or any other backup program to copy the directory to a new location, the directory will be created as 96 bytes and the one file fits nicely in this new directory. But the occupied space shown by du or bdf will differ between the original (which is bigger) and the copy (which is smaller). The result is perfectly OK though.
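As a rough illustration of that directory behavior (the path is just a scratch location, and the loop takes a while to run):

mkdir /var/tmp/dirtest
cd /var/tmp/dirtest
i=0
while [ $i -lt 10000 ]
do
  touch f$i
  i=`expr $i + 1`
done
ls -ld /var/tmp/dirtest    # the directory has grown far past its initial 96 bytes
rm f*                      # remove every file again
ls -ld /var/tmp/dirtest    # the size stays large; the empty slots are kept for reuse

Copying that now-empty directory elsewhere with cp -r, tar or cpio recreates it at the minimum size, which is exactly why the copy can show less space used than the original.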

- Sparse files: this is a file created by using lseek to write a record, then skipping ahead and writing another record at position 1,000,000. The resulting file contains 2 valid records and 999,999 records full of nulls. On the original system, the space will show up in wc and ls -l, but the undefined records are not stored, nor counted by bdf or du. Depending on the size of the file and the sparseness, the difference between apparent and actual size may be VERY large.

Create your own sparse file with:

dd if=/etc/issue of=/var/tmp/sparse bs=2048k seek=1

where you will see the original file is just a few dozen bytes, the result with ls -l or wc -c shows a 2 MB file, but du will show the file as occupying just a bit more than the original /etc/issue file. A cp of the file will create a new file that is the same size (using ls -l or wc -c), but du will now show a MUCH larger size than the original file because the copy occupies real disk blocks for the whole 2 MB. However, the original and the copy diff exactly the same, and programs cannot tell any difference between the two files.
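To see that effect for yourself (paths follow the dd example above; the .copy name is just for illustration):

ls -l /var/tmp/sparse                        # apparent size: a bit over 2 MB
du -k /var/tmp/sparse                        # blocks actually allocated: only a few KB
cp /var/tmp/sparse /var/tmp/sparse.copy
ls -l /var/tmp/sparse.copy                   # same apparent size as the original
du -k /var/tmp/sparse.copy                   # but now roughly 2 MB is really allocated
diff /var/tmp/sparse /var/tmp/sparse.copy    # no difference in content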

So in summary, you can't use bdf or du (or df) to verify a directory copy. Instead, use find to count the files and the directories and if necessary, use ls -l to find the size of both source and destination files and compare those numbers.
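For example, one way to compare the two trees (the source and destination paths are placeholders):

find /source_fs -type f | wc -l    # number of files in the source
find /dest_fs   -type f | wc -l    # number of files in the copy
find /source_fs -type d | wc -l    # number of directories in the source
find /dest_fs   -type d | wc -l    # number of directories in the copy

If the counts match, any remaining difference in the bdf/du numbers is just the directory-slot and sparse-file effects described above.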


Bill Hassell, sysadmin
Bill Hassell
Honored Contributor

Re: Inodes !!!!!!

Almost forgot: number_of_bytes_per_inode is only meaningful for HFS filesystems where the number of inodes is fixed at creation (newfs or mkfs) and cannot be increased. JFS uses a very different method to allocate inodes and unfortunately, bdf (and df) are just guessing when they report % i-nodes used. The VxFS filesystem does indeed create inodes on the fly and is limited only by the disk space. That's why the man pages for newfs_hfs and mkfs_hfs are different from newfs_vxfs and mkfs_vxfs. From the man page for mkfs_vxfs:

"Inode allocation is done dynamically. There are a minimum number of inodes allocated to the file system by mkfs, and any other inode allocations are done on an as-needed basis during file system use."


Bill Hassell, sysadmin
KapilRaj
Honored Contributor

Re: Inodes !!!!!!

Thanks, much appreciated! Your comments make this thread worth reading!

Kaps
Nothing is impossible