
Mark Harshman_1
Regular Advisor

inode question

I have a filesystem approximately 260 GB in size with over 300,000 files. The inode count on it is over 700,000. We've been having some occasional system performance issues, and I don't have a good understanding of how inode contention in a filesystem this large may or may not be affecting things. This is running on an HP-UX 11i server. Any comments about inodes, general or specific, are welcome. Thanks.
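For reference, a quick way to see those counts on HP-UX (the /reports mount point below is illustrative):

bdf -i /reports                        # -i adds inode used/free columns per filesystem
find /reports -type f -print | wc -l   # counts regular files under the mount point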
Never underestimate the power of stupid people in large groups
8 REPLIES
RAC_1
Honored Contributor
Solution

Re: inode question

That is certainly a very high file count. Are all of those files in one directory? When you experience the performance issues, what exactly is the bottleneck?

When you are having performance issues, it would be good to collect some statistics:

sar -a 5 10
sar -v 5 10
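(sar -a reports file-access activity -- iget/s, namei/s and directory block reads -- where sustained high namei/s rates point at directory-lookup overhead; sar -v reports the process, inode and file table sizes, and a non-zero value in an "ov" overflow column means that table is too small.)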

Anil
There is no substitute to HARDWORK
G. Vrijhoeven
Honored Contributor

Re: inode question

Hi Mark,

This could create performance issues.
Large numbers of files in a directory can make listings slow (you could check the inode cache kernel parameter; see the sketch below).
Deep directory paths can also slow down performance.
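A couple of quick checks, as a sketch (the directory path is illustrative, and kmtune is the 11i v1 tool; later releases use kctune):

kmtune -q ninode                          # current inode cache setting for HFS filesystems
time ls -f /reports/big_dir > /dev/null   # -f skips sorting, so this times the raw directory scan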

Do you have performance problems on the filesystem?

Gideon
Prashant Zanwar_4
Respected Contributor

Re: inode question

Hi,

You will notice the problems in general use too, for example when you do an ls -altr or some search. That many files and used inodes will certainly hurt filesystem performance.
The practice we used was to keep a backup directory: move files more than so many days old into it, then run an archive_to_tape script to archive that backup to tape, which also deleted its contents. You may want to plan something like this; a sketch follows.
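A minimal sketch of that kind of housekeeping (the paths and the 30-day cutoff are illustrative, and archive_to_tape stands for the site script mentioned above):

cd /reports || exit 1
# move report files untouched for 30+ days into the backup area
# (note: this flattens paths, so it assumes unique file names)
find . -type f -mtime +30 -exec mv {} /reports_backup/ \;
# then archive the backup area to tape and purge it
archive_to_tape /reports_backup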

Thanks
Prashant
"Intellect distinguishes between the possible and the impossible; reason distinguishes between the sensible and the senseless. Even the possible can be senseless."
Mark Harshman_1
Regular Advisor

Re: inode question

The ninode kernel parameter is set to 12200. I've checked sar for the times of the I/O issues and have not seen any inode problems; we are using only about a third of the available space in the inode table. I also have not seen any unusual disk I/O. I don't even know that this is a server issue; I'm just trying to appease those who are unhappy. We also noticed some impact while backing up this directory, which tends to be a long process due to its size. This is just a report directory, but our application (JDE) is continuously dumping new files into this filesystem.
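One thing worth confirming is whether this is an HFS or a VxFS filesystem, since that determines whether ninode is even relevant; the device path below is hypothetical:

fstyp /dev/vg01/lvol4    # prints hfs or vxfs for the volume behind the mount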
Never underestimate the power of stupid people in large groups
Mark Greene_1
Honored Contributor

Re: inode question

Inodes are the unique numeric identifiers that the OS uses to identify each directory and file. If you have a large number of files in one directory *plus* a high rate of I/O among the files -- i.e., you are writing to a large number of files all in one directory in a short period of time -- you can run into what's called inode contention.

If you run fuser, as root, on a directory that you know has I/O activity on the files within it, you'll see what I mean. /dev is a typical example: fuser /dev will return a list of PIDs for all processes that have a lock on that directory.
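For example (output is indicative only):

fuser -u /dev    # -u adds the login name after each PID
# letters after each PID flag how the file is used, e.g. 'c' = current
# directory, 'o' = open file, 'r' = root directory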

What I don't know (and could not determine via a quick search on Google Groups) is whether a "deadly embrace" situation is still possible, whereby process 100 locks directory "A" and process 200 locks directory "B", and then each process reaches a point where it needs to lock the other directory but cannot obtain the lock, so each waits for the other. And because they are both waiting, nothing happens.

I couldn't find anything that indicated whether the number of locks that can be placed on a directory or file is finite. I would assume it has to be, but on a 64-bit system one would think the upper threshold is so high that it is effectively limitless.
the future will be a lot like now, only later
RAC_1
Honored Contributor

Re: inode question

How big are the files? The backups will be slow. Can you distribute the files across different directories, so that the backups will be a bit happier? See the sketch below.
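One way to spread them out, bucketing by the first character of the file name (the /reports path and the layout are assumptions, just a sketch):

cd /reports || exit 1
for f in *
do
    [ -f "$f" ] || continue
    d=`echo "$f" | cut -c1`      # bucket by first character of the name
    mkdir -p "sub_$d"
    mv "$f" "sub_$d/"
done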

Anil
There is no substitute to HARDWORK
A. Clay Stephenson
Acclaimed Contributor

Re: inode question

Ninode applies only to HFS filesystems, not to VxFS filesystems, whose inodes are dynamically allocated as needed. There is nothing wrong with a large number of files within a filesystem as long as the number of files within any single directory is not too large.
The other mistake that is commonly made is to use a filesystem as a database, i.e., using the directory tree as a substitute for a true database. Be aware that directory searches are linear, so on average the system has to search through n/2 directory entries to find the desired file.

300,000 files in a 260 GB filesystem is not considered a large number of files as long as a reasonable directory tree is in place. If you have 300,000 files in a handful of directories, then that is a bad thing.
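To put the linear search in perspective: with 300,000 entries in a single directory, an average lookup scans about 150,000 entries. A quick sketch to spot any offending directories (the mount point and the 5000-entry threshold are illustrative):

find /reports -type d -print | while read d
do
    n=`ls -f "$d" | wc -l`       # entry count, including . and ..
    [ $n -gt 5000 ] && echo "$n $d"
done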
If it ain't broke, I can fix that.
Mark Harshman_1
Regular Advisor

Re: inode question

Thanks for the info.
Never underestimate the power of stupid people in large groups