Operating System - HP-UX

max number of files in a directory

 
Troy Nightingale
Occasional Advisor

max number of files in a directory

I have a directory that has logs and pdf files saved by an application. The applications folks want to keep at least 30+ days of these files.

Is there a maximum number of files that a directory can hold within 11i? Is there a "working" maximum value that should be used?

I know there used to be a performance issue if you had >32000 files in 10.20. What is that number in 11i? or 11.23?

Thanks,

Troy
6 REPLIES
Kent Ostby
Honored Contributor

Re: max number of files in a directory

I've dug around and I haven't found a lot.

I did find an old ITRC post stating that there was a 20 to 30% improvement in filesystem performance going from 10.20 to 11.00 -- I'm not sure how much of that is related to improvements in directory-structure manipulation.
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
DCE
Honored Contributor

Re: max number of files in a directory


Troy,
there is no limit on the number of files in a directory that I am aware of.

However, as you noted, there can be performance issues when you get a large number of files in a directory. Lists, sorts, backups, etc. can and do take longer to complete. I personally have seen 100,000+ files in a directory (all under 1K in size). The app owner would not remove/archive the older files even though the backup took over 12 hours!

If you can, archive the older files out on a regular basis (back up to tape and remove from disk, or gzip older files, etc.) in order to keep the number of files at a reasonable level.
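A minimal sketch of that kind of housekeeping job, runnable from cron. The directory path and the 30/90-day windows below are illustrative placeholders, not values from Troy's setup:

```shell
#!/bin/sh
# archive_old: gzip files under $1 whose mtime is more than $2 days old
# (skipping anything already compressed)
archive_old() {
    dir=$1; days=$2
    find "$dir" -type f -mtime +"$days" ! -name '*.gz' -exec gzip {} \;
}

# purge_old: remove compressed files under $1 older than $2 days
# (presumably after they have already gone to tape)
purge_old() {
    dir=$1; days=$2
    find "$dir" -type f -mtime +"$days" -name '*.gz' -exec rm -f {} \;
}

# Hypothetical cron usage:
#   archive_old /app/logs 30    # compress logs older than 30 days
#   purge_old   /app/logs 90    # drop gzipped logs older than 90 days
```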
A. Clay Stephenson
Acclaimed Contributor

Re: max number of files in a directory

There is no limit to the number of files in a directory, and there really is no magic number that, if exceeded, is bad. Two major things are at play which impact performance: 1) directory searches are linear, so on average n/2 comparisons are needed to locate a specific entry; 2) when large directories must be updated, there can be significant delays caused by the need to freeze activity so that the integrity of the directory is maintained. This really means that a divide-and-conquer approach is the preferred method, so that no one directory ever grows so large as to cause a major bottleneck.

In most cases, whenever I see a directory grow beyond a few thousand entries (and even that is marginal), I begin to think there must be a better way. The really horrendous cases are those that try to substitute a filesystem and directory tree for a database.
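One common way to apply that divide-and-conquer idea is to bucket files into date-based subdirectories so no single directory's linear search grows unbounded. A sketch (the base path and monthly granularity are assumptions, not from the thread):

```shell
#!/bin/sh
# store_by_date: move file $2 into $1/YYYY/MM, creating the
# subdirectory bucket on demand -- one bucket per month keeps any
# single directory small even with years of retention.
store_by_date() {
    base=$1; file=$2
    sub=$(date +%Y/%m)          # e.g. 2006/03
    mkdir -p "$base/$sub"
    mv "$file" "$base/$sub/"
}

# Hypothetical usage as the app writes each log/PDF:
#   store_by_date /app/archive /app/spool/report.pdf
```

Daily (+%Y/%m/%d) or hash-prefix buckets work the same way if monthly volume is still in the tens of thousands.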
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: max number of files in a directory

As mentioned, there is no limit (millions, billions, etc). But trust me, you do NOT want to be the system manager for a system like this. Cleanup will be an absolute nightmare, requiring HOURS of time to find all the files that are older than N days. And if you use archaic backup tools like tar or cpio, each file must be opened, read, and closed, creating massive overhead to get a trivial amount of data to tape.

You can avert most of the issues by simply specifying that these massive directories must be stored on solid state disks (ie, massive RAM boxes that look like a disk). Just calculate the amount of storage needed and present the bill to the applications manager. This usually solves the problem quite rapidly, as the app folks are directed to find another way (other than a simple Unix directory) to store the logs and PDF files.


Bill Hassell, sysadmin
Ninad_1
Honored Contributor

Re: max number of files in a directory

Troy,

The max number of files you can have in a directory depends on the max inodes that the filesystem in which your directory resides can have.
The max inodes (which in turn is the max number of files in the filesystem) is determined when you create a filesystem using mkfs/newfs. This is determined by
max inodes = filesystem size / block size of filesystem.
You can use fstyp to check the parameters of your filesystem
e.g. fstyp -v /dev/vg_migr_01/lvol_uktech
(if you are using LVM)
or
fstyp -v /dev/dsk/c1t0d0
e.g. output
fstyp -v /dev/vg_migr_01/lvol_uktech
vxfs
version: 3
f_bsize: 8192
f_frsize: 8192
f_blocks: 128000
f_bfree: 96163
f_bavail: 95412
f_files: 24832
f_ffree: 24032
f_favail: 24032
f_fsid: 1074790402
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 6
f_size: 128000

From the output you can see the block size is f_bsize = 8192 bytes
Max no of files possible = f_files = 24832
Number of files that can still be created = f_favail = 24032 - this means I can still create 24032 more files in the filesystem.
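If you want to pull just that free-inode figure out of the fstyp output in a script, a small awk filter will do it. This is a sketch; the device path in the usage comment is a placeholder for your own volume:

```shell
#!/bin/sh
# free_inodes: read fstyp -v output on stdin and print the f_favail
# value (inodes still available for new files)
free_inodes() {
    awk '$1 == "f_favail:" { print $2 }'
}

# Hypothetical usage:
#   fstyp -v /dev/vg_migr_01/lvol_uktech | free_inodes
```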

Hope this helps,
Ninad
Bill Hassell
Honored Contributor

Re: max number of files in a directory

And just to clarify, the number of inodes is fixed on HFS filesystems and can be seen with the bdf -i command. However, for VxFS filesystems (the default filesystem for 10.xx and 11.xx versions of HP-UX), inodes are created automatically as needed so there is no practical limit except disk space.


Bill Hassell, sysadmin