Operating System - OpenVMS

Max files in a directory



Hi all,

Has anyone heard of an HP recommendation of having no more than 1000 files in a directory?

Honored Contributor

Re: Max files in a directory

This isn't an official HP support forum, but AFAIK there has never been such a recommendation made by DEC, Compaq, or HP.

There HAS been a recommendation around aggregate directory size: the (old!) 128-block threshold in the directory caching of (old!) releases. Beyond that size (and on those ancient OpenVMS releases), the performance of various directory operations suffered.

The directory size is based not on the NUMBER of entries but on the number of file versions and the lengths of the filenames; that is, on the aggregate size of the entries. It is this detail that means there isn't a "number of files" recommendation.

There are various recommendations about not doing anything foolish in how an application uses, partitions, sorts, and maintains its directories, and about upgrading to OpenVMS V7.2 or later. Or, far better, to the current release.

But no. No specific numbers of files AFAIK.

So who is passing out this particular fiction?

Re: Max files in a directory

It came from a person who was a System Admin for an unnamed company, offered as a best practice. I have been using VMS for over 20 years and had never heard of it, so I was just checking to see if my old brain was leaking.

Thanks Hoff!!

Re: Max files in a directory

Thanks to Hoff for his insight into this question.

Willem Grooters
Honored Contributor

Re: Max files in a directory

...as a best practice...

just states the right idea; it's not a limit. Though you may well use a much higher number of files, it's my (and others') experience that some good thinking about the number of files pays off :) (Think about searching files, backup, et al.)
Willem Grooters
OpenVMS Developer & System Manager
John Gillings
Honored Contributor

Re: Max files in a directory


How large a directory "should" become depends on how it is used. Remember, under the covers, the OpenVMS file system is really a single flat file - INDEXF.SYS, pointing to the individual files. Directories really only exist for humans to categorise files in a sensible way. From that perspective, once a directory gets bigger than one or two screens full of data, it becomes difficult to deal with.

If no one ever does a directory listing, it really doesn't matter how many files are in it.

Although the lookup mechanism is a kind of hash, there is some lookup performance sensitivity to the number of files. For application-based use, this doesn't matter much. For example, MAIL can easily manage huge numbers of files because it largely bypasses the directory, accessing files directly by FID (file identifier).

There are some implementation quirks in the OpenVMS directory mechanism. The worst is that a wildcard DELETE removes files in ascending alphabetical order, which is pathologically the worst possible sequence for large directories. Simple example: create a large number of files, say 10,000, in a directory and issue a DELETE *.*;* command. Come back tomorrow and it will probably still be running. If you instead issued 10,000 individual DELETE commands with the file names in reverse alphabetical order, the operation would complete MUCH faster, even though that's four orders of magnitude more image activations. This is just an unfortunate feature of the implementation, and it's unlikely to ever change.
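The reverse-order workaround above can be sketched in Python as a minimal illustration. This is illustrative only: on OpenVMS the real fix is issuing individual DELETE commands in reverse name order, not calling os.remove.

```python
import os
import tempfile

def delete_reverse_alphabetical(directory):
    """Delete files in reverse alphabetical order.

    Mirrors the workaround described above: removing the alphabetically
    last entry first means the directory shrinks from the end, instead
    of repeatedly shuffling the remaining records forward.
    """
    for name in sorted(os.listdir(directory), reverse=True):
        os.remove(os.path.join(directory, name))

# Small demonstration with a handful of files in a scratch directory.
d = tempfile.mkdtemp()
for name in ("alpha.dat", "bravo.dat", "charlie.dat"):
    open(os.path.join(d, name), "w").close()

delete_reverse_alphabetical(d)
print(os.listdir(d))  # -> []
```

The sorted(..., reverse=True) call is the whole trick: the iteration order, not the per-file delete, is what changes the cost on OpenVMS.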

This type of behaviour depends on the size of the directory file, which is related to the number of files, the length of the filenames and the numbers of versions.
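The point that directory size tracks name lengths and version counts, not the raw file count, can be shown with a toy estimator. The per-entry overhead figures below are illustrative assumptions, not the real Files-11 on-disk record layout; only the 512-byte block size is taken from the text.

```python
BLOCK_BYTES = 512  # OpenVMS disk block size

def estimate_directory_blocks(entries, header_bytes=8, version_bytes=8):
    """Very rough directory-file size estimate, in blocks.

    `entries` is a list of (filename, version_count) pairs. The
    header_bytes and version_bytes overheads are hypothetical numbers
    chosen for illustration; the takeaway is that size grows with
    filename length and version count, not the file count alone.
    """
    total = 0
    for name, versions in entries:
        total += header_bytes + len(name) + versions * version_bytes
    return -(-total // BLOCK_BYTES)  # ceiling division

# Ten long-named, heavily versioned files already cost several blocks...
print(estimate_directory_blocks([("A_VERY_LONG_FILENAME.DAT", 50)] * 10))  # -> 9
# ...while a single short name with one version fits in one block.
print(estimate_directory_blocks([("A.DAT", 1)]))  # -> 1
```

Under these assumptions, a few hundred such long-named, multi-version files would push the directory past the 1000-block threshold mentioned below, while many thousands of short single-version names would not.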

If you want a VERY rough rule of thumb: I start to get concerned when a directory file exceeds 1000 BLOCKS and there is significant turnover of files in the directory, because some common operations (like DELETE) are likely to perform unacceptably slowly. If it's read-only, size doesn't really matter.

(Over the years in the Customer Support Centre I used to deal with directory size issues fairly regularly, and would quote 1000 blocks as a simple threshold. Perhaps your "1000 files" is a corruption of that message?)
A crucible of informative mistakes