Max files in a directory
07-17-2009 05:52 AM
Has anyone heard of an HP recommendation of having no more than 1000 files in a directory?
Thanks,
07-17-2009 06:24 AM
There HAS been a recommendation around aggregate directory sizes: the (old!) 128-block threshold in the directory caching of (old!) releases. Beyond that size (and on those ancient OpenVMS releases), the performance of various directory operations suffered.
The directory size is based not on the NUMBER of entries but on the number of file versions and the lengths of the filenames; that is, on the aggregate size of the entries. It is this detail that means there isn't a "number of files" recommendation.
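If you want to see what that aggregate actually comes to for a particular directory, checking the size of the directory file itself is the quick way. A minimal sketch, with DISK$USER and MYDIR as placeholder names:

$ ! Show the used and allocated size, in blocks, of the directory
$ ! file itself. DISK$USER and MYDIR are placeholders; substitute
$ ! your own device and directory names.
$ DIRECTORY/SIZE=ALL DISK$USER:[000000]MYDIR.DIR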
There are various recommendations around "not being stupid" in how an application uses and partitions and sorts and maintains its directories, and around upgrading to OpenVMS V7.2 or later. Or, far better, to the current release.
But no. No specific number of files, AFAIK.
So who is passing out this particular fiction?
07-17-2009 06:56 AM
Re: Max files in a directory
Thanks Hoff!!
07-17-2009 06:57 AM
Re: Max files in a directory
07-21-2009 02:38 AM
Re: Max files in a directory
That just states the right idea; it's not a limit. Though you may well use a much higher number of files, it's my (and others') experience that some good thinking about the number of files pays off :) (think about searching files, BACKUP, et al.)
OpenVMS Developer & System Manager
07-21-2009 02:01 PM
Re: Max files in a directory
How large a directory "should" become depends on how it is used. Remember, under the covers, the OpenVMS file system is really a single flat file, INDEXF.SYS, pointing to the individual files. Directories really exist only so humans can categorise files in a sensible way. From that perspective, once a directory gets bigger than one or two screens full of data, it becomes difficult to deal with.
If no one ever does a directory listing, it really doesn't matter how many files are in it.
Although the lookup mechanism is a kind of hash, there is some lookup performance sensitivity to the number of files. For application-based use this doesn't matter much. For example, MAIL can easily manage huge numbers of files, because it largely bypasses the directory, accessing files directly by FID.
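You can see the FID (file identification) for yourself; DIRECTORY has a qualifier for it. A quick illustration, with MYDIR as a placeholder:

$ ! Show each file along with its file identification (FID) triplet.
$ ! MYDIR is a placeholder directory name.
$ DIRECTORY/FILE_ID [MYDIR]

Anything that records the FID can then get at the file without a directory lookup at all, which is how MAIL sidesteps the issue.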
There are some implementation quirks of the OpenVMS directory mechanism. The worst is that the DELETE command deletes files in ascending alphabetical order, which is pathologically the worst possible sequence for large directories: each deletion at the front of the directory forces the remaining entries to shuffle down. Simple example: create a large number of files, say 10000, in a directory and issue a DELETE *.*;* command. Come back tomorrow and it will probably still be executing. If you were to replace the wildcard DELETE with 10000 individual DELETE commands with the file names in reverse alphabetical order (see the sketch below), the operation would complete MUCH faster, even though you have 4 orders of magnitude more image activations. This is just an unfortunate feature of the implementation. It's unlikely to ever change.
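A minimal, untested sketch of that reverse-order workaround; the directory name, the scratch file names, and the sort key size are all placeholders you would adjust for your own data:

$ ! Capture the full file specifications, one per line.
$ DIRECTORY/COLUMNS=1/NOHEADING/NOTRAILING/VERSIONS=ALL -
      /OUTPUT=FILES.TMP [MYDIR]*.*;*
$ ! Sort the list into reverse (descending) alphabetical order.
$ SORT/KEY=(POSITION:1,SIZE:80,DESCENDING) FILES.TMP FILES.SRT
$ ! Delete the files one at a time, last name first.
$ OPEN/READ LIST FILES.SRT
$ LOOP:
$     READ/END_OF_FILE=DONE LIST NAME
$     DELETE 'NAME'
$     GOTO LOOP
$ DONE:
$ CLOSE LIST
$ DELETE FILES.TMP;*, FILES.SRT;*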
This type of behaviour depends on the size of the directory file, which is related to the number of files, the lengths of the filenames, and the number of versions.
If you want a VERY rough rule of thumb: I start to get concerned when a directory file exceeds 1000 BLOCKS and there is significant turnover of files in the directory, because some common operations (like DELETE) are likely to perform unacceptably slowly. If it's read-only, size doesn't really matter.
(Over the years in the Customer Support Centre I used to deal with directory size issues fairly regularly, and would quote 1000 blocks as a simple threshold. Perhaps your "1000 files" is a corruption of that message?)
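If you want to hunt down the directories that have already crossed that threshold, DIRECTORY can do it for you. A sketch, with DISK$USER as a placeholder device name (check HELP DIRECTORY /SELECT for the exact syntax on your release):

$ ! List every directory file of 1000 blocks or more, disk-wide.
$ ! DISK$USER is a placeholder device name.
$ DIRECTORY/SIZE/SELECT=SIZE=MINIMUM=1000 DISK$USER:[000000...]*.DIR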