Maximum Directory Entries
07-01-2004 10:36 PM
Cheers, Rob.
07-01-2004 10:40 PM
Re: Maximum Directory Entries
I think that a directory does not have a hard limit, provided VMS has enough room to increase the size of the directory file while keeping it contiguous.
See HELP/MESSAGE DIRALLOC.
Have you had this error message recently?
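For reference, the message help and a quick look at a directory file's current size and allocation (the device and directory names here are hypothetical):
$ HELP/MESSAGE DIRALLOC                      ! explanation of the directory-allocation failure
$ DIRECTORY/FULL DISK2:[000000]REPORTS.DIR   ! current size, allocation and attributes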
07-01-2004 10:44 PM
Re: Maximum Directory Entries
Is there a problem you are trying to solve?
Purely Personal Opinion
07-01-2004 10:46 PM
Re: Maximum Directory Entries
But I do have an application, WebReports, that writes all the PDFs to a single directory.
We've only just started rolling it out and there are already 10,000 files in there!
Obviously, for performance reasons, I'm going to have to start breaking this up, but I didn't want to be caught out before getting round to it.
Rob.
07-01-2004 10:47 PM
Re: Maximum Directory Entries
07-01-2004 10:47 PM
Re: Maximum Directory Entries
I am sure you would have heard of these too:
Version limits on files
/MAXIMUM_FILES at INITIALIZE time
Since I don't see any option for this while creating a directory, I am assuming it may not be present (CREATE/DIRECTORY).
I would love to see what other people think on these forums.
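For what it's worth, those knobs look something like this (volume and directory names hypothetical); note that none of them caps the number of entries in a single directory:
$ INITIALIZE/MAXIMUM_FILES=500000 $1$DGA100: REPORTS  ! volume-wide cap on files
$ SET DIRECTORY/VERSION_LIMIT=3 [REPORTS]             ! default version limit for new files
$ SET FILE/VERSION_LIMIT=3 [REPORTS]*.PDF;*           ! version limit on existing files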
regards
Mobeen
07-01-2004 10:49 PM
Re: Maximum Directory Entries
Thanks for that link :-)
It looks like, so far, the answers to this question have been in and around what's discussed in the link you posted.
regards
Mobeen
07-01-2004 10:50 PM
Re: Maximum Directory Entries
You may find the DFU (V3.0) DIRECTORY commands useful.
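If memory serves, DFU is run as a foreign command, and its DIRECTORY command can compact a bloated directory file in place; roughly (path hypothetical, and exact qualifiers may vary with the DFU version):
$ DFU :== $DISK1:[TOOLS]DFU.EXE                       ! define DFU as a foreign command
$ DFU DIRECTORY/COMPRESS DISK2:[000000]REPORTS.DIR    ! compress the directory file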
Purely Personal Opinion
07-01-2004 11:00 PM
Re: Maximum Directory Entries
But I noticed that a create fails while someone has the directory file open in EVE. An old sickness of VMS ... see the other thread ...
I'll keep you informed
Wim
07-01-2004 11:35 PM
Re: Maximum Directory Entries
Probably a read lock on the directory file, preventing the update of the directory to add a new entry for the created file.
Purely Personal Opinion
07-02-2004 02:01 AM
Re: Maximum Directory Entries
There TENDS to be a performance degradation.
In older VMS versions there was a performance knee at a directory size of 128 blocks when doing wildcard lookups.
If filenames are generated 'in order' with ever-increasing names, there is very little overhead indeed.
If the names are random, then new files will frequently cause the need to 'shuffle' up a good chunk of the directory to make room for the new name.
Keep names short if you can!
bad: [report_directory]adobe_report_for_july_05_2004_00546.pdf
good: [report_directory]2004070500546.pdf
Divide and conquer:
better: [200407_reports]0500546.pdf
hth,
Hein.
07-02-2004 02:03 AM
Re: Maximum Directory Entries
With normal random file naming behavior, directory shuffles are infrequent. However, non-random behavior can cause problems. A classic case is a DELETE *.*;* on a big directory. The file wildcarding of course returns the files front to back, and so files are deleted from the front of the directory - precisely the worst order. So the time to delete all the files in a big directory goes with the square of the number of files. If the directory is really huge, it's well worth building a command procedure to delete the files back to front.
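Something along these lines would do it (an untested sketch; the directory name is made up). It lists the files once, reverse-sorts the names so the deletes start at the back of the directory, and then deletes them one by one:
$ DIRECTORY/COLUMNS=1/NOHEADING/NOTRAILING/OUTPUT=FILES.TMP DISK2:[REPORTS]*.*;*
$ SORT/KEY=(POSITION:1,SIZE:80,DESCENDING) FILES.TMP FILES.REV   ! assumes specs fit in 80 chars
$ OPEN/READ list FILES.REV
$ loop:
$ READ/END_OF_FILE=done list name
$ DELETE 'name'
$ GOTO loop
$ done:
$ CLOSE list
$ DELETE FILES.TMP;*,FILES.REV;*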
07-02-2004 02:20 AM
Re: Maximum Directory Entries
Ian is right:
The maximum number of files in a directory is usually limited by finding enough contiguous space for the directory file.
You can run into "funny" behaviour if there is too little contiguous space to hold an expanded directory file, especially if free space is scattered. Browse down the forum for some examples...
To keep your file collections usable, I guess you wouldn't want tens of thousands of files in one directory - let alone that it _may_ introduce a performance penalty.
Willem
OpenVMS Developer & System Manager
07-02-2004 02:26 AM
Re: Maximum Directory Entries
"So the time to delete all the files in a big directory goes with the square of the number of files. If the directory is really huge, it's well worth building a command procedure to delete the files back to front."
For deleting "large" directories, I find that DFU comes in handy.
OTOH, to prevent an application that writes its files in one directory from "over-filling" a certain directory, you can set up a search list of logical names that point to different physical directories, and rotate the definition at certain time intervals. Writing will happen to the first directory in the list; reading will be tried on all.
E.g.:
$ DEFINE LOG DISK2:[LOG1],DISK2:[LOG2],DISK2:[LOG3]
$ CREATE LOG:T.T
Ctrl/Z
$ DIRECTORY LOG
will show T.T in DISK2:[LOG1]. Hereafter,
$ DEFINE LOG DISK2:[LOG2],DISK2:[LOG3],DISK2:[LOG1]
$ CREATE LOG:X.X
Ctrl/Z
$ DIRECTORY LOG
will show T.T in DISK2:[LOG1], and X.X in DISK2:[LOG2].
We used this trick once on an application that created a huge number of uniquely named logfiles.
Greetz,
Kris
07-02-2004 03:36 AM
Re: Maximum Directory Entries
A DCL loop to create files with names starting with 0-9 plus a fixed name. Started 2 jobs on each node to fill the directory.
After almost 5 hours:
First hour: 4320 files created
Second hour: 3490 files created
Third hour: 2732 files created
So: big directories are slow for file creation.
The directory file is now 7000 blocks and contains 22,000 files.
Test continues Wednesday.
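Reconstructed as a sketch, the create loop was along these lines (device, directory, and the fixed name part are hypothetical):
$ i = 0
$ loop:
$ CREATE DISK2:[TEST]'i'_SOME_FIXED_NAME_PART.TMP   ! empty file; the next $ line ends the input
$ i = i + 1
$ IF i .LT. 100000 THEN GOTO loop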
07-02-2004 09:53 AM
Re: Maximum Directory Entries
In reply to my reply Robert wrote:
" That's odd...got this from Google earlier, which implies Random names are better :-
With normal random file naming behavior, directory shuffles are infrequent."
Define 'normal'!?
I was assuming (may well be wrong) that the application in question just kept on adding files. Deleting was not mentioned (yet). If you just keep on adding, then adding at the end is best. This will cause the directory to grow, at least by a cluster and maybe more.
The 'normal' behaviour referred to is probably files coming and going over time, with maybe a few more coming than going. Then linear naming is horrible when doing the deletes. The adds will be fine, but the deletes will always be from the beginning (the first directory block), and every directory block emptied will cause a shuffle down.
A random delete is unlikely to empty a block, so it will not cause a shuffle. Hopefully it will create enough room for a future random add of a different file targeted to the same block.
Btw... for totally optimal directory packing I forgot to mention dropping 'obvious' file types, like '.pdf' in a report directory. Just give the exception files an extension. And that extension can be ".F" for Fortran source and ".O" for objects... if performance is more important than clarity and ease of use (unlikely!)
Ramblings...
For computer-named/used files, where humans are not reading any meaning into the file name (more or less like VMSmail extension files), the optimal packing is of course a base-36 (or worse!) character set: 0-9A-Z. Using that, 2 characters can identify 1000+ files, 3 is good for almost 50,000, and 4 can address more than a million; but you have to add 14 bytes of directory data per entry.
10,000 files would then take just 350 blocks of .DIR file. Crazy, but possible.
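A quick DCL sketch of such a base-36 encoding (the sequence number is just an example):
$ digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
$ n = 50321                                  ! example sequence number
$ name = ""
$ next:
$ name = F$EXTRACT(n - (n / 36) * 36, 1, digits) + name
$ n = n / 36
$ IF n .GT. 0 THEN GOTO next
$ WRITE SYS$OUTPUT "Encoded: ''name'"        ! prints: Encoded: 12TT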
Hein.
07-06-2004 03:50 AM
Re: Maximum Directory Entries
You haven't mentioned deletion or long-term storage. Are these files out of date after being displayed, or do you keep them long term? For short-term use, I use a six-directory search-list logical for our web server's reports and a batch job that redefines it at 10-minute intervals. Reports are available for at least 50 minutes, with the sixth directory having its contents deleted once an hour. For very busy sites, distribute the directories among multiple disks.
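A stripped-down sketch of such a rotation job (all names hypothetical): it picks the current slot from the minute of the hour, empties that slot's hour-old contents before reuse, rebuilds the search list with the new slot first, and resubmits itself.
$ SET NOON                                          ! keep going if a slot is already empty
$ slot = F$INTEGER(F$CVTIME(,,"MINUTE")) / 10 + 1   ! 1..6, advances every 10 minutes
$ DELETE DISK2:[REPORTS'slot']*.*;*                 ! files here are now an hour old
$ list = ""
$ i = 0
$ build:
$ d = (slot + i - 1) - ((slot + i - 1) / 6) * 6 + 1 ! wraps around 1..6
$ list = list + ",DISK2:[REPORTS''d']"
$ i = i + 1
$ IF i .LE. 5 THEN GOTO build
$ DEFINE/SYSTEM WEB_REPORTS 'F$EXTRACT(1, 255, list)'      ! new slot first, then the rest
$ SUBMIT/AFTER="+0:10:00" SYS$MANAGER:ROTATE_REPORTS.COM   ! run again in 10 minutes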
07-06-2004 06:59 PM
Re: Maximum Directory Entries
Rob.
07-07-2004 02:50 AM
Re: Maximum Directory Entries
There are now about 35,000 files in my directory, which itself is now 11,000 blocks.
No problems yet, but:
1) doing dir/siz test.dir takes between 2 and 10 seconds (why?)
2) doing dir/tot/sin=15:30 takes about 10 minutes (acceptable, old machine)
Wim
07-07-2004 09:33 PM
Re: Maximum Directory Entries
60,000 files in a directory file of 19,000 blocks.
File creation with names starting with 'abc' takes 0.1 seconds, but every 5 files it takes 10 seconds, and the directory file is locked during that interval.
Jobs doing the DCL CREATE in a loop page-fault at a rate of 75 per second (initial working set of 5800 pages, about 1700 used).
Files starting with 'z' give the same results.
Nice feature for a real-time application ...
07-07-2004 10:03 PM
Re: Maximum Directory Entries
I think your observations can easily be explained:
"1) doing dir/siz test.dir takes between 2 and 10 seconds (why?)
2) doing dir/tot/sin=15:30 takes about 10 minutes (acceptable, old machine)"
I guess this has to do with caching.
The directory file may need to be reread several times. Don't forget that access to INDEXF.SYS is involved as well - for getting the size.
"File creation with names starting with 'abc' takes 0.1 seconds, but every 5 files it takes 10 seconds, and the directory file is locked during that interval.
Jobs doing the DCL CREATE in a loop page-fault at a rate of 75 per second (initial working set of 5800 pages, about 1700 used)."
It is quite possible that every 5th file needs an extension of the directory file - and new (contiguous!) space has to be searched for and allocated for the directory! And again, this requires quite some extra I/O.
It makes sense to do some redesign if timing gets critical.
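One mitigation worth knowing: CREATE/DIRECTORY can pre-allocate the directory file, so that it rarely needs a contiguous extension while files pour in. For example (name and size hypothetical):
$ CREATE/DIRECTORY/ALLOCATION=5000 DISK2:[REPORTS]   ! pre-extend the directory file to 5000 blocks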
Willem
OpenVMS Developer & System Manager
07-07-2004 11:21 PM
Re: Maximum Directory Entries
Now I have 1 job active doing the creates, and the directory file is about 20,000 blocks. The file names start with Z and are about 60 characters long.
dir/siz=all takes 15 seconds when the directory is expanded (by 108 blocks). Those 108 blocks don't explain the 15-second delay; there must be some reshuffling that takes time.
File creation time is 0.5 - 2 seconds (why?), thus longer than during the previous trial. The 15-second stalls now only happen every few hundred files!
The page faults have increased to 700 per second!!!
Wim
07-08-2004 03:25 AM
Re: Maximum Directory Entries
Blocking time: 20 seconds when allocating 108 blocks.
07-08-2004 06:55 PM
Re: Maximum Directory Entries
Page faults are down to 200/sec.
Started a create loop with names starting with 'mmm':
20 seconds of blockage every 5 files.
Started a create loop with names starting with 'a123':
22 seconds of blockage every 5 files. Sometimes a file creation takes 0.1 seconds, sometimes 3 seconds.