Operating System - OpenVMS

VAX/VMS V7.1 Large Directory Files

 
oldskul
Advisor

VAX/VMS V7.1 Large Directory Files

We have several directories over 128 blocks on our system. I know the "128 knee" was lifted in V7.2, but can someone explain how it worked (or did not work well) in V7.1? I've searched, but it was so long ago that I can't find an explanation I can present to make the case for improvement. Thank you.
19 REPLIES
Andy Bustamante
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

The standard VMS answer applies here, "It depends."

OpenVMS maintains directory entries sorted by name. Depending on your rate of add/delete/rename, the total amount of I/O to the disk, and the application and user expectations, this may or may not be an issue. The operating system will continue to function with directory files that exceed 128 blocks, but there may be a response time impact. In extreme cases, the application may time out. The cache for directory entries was updated in OpenVMS 7.2, removing the 128-block cache limit. A determined developer can still degrade performance in later releases by creating large enough directories.
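One quick way to spot candidate directories on a volume (a sketch; /SELECT=SIZE support varies by DCL version, so check HELP DIRECTORY on your system first):

$ DIRECTORY/SIZE=ALL/SELECT=SIZE=MINIMUM=128 [000000...]*.DIR

That lists every directory file whose size is at or above the old 128-block knee, which is where you'd start measuring.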

Some notes:
http://hoffmanlabs.org/vmsfaq/vmsfaq_013.html

http://h71000.www7.hp.com/wizard/wiz_8343.html

Is there any reason you can't upgrade to a more current version of OpenVMS? OpenVMS is very much upward compatible, consider moving to 7.3-2 at the least. Does your application support the use of a logical to create floating paths for storage? This can allow you to split up these files without impacting functionality.


If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
oldskul
Advisor

Re: VAX/VMS V7.1 Large Directory Files

Everything I read says "128 blocks" is a limitation because of the way VMS works (in V7.1), that it was lifted in V7.2, and it doesn't answer the question. How does it work in V7.1, or better yet, why does it work better in V7.2? I need to be overdosed with the internals/details.

As for the other suggestions:

Upgrade: No one has been able to convince the man holding the check book how much better it would be in such a way that a check gets written. I don't have access to the person with the check book or we'd already be on Integrity V8.4. I was offered the upgrade project and accepted it, but it just hasn't happened. The last I heard, there was a CHARON VAX meeting yesterday.

Floating Paths: There's an idea I can present to development, if I can explain to them why large dirs are so bad. I can't offer it up without telling them why. It's not broken; ergo, it doesn't need to change.
Richard Jordan
Regular Advisor

Re: VAX/VMS V7.1 Large Directory Files

> I need to be overdosed with the internals/details.

Sorry, don't have. Just an overview that I remember but don't have documentation for.

In V7.1 there were two caches involved in directory handling; the normal RMS directory cache and a directory block cache maintained by (the XQP? if I remember correctly). If a directory's allocated space exceeded 127 blocks then RMS caching of it was disabled. That meant for larger directories every I/O to that directory file had to go to disk (or the XQP cache, which I presume is less useful) since there was no RMS cache to read from.

V7.2 at the least removed the 127 block limit on RMS caching of directory files. I don't know if there's a new/larger limit.

Sorry but that's all I can recall at the moment. All my version specific docs of the time are still packed up.
John Gillings
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

Janet,

As Richard has explained, the "knee" was the point at which RMS caching stopped. So, if you had a large and very busy directory with many inserts and deletes, you may have seen an improvement in create and delete performance post V7.2. However, other than in rather extreme cases, I don't believe many folk would notice.

Where large directories are really noticed is the timing of "DELETE *.*;*", which on even a modestly large directory can literally take DAYS to complete. This has NOT been "fixed" by V7.2 or any later versions, and probably never will be.

The reason is that directory scanning in OpenVMS is optimally bad for deletion. You cannot do DELETE [...]*.*;* because the directory tree is scanned breadth first, rather than depth first (so you attempt to delete the directory file itself before the files inside it), and the file name scan is A..Z, which is the worst possible order for deletion.

Why? Directories are organised in buckets, containing some number of entries, maintained in A..Z order. Inserting a new entry may involve splitting an existing bucket, then cascading all the subsequent buckets to the end of the file - real disk I/O! So, if you add a new entry at the top of the directory when there is no more room in the top bucket, you effectively copy the entire directory to shift everything down to make room. If the directory file is full, you need to extend it, and because directories are required to be contiguous, that can involve significant work to find a large enough chunk of disk.

But to really see poor performance, consider the DELETE command. You start by deleting the files at the beginning of the file, until you empty the first bucket. Since empty buckets aren't allowed, you then copy the 2nd to the first, and cascade to the end of the file. Now you proceed to empty the 2nd bucket (which is now the first), and repeat copying the entire directory every dozen or so files. That's why it takes so long.

Note that these behaviours are true of ALL versions of OpenVMS right up to the present. There are several workarounds. It's actually MUCH quicker to generate a list of all files in the directory:

$ DIR/NOHEAD/NOTRAIL/OUT=DELLIST.COM [MYDIR]

sort it into inverse order:

$ sort/key=(pos:1,size:'F$FILE("dellist.com","LRL")',desc) dellist.com dellist.com

then edit the file to add "$DELETE " to the beginning of each line. Even though you wear the cost of an image activation for each file, this can be orders of magnitude faster than a forward DELETE.

Another option is to use DFU, which implements DELETE/DIRECTORY and DELETE/TREE correctly.

Note we've been promised a DCL DELETE/TREE for a few decades now. Maybe it's in V8.4?

In defense of OpenVMS directories, at the time they were designed, no one could afford sufficient disk space to create directories big enough to be a problem (disks the size of washing machines, costing $10K a pop holding all of 50MB).
A crucible of informative mistakes
Hein van den Heuvel
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

> I need to be overdosed with the internals/details.

I like this person! Way to go.

There were actually 2 independent changes, and the RMS one was probably the smaller one.

The RMS change only applied to wild-card lookups. Those happen a bunch for developers and maintenance scripts, but perhaps not so much during production time.

For basic file lookups the XQP does all the work. RMS is not involved until the file is really opened. The XQP has a little 1-pagelette index into the directory which helps some applications a lot, others a little, based on access and name patterns.

The bigger change, IMHO, was that the directory shuffle algorithm was changed from 1 block at a time to the SYSGEM parameter ACP_MAXREAD blocks at a time. The default of 32 blocks for that gave an order-of-magnitude improvement!
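You can see what your system uses for that parameter (ACP_MAXREAD is a standard SYSGEN parameter; the session below is just a sketch):

$ MCR SYSGEN
SYSGEN> SHOW ACP_MAXREAD

Mind you, on V7.1 the shuffle is still 1 block at a time regardless of the setting; the parameter only pays off once you are on a release with the new shuffle code.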

I'm with John when he writes "However, other than rather extreme cases, I don't believe many folk would notice."

With that, I am kinda negative on your "There's an idea I can present to development if I can explain to them why large dirs are so bad. I can't offer it up without telling them why. It's not broken, ergo, it doesn't need to change."

Large directories are bad. They annoy the bejezus out of me. But will anyone be able to notice a difference when fixed? Only in extreme cases.
- for example, with wild-card lookups in active play during production.
- or when entries are added to a large directory in sorted positions other than the end... causing a shuffle of all higher blocks every 10 files or so.
- but NOT when just creating a new version and purging down to 10 versions.

So you'll need to TRY to measure the overhead. Take a look at the MONITOR (T4) data for FCP, focusing on the lookup rate and the FCP CPU tick rate. If the FCP does not collect much CPU, then there is little to help, is there? Similar for the open rate. If greater than 100/sec then you may have a winner... but wouldn't it be better to reduce the opens instead of making them more efficient?
The MONITOR file-system-cache data (also collected by T4) has "Dir Data" numbers to look at to get an impression.
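A minimal starting point at the DCL prompt (class names as in standard MONITOR; the 5-second interval is just illustrative):

$ MONITOR FCP/INTERVAL=5
$ MONITOR FILE_SYSTEM_CACHE/INTERVAL=5

The first shows file-system (XQP) operation rates and CPU time; the second shows cache hit rates, including the directory data cache.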

At the last bootcamp (2010, Nashua) Greg Jordan (now at Factset) presented "OpenVMS file system FADs". That has the gory details you are looking for. Beyond directory structures it talks about the impact of the file system serialization locks and nasty stuff like that. And he created an ANALYZE/SYSTEM extension, FAD$SDA, to help drill down on potential issues.
He is preparing that software for public availability.
(He promised me an Email, but I haven't seen one for now :-).

Good luck, and uh... thanks for asking!
Hein
Hoff
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

VAX? V7.1? If that's where you are, that's unlikely to ever see an upgrade. That's off support. No patches. It's also likely approaching hardware failures, too. That's a "run it into the ground" configuration.

I wouldn't bother learning about the innards of VMS here, either. That's not going to be helpful for what are the likely corporate goals. (Not spending money on this stuff being one of them.)

Either get going on the move to an emulator as a stop-gap for the hardware and power, or get going on porting your applications and tasks off of VMS, or both. Spend your thoughts there, as that's (far more likely) aligned with what your management wants to happen here.

Learn about whatever platform(s) your organization is moving to, and port your code, etc. That might be an Integrity upgrade (those are cheap, if you buy used), or your organization may have decided to migrate wholesale to another OS platform.

If you're curious about VMS innards on your own time, get yourself a copy of the VMS Internals and Data Structures Manual for Alpha or VAX; whatever edition you can find on the used-books market at a reasonable price. (The user- and kernel-mode interfaces of VAX/VMS and the related internals have been frozen since V6.0, too, which means most any VAX IDSM edition will be pretty close to your V7.1 or to the end-of-the-line V7.3 release.)
abrsvc
Respected Contributor

Re: VAX/VMS V7.1 Large Directory Files

I would have to disagree somewhat with this portion of John's statement:

"or get going on porting your applications and tasks off of VMS"

One other viable option is to investigate the cost/effort of porting to either the Alpha or Itanium platforms while remaining with VMS. With the high availability of Alpha machines at low cost, that may be a quick and relatively low-cost option.

Dan
Hoff
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

That was me. John has the Aussie accent.

As was mentioned "That might be an Integrity upgrade (those are cheap, if you buy used)"...

Alpha is a dead hardware product line, and Integrity boxes and particularly software licenses are cheap(er) (than Alpha). I generally don't recommend (unnecessarily) spending any money upgrading to Alpha. Best to go to what is usually newer and cheaper. The Itanium boxes and license prices blew the bottom out of the used Alpha hardware market, and the general business market and the Tukwila upgrade cycle will likely maintain that delta, if not increase it. (That, and the uplift due to the age of the Alpha stuff and the folks that can't get to Itanium...)

Put another way, why upgrade from twenty year old gear to ten year old gear, when you can get five year old gear? And the newer gear is probably cheaper.

A VAX emulator has a decent shot of being sufficiently fast as compared with a real VAX, if not faster, if your organization wants or needs to defer the migration costs further down the road; to "kick the can", as it may.

And do look for alternatives and applications and platforms; that's simple due diligence.

But this is off-topic, so I'm out.
oldskul
Advisor

Re: VAX/VMS V7.1 Large Directory Files

Thank all of you. I appreciate all the answers about the directory structure (deletes versus inserts, etc.), which I have copy/pasted into our weekly meeting agenda. I also appreciate the constant reminder: if it's a VAX, why bother. I wake up with that same thought on my mind every single day. As I said, we have two Integrity boxes in the closet. I believe the problem is that the 4GL we use is also so old it cannot easily be migrated. For me, easy or not, migration happens; the longer the time span, the harder it gets. So now we face a very hard migration. Not impossible, just taxing. Nothing has changed here for 14 years. I've been here less than two. One baby step at a time.
oldskul
Advisor

Re: VAX/VMS V7.1 Large Directory Files

And how do you get paragraph breaks in your responses? Copy/Paste from someplace else? :) Tks.
Hoff
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

To get a paragraph break using the Safari or Firefox browsers, add a blank line.

You may be using Microsoft tools and Microsoft Internet Explorer on Microsoft Windows?

If so, then I might guess you've got Microsoft Office involved here somewhere, potentially in the cutting and pasting and possibly the composition, as that package tends to do wacky things with line wraps around cut-and-paste. (I don't even use Microsoft Windows or Office, and I've hit that case.) Use Wordpad in its stead as your compose pane, and cut-and-paste from there.

Yes; there are various problems here. The planned ITRC UI "upgrade" will improve the forums from "rank disaster" all the way up to "corporate-level UI embarrassment", too.

This ITRC forum stuff works "normally" (read: no worse than usual) on Mac OS X using Safari browser, and using cut-and-paste from the "gun-slit" text entry box as needed. (And you can drag the text box bigger in Mac Safari.)
Andy Bustamante
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files


I once supported a browser-based application that created and deleted HTML files on a regular basis. The developers used a single directory and we purged files after 30 minutes. This was fine on small sites; larger customers wound up with directory files in the 1000's of blocks, where a delete pass would saturate the disk for hours. (Try $ MONITOR DISK/ITEM=QUEUE_LENGTH.)

Breaking the logical into multiple directories and having a batch job update the logical on the fly led to a world of improvement. This let me delete one smaller directory that wasn't in customer use. You can spread the logical over multiple disks, or license DECram if you need more I/O bandwidth.
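A sketch of that rotation (the logical and directory names here are made up; the real job also has to wait out the purge window before cleaning):

$ ! the application creates files through the logical, e.g. WEB_SCRATCH:X.HTML
$ DEFINE/SYSTEM/EXEC WEB_SCRATCH DKA100:[SCRATCH_A]
$ ! later, the batch job swings new file creation to a fresh directory...
$ DEFINE/SYSTEM/EXEC WEB_SCRATCH DKA100:[SCRATCH_B]
$ ! ...and [SCRATCH_A] can be emptied without competing with live customer I/O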

I couldn't get approval to deploy the update to 7.2, so I updated a customer in conjunction with other travel as a "test" case. They posted a review on a customer mailing list mentioning how much faster the system was, and suddenly the policy was "how quickly can we deploy?"
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
tsgdavid
Frequent Advisor

Re: VAX/VMS V7.1 Large Directory Files

THANK YOU, THANK YOU, THANK YOU!

to John Gillings for this information and your suggested delete procedure.

>>>
But to really see poor performance, consider the DELETE command. You start by deleting the files at the beginning of the file, until you empty the first bucket. Since empty buckets aren't allowed, you then copy the 2nd to the first, and cascade to the end of the file. Now you proceed to empty the 2nd bucket (which is now the first), and repeat copying the entire directory every dozen or so files. That's why it takes so long.
>>>

We have had a problem related to this for a LONG time with a cleanup procedure that runs every day. One directory takes almost 5 hours to delete the older files. This should cut this time drastically!

Dave
Hein van den Heuvel
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

Dave, you have not been paying attention... we've known and written about how to fix this for decades! :-)

You may also want to get out DFU DELE/DIRECTORY.
Mind you, you may need to first save away what you would like to retain in a fresh directory.


In comp.os.vms search for "hein delete sort"
And you'll find the skeleton script below.

Btw... Now that the XQP shuffles up and down with ACP_MAXREAD, it could potentially add value during the shuffle by peeking into the buffers. If it 'sees' an opportunity to compact the whole buffer by a block or so, then it could do so and reduce the total blocks to be shuffled, for that case and for future shuffles.
Possible drawbacks
1) some slack / fill space is nice for files which come and go.
2) if the move gets interrupted the result will be messy / messier.

Hmmm, this is probably best left as a conscious decision as implemented by DFU DIRECTORY/COMPRESS/FILL versus an automatic, uncontrolled action by the XQP.

fwiw,
Hein


http://groups.google.com/group/comp.os.vms/search?group=comp.os.vms&q=hein+delete+sort&qt_g=Search+this+group

$ if p1 .eqs. "" then goto help ! Hein van den Heuvel, DIGITAL, Jan-1995
$ directory/out=sys$login:delete.tmp/col=1 'p1
$ sort/key=(pos:1,siz:-1,desc) sys$login:delete.tmp sys$login:delete.tmp
$ open/read/error=clean_up sorted sys$login:delete.tmp
$init:
$ read/end=clean_up sorted f
$ l2 = f$len(f)
$ if f$loc(";",f) .eq. l2 then goto init ! skip header/trailer/blank lines
$ files = f
$loop:
$ read/end=done sorted f
$ l1 = f$len(f)
$ if f$loc(";",f) .eq. l1 then goto loop
$ if l2 .lt. 200 ! batch file specs up to ~200 characters per DELETE
$ then
$   files = files + "," + f
$   l2 = l2 + l1 + 1
$ else
$   delete 'p2' 'files'
$   files = f
$   l2 = l1
$ endif
$ goto loop
$done:
$ delete 'p2' 'files'
$clean_up:
$ close/nolog sorted
$ delete sys$login:delete.tmp;,; ! two versions: from DIRECTORY and from SORT
$ exit
$help:
$ type sys$input
Usage:
P1 (required) = wild card file spec, with optional DIR selection qualifiers
P2 (optional) = delete command qualifiers (/LOG or /CONFIRM)
$ exit

RBrown_1
Trusted Contributor

Re: VAX/VMS V7.1 Large Directory Files

Since I am also stuck at V7.1 (on old hardware too!), your question interested me.

I threw together the attached command file to give some rough measure of speed of mass file creations and deletions. Whether or not I can find a newer version of VMS running on similar old hardware is a different question.

Other replies on this topic seem to indicate that the operation measured by this command file is not fixable by an upgrade anyway. Oh well.

Have fun!
RBrown_1
Trusted Contributor

Re: VAX/VMS V7.1 Large Directory Files

Evidently, ITRC forums did me a favour by declining to include my attachment.
Hoff
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

Gazillions of versions of the file 0.txt are close to the tightest you can pack stuff into a directory. If you want to fall off the directory cache, differing and (much) longer filenames are the usual choice.

And directories and individual files will make for stinky application databases, and no amount of tweaks to caching nor to ODS will help with that; not to the degree that using the proper tools would.

There are folks that have posted here that have stashed hundreds or thousands of blocks of filenames into a single directory on a daily basis, for instance, and who ever looks at that cruft? Stacking up gazillions of log files is a common problem, and that then leads to a look at compression or data reduction, or both, or just shutting off the log file creation during normal operations.

Yes; they've had problems with using a directory as a database.
P Muralidhar Kini
Honored Contributor

Re: VAX/VMS V7.1 Large Directory Files

John,

>> Note we've been promised a DCL DELETE/TREE for a few decades now.
>> Maybe it's in V8.4?
Yes, it is.
OpenVMS V8.4 provides the DELETE/TREE option.
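Presumably usable along these lines (the directory name is made up; check HELP DELETE on a V8.4 system for the exact qualifier behavior):

$ DELETE/TREE [MYDIR...]*.*;*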

Regards,
Murali
Let There Be Rock - AC/DC
oldskul
Advisor

Re: VAX/VMS V7.1 Large Directory Files

There really isn't a resolution to this as I only asked for information, which I did receive. Thank you everyone.