Operating System - OpenVMS

Some directories are hardly accessible.

 
SOLVED
Go to solution
yaron1
Advisor

Some directories are hardly accessible.

Hi,

I have a disk where, in some directories, thousands and at times tens of thousands of small files are created. Naturally the INDEXF.SYS for the disk is huge, about 112,000 blocks. The directory files (*.DIR) where the small files are created are big too, nearly 2000 blocks each. I encountered problems where jobs that do I/O on the disk can get almost nothing done (but with no error message). I suspected fragmentation, so I started to clean out those directories, but that is going too slowly. An interesting point to note here is that when no process is deleting files in the directories, the DIR command works quite well, but if I run my cleanup job the response to the DIR command becomes so slow that it's impractical. Another thing: I can still create new files in the directory.
It's a VAX with OpenVMS V7.1.

What can be the cause of this? And what is the best remedy, besides initializing the disk and reloading from tapes?

Thanks.
12 REPLIES
Oswald Knoppers_1
Valued Contributor

Re: Some directories are hardly accessible.

You could use DFU to delete large directories; DFU can also compress your .DIR files.

DFU can be found at www.digiater.nl

Oswald
Jon Pinkley
Honored Contributor
Solution

Re: Some directories are hardly accessible.

RE:"What can be the cause of it?"

The design is the cause. VMS directories are not designed for 10s of thousands of files.

RE:"What the best remedy beside initialize the disk and load from tapes?"

If at all possible, spread the files into more directories.

Background: VMS directories are ordered lists of filenames, and for each unique filename, a list of version numbers with the associated file IDs. When names are inserted into or deleted from the beginning of the directory, once there is either no room or an empty block, the blocks of the directory must be moved to make room. So if you have a 2000-block directory file, it is possible that many of these blocks will need to be copied. While that is in progress, no other operations can make changes to the directory.
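Jon's ordered-list description explains the cost pattern well. As a rough illustration (a toy model only, not VMS's actual on-disk directory format), a Python sketch of inserting into a sorted list shows why operations near the front of a huge directory are so much more expensive than operations near the end:

```python
# Toy model of a VMS-style directory: an alphabetically ordered list of
# names. Inserting (or deleting) near the front forces every later entry
# to move; near the end, almost nothing moves. This illustrates the cost
# pattern only -- real directories move 512-byte blocks, not entries.
import bisect

def insert_cost(directory, name):
    """Insert name in sorted order; return how many entries had to move."""
    pos = bisect.bisect_left(directory, name)
    moved = len(directory) - pos   # entries shifted to make room
    directory.insert(pos, name)
    return moved

d = [f"FILE{n:05d}.DAT" for n in range(20000)]
print(insert_cost(d, "AAA.DAT"))   # front insert: 20000 entries move
print(insert_cost(d, "ZZZ.DAT"))   # back insert: 0 entries move
```

With tens of thousands of entries, every front-of-directory insert or delete pays a cost proportional to the whole directory, which matches the behavior the original poster describes.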

Sorry for the bad news, but I know of no plans to change the design.

The only effect a fragmented disk will have is to prevent a directory from expanding, as when that occurs a new contiguous piece must be located, and the contents copied to the new location. If there isn't enough contiguous space, you will get an ACP file create failed message.

Reinitializing the disk or compressing the directories won't help much. If you have 2000+ block directories, file insertions and deletions are going to be slow.

Jon
it depends
labadie_1
Honored Contributor

Re: Some directories are hardly accessible.

You should avoid having too many files in a directory, maybe by using a search list.

I suppose that your appli puts files in a directory named disk$appli,
and that your appli is started every morning and shut down every evening.

Maybe you should do the following: define disk$appli as a search list.

The following will automagically roll:

def disk$appli disk:<monday>,-
disk:<tuesday>,-
disk:<wednesday>,-
disk:<thursday>,-
disk:<friday>,-
disk:<saturday>,-
disk:<sunday>

Of course you have to create your directories <.monday> and so on.

You can of course use a little DCL to have a search list with many more
elements (day of month comes to mind), and make it roll.

On Monday you can quietly move the files into <.tuesday> and the other directories.
Jan van den Ende
Honored Contributor

Re: Some directories are hardly accessible.

Yaron,

really, the thing to do is to redesign the application(s).

Such big directories are inherently VERY inefficient, because the design never anticipated their being used this way.
MANY operations which add or delete somewhere near the beginning of the (alphabetically ordered) directory cause ALL the rest of the directory to be rewritten.
And if the DIR file has to be extended, first a new, bigger file has to be allocated contiguously, then the whole contents have to be copied, and the original DIR file has to be deleted. Very I/O intensive, i.e., very time consuming. And NO way to use caching!

SO, it is MUCH better to devise SOME way to split up the directory into (several, maybe many) (sub-?)directories.

One rather easy way is to make the DIR a search list, and every time a "reasonable" number of files has been created (monthly? daily? perhaps hourly?) add a new one as the first translation of the list.
New files will always be created there, and existing files will be found from anywhere in the list.

(Of course, some cleanup or consolidation will also be needed to keep the list itself reasonably small. I seem to remember that I once ran into a limit of 80 translations, but maybe that was because the total translated string length exceeded some value. No way to trace that back now.)
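The rolling search-list scheme described above (create in the first translation, find from anywhere in the list) can be sketched in a few lines. This is a behavioral model only; the directory names and data structures here are illustrative, not DCL:

```python
# Sketch of search-list semantics: new files are created in the FIRST
# directory of the list, while lookups scan every directory in order.
# Day-named directories mirror the <.monday> scheme suggested above.

search_list = ["monday", "tuesday", "wednesday"]   # first element gets new files
dirs = {d: set() for d in search_list}

def create(name):
    dirs[search_list[0]].add(name)      # created in the first translation

def lookup(name):
    for d in search_list:               # found from anywhere in the list
        if name in dirs[d]:
            return d
    return None

create("A.DAT")                          # lands in "monday"
search_list.insert(0, "thursday")        # roll: a new day becomes first
dirs["thursday"] = set()
create("B.DAT")                          # lands in "thursday"
print(lookup("A.DAT"), lookup("B.DAT"))  # -> monday thursday
```

Rolling the list keeps each individual directory small, which is exactly what keeps the front-of-directory shuffling cheap.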

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
yaron1
Advisor

Re: Some directories are hardly accessible.

Thanks for the answers.

I didn't design or develop this app; that was done in the '80s. And I'm still new to it.

Jon wrote << The only effect a fragmented disk will have is to prevent a directory from expanding, as when that occurs a new contiguous piece must be located, and the contents copied to the new location. If there isn't enough contiguous space, you will get an ACP file create failed message>>
That was the reason I wrote << I can create new files in the directory >>, so fragmentation is not the problem here; I had that problem once (although many small files can contribute to fragmentation).

Assuming that I delete all the files, I guess it would be a good idea to delete the directory files and recreate brand new ones with the CREATE/DIRECTORY command. I wonder if I should use the /ALLOCATION qualifier.

Thanks.
Jan van den Ende
Honored Contributor

Re: Some directories are hardly accessible.

Yaron,

OK, you did not design it, but I conclude that you are somehow responsible for keeping it running.

First, find out HOW and WHERE the files are created.
If it is in an embedded image, using a hard-coded reference to some physical location, and you have (for whatever reason) no possibility to change the source, then I would say you are out of luck.

OTOH, if the code in any way uses a LOGICAL file location, then just a little DCL will solve things for you!

e.g., assume the program code uses an external file assignment: OK, spoof that.

e.g., assume the program creates its files in APPDIR; you are home again.

Even if the files go to APPDISK:[TODIR.DATA], you can DCL around that.

If the application does SET DEFAULT and uses the default location, again all set to go.

So,
tell us which description applies to you, and we will probably be able to bend it into better shape.

Just tell us.

PS: it might be useful to disclose your architecture and VMS version, and any 3rd party tooling used.

Proost.

Have one on me.

jpe


Don't rust yours pelled jacker to fine doll missed aches.
Robert Gezelter
Honored Contributor

Re: Some directories are hardly accessible.

Yaron,

I would have to check if 7.1 had the /ALLOCATION qualifier, I do not have a 7.1 system accessible from where I am at this instant.

As labadie noted, search lists, when used correctly, do have the potential to make this a more manageable application. If one is careful, it can even be transparent to the application. Consider moving archival data to a secondary point in the search list.

Also, a collateral question is whether this system is using disk caching. Disk caching can significantly improve performance in these situations, for a variety of reasons (it also helps other applications by removing their traffic from the disk).

Also consider whether outside performance expertise can be of assistance. While the directory churn is a good candidate for a performance issue, there may be other issues [Disclosure: My firm provides such services, as do other regular contributors to this forum].

- Bob Gezelter, http://www.rlgsc.com
yaron1
Advisor

Re: Some directories are hardly accessible.

Hi,

References to the directory locations are both in images and in DCL command files. Long term solutions are not the priority now; the concern now is to get out of this situation. You are absolutely right that a long term solution is needed here. After I very slowly deleted files in the directory, the server admin used DFU to compress the directory, with good results. The cleaning is going much faster now. I intend to finally delete the directory and CREATE it again.

I am only the poor guy who supports the app, not the server admin and not a manager who decides anything. So getting external expertise would be great, but it's not in my hands. With these VAX apps, owners just want to keep them running somehow.

Thanks for your replies.
John Gillings
Honored Contributor

Re: Some directories are hardly accessible.

yaron,

When cleaning up large directories, if you're removing a directory entry by deleting a file, or renaming it into another directory, try to work backwards from lexically higher file names down (ie: Z to A rather than A to Z). This will be significantly faster than natural order.

$ DIRECTORY/NOHEAD/NOTRAIL

can be used to generate lists of filenames. PIPE the output into SORT to invert the order.
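The Z-to-A advice follows directly from the ordered-list layout: removing the lexically last entry shifts nothing, while removing the first shifts everything after it. A toy Python model (of the ordered-list cost only, not real directory blocks) makes the difference concrete:

```python
# Compare total entry shifts when emptying a sorted "directory"
# in natural (A-to-Z) order versus reverse (Z-to-A) order.

def delete_all(names, reverse):
    directory = sorted(names)
    moves = 0
    for name in sorted(names, reverse=reverse):
        pos = directory.index(name)
        moves += len(directory) - pos - 1   # entries shifted down to fill the hole
        directory.pop(pos)
    return moves

files = [f"F{n:04d}" for n in range(1000)]
print(delete_all(files, reverse=False))  # A-to-Z: 499500 shifts
print(delete_all(files, reverse=True))   # Z-to-A: 0 shifts
```

Deleting front-first pays a quadratic total cost; deleting back-first pays nothing for shuffling, which is why inverting the sorted file list speeds up a mass cleanup so dramatically.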

>recreate brand new ones with the
>CREATE/DIRECTORY command. I wonder if I
>should use the qualifier /ALLOCATION
>qualifier.

This won't help. The problem is simply too many files. If you reduce the number of files, the directory will perform well again. /ALLOCATION is unnecessary, as directories are never shrunk; they will retain the maximum allocation. DFU has a "compact" operation but you DON'T want to use it, as you'll get better performance by having files distributed across the allocated space.

> Reference to the directory locations are
>both in images and in DCL command files.

Maybe now you see the value in always using logical names to reference directories? Even if the directory names are hard-coded, it may still be possible to create a logical name search list using the device name and multiple concealed devices, which will appear to the application as a single directory, but to the file system as many. If you want to pursue this path, please post an example of the file specification used by the application.
A crucible of informative mistakes
Ian Miller.
Honored Contributor

Re: Some directories are hardly accessible.

Do get DFU V2.7A for your VAX. You don't have to install the kit; just extract DFU.EXE from the kit and use it. It's good for searching for large directories as well as fixing them.

Once a directory file gets above 127 blocks (on that version of VMS) you definitely have a problem.
____________________
Purely Personal Opinion
Wim Van den Wyngaert
Honored Contributor

Re: Some directories are hardly accessible.


If you don't have the time, read the answer of Mark Hopkins near the end.

Wim