Operating System - OpenVMS

Re: File attributes: Allocation: 5000 - VMS 7.3-2

 
SOLVED
Robert Gezelter
Honored Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Willem,

"... creation of the directory file will allocate 5000. Each extension will add 5000 bytes." should have been "blocks".

I am sure that it is a typographical error, and I merely point it out to keep the thread correct.

- Bob Gezelter, http://www.rlgsc.com
Jorge Cocomess
Super Advisor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Hi,

The strange thing is that the application doesn't really give you a whole lot of error messages; it just errors out. Once the application or one of its sub-routines errors out, I go to this directory and purge out as much as I can, and then I can resume my processing.

The disk was built on 2 x 72GB Mirrored. Currently, I have about 60GB available on this disk.

We do have a license for DFU -- however, this disk is accessed 24 x 7. Would that make it more difficult for the DFU utility to run?

I could create a new LUN and move everything over to it. Do you gentlemen think this would be wise?

Any ideas on what else I can do?

Thank you.

Jorge
Robert Gezelter
Honored Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Jorge,

A new LUN will not resolve the problem. The problem is how the information within the volume is being used.

The problem can be addressed in a variety of ways. The overall volume being active is not a problem, although it does complicate things. Certainly, re-organizing this directory can be done with the only requirement being that this directory is quiesced (and I can think of a few ways around that, but they DO depend on the details).

It may be a good idea to retain more senior expertise to examine this problem and identify a solution [Disclosure: we do provide consulting assistance in this type of matter].

- Bob Gezelter, http://www.rlgsc.com
Hein van den Heuvel
Honored Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

>> The strange thing is that the application doesn't really give you a whole lot of error messages, it just errored out.

Too bad. No log with a real error, huh?

>> Once the application or sub-routines errored out, I go to this directory and purge out as much as I can then I can resume my processing again.

1) And when it errors out, there is still free space, right?

2) There wouldn't be a file with a version number approaching 32K, would there?
$ MCR DFU SEARC/VERS=MINI=30000 dev:

3) I suppose there could be a (very unlikely) directory corruption. Create a fresh directory, RENAME or SET FILE/ENTER all files into it, blow away the old directory, then rename the fresh directory to the correct name (see the DCL sketch after this list).

4) You wouldn't happen to be out of file headers, and this directory is just a red herring? Any file create would fail:

$ MCR DFU REPORT dev:
Compare "Maximum # files" vs "Header count" vs "Free headers".

>> The disk was built on 2 x 72GB Mirrored. Currently, I have about 60GB available on this disk.

So that's now, after cleaning.
What about before?
It's hard to imagine there would not be a free chunk of a few MB.

$ MCR DFU REPORT dev:
***** Free space statistics (from BITMAP.SYS) *****
Look at "Largest free extent".

>> We do have license for DFU -- However, this disk being accessed 24 x 7, would it make more difficult for DFU utility to run since it's being accessed 24 x 7??

DFU needs no license.

DFO (the Disk File Optimizer) is usually run while the system is in use.

>> I could create a new LUN and move everything over to it. Would you gentlemen think this would be wise to do so??

Not if you do not really know what is wrong.

Hein.
John Gillings
Honored Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Jorge,

>The strange thing is that the application
>doesn't really give you a whole lot of
>error messages, it just errored out.

If you can't fix the application to issue proper error messages, then the next time this happens, instead of purging the directory, try some diagnosis first. Check the used size of the directory. Is it full? Try creating some files from DCL:

$ CREATE dev:[yourdir]X.X

If that works, try different file names: perhaps one lexically before the first file in the directory, one lexically after the last, and some spread around in between (see the probes sketched below). See if you can get a sensible error message from CREATE.
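
A quick way to do that non-interactively is to copy from the null device (the names below are just illustrative probes at different lexical positions):

$ COPY NL: dev:[yourdir]AAA_PROBE.TMP    ! before the first existing entry
$ COPY NL: dev:[yourdir]MMM_PROBE.TMP    ! somewhere in the middle
$ COPY NL: dev:[yourdir]ZZZ_PROBE.TMP    ! after the last entry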

Look for files with abnormally large numbers of versions, or abnormally high version numbers.

Try creating another directory on the same disk and filling it up with files (a throwaway loop is sketched below). How many files with unique names can you create before you get a DIRALLOC error? How big is the directory at that point? (Use DFU to clear out the test directory afterwards, otherwise the deletion will probably take a week!)
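
Something along these lines would do it ([DIRTEST] is a hypothetical scratch directory; this will load the disk, so run it off-hours):

$ SET NOON                                  ! keep going past the expected error
$ CREATE/DIRECTORY dev:[DIRTEST]
$ I = 0
$ LOOP:
$   I = I + 1
$   NAME = "F" + F$FAO("!6ZL",I) + ".TMP"   ! unique, zero-padded file names
$   COPY NL: dev:[DIRTEST]'NAME'
$   IF $STATUS THEN GOTO LOOP
$ WRITE SYS$OUTPUT "First failure at file number ''I'"
$ DIRECTORY/SIZE=ALL dev:[000000]DIRTEST.DIR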

My guess would be DIRALLOC - the disk is too fragmented to extend the directory beyond its 5000 block allocation.

Long term, the only solution is to defrag the disk. Short term, you can purge the directory, or create multiple directories and link them together with a search list. Put the emptiest directory at the front of the list; that's where new files will be created, but the application will still see existing files in directories further down the list (see the sketch below).
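
A sketch of the search list, assuming (hypothetically) that the application locates its files through a logical name APP_DIR:

$ CREATE/DIRECTORY dev:[APPFILES2]
$ DEFINE/SYSTEM APP_DIR dev:[APPFILES2], dev:[APPFILES]
$ ! New files are created in the first element of the list;
$ ! lookups still find the old files in the second.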

DFU DIRECTORY/COMPRESS might help, but consider that a 5000 block directory doesn't perform terribly well, especially for DELETE.
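
If you try the compress, the invocation is along these lines (directory spec hypothetical):

$ MCR DFU DIRECTORY/COMPRESS dev:[000000]YOURDIR.DIR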
A crucible of informative mistakes
Art Wiens
Respected Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Seeing as how that's a nice neat "5000 blocks", could that be the default cluster size of the disk? If multiple files are being created at the same time, and if they all go for another extent at the same time, there might not be enough free space, yet when you check, there still seems to be some.
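
The cluster size is easy to check; it appears in the volume section of the full device display:

$ SHOW DEVICE/FULL dev:    ! look for "Cluster size"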

Just a thought,
Art
Jorge Cocomess
Super Advisor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Since the time I posted this question, the directory file size has grown to "Total of 1 file, 281/5000 blocks". That translates to 1972 files under this directory.

So, once it hits the 5000 mark, I should see some issues, right?

Thanks, everyone. If you're wondering, I will take care of the points shortly.

Jorge
Hein van den Heuvel
Honored Contributor

Re: File attributes: Allocation: 5000 - VMS 7.3-2

>> So, once it hit 5000 mark, I should see some issues, right??

Well, yes, no, and maybe.

As you work your way up to the 5000, more and more files apparently are created, each needing (at least) one file header (you can run out of those) and (at least) one allocation cluster (you can run out of those too).

When you cross the 5000 mark and more room is needed, the file system (F11X/SHFDIR) will allocate a new, larger, contiguous file and copy all the blocks over.

Obviously that will cost some energy, but it might not be an 'issue'. That particular issue is only marginally worse than adding a directory entry with a low alphabetical value into a full directory block, in which case the same IOs occur, just within a single file.

It becomes a real (DIRALLOC) issue if at that point not enough contiguous free space is available.

For your education & entertainment you may want to try:

$ DUMP/DIRECTORY/BLOCK=(START=x,COUNT=1) x.DIR

and compare with (look at the RFAs!):

$ DUMP/RECORD=(START=y,COUNT=z) x.DIR
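
For instance, with arbitrary start/count values and a hypothetical directory name:

$ DUMP/DIRECTORY/BLOCK=(START=1,COUNT=1) MYDIR.DIR
$ DUMP/RECORD=(START=1,COUNT=5) MYDIR.DIR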

Cheers,
Hein.
John Gillings
Honored Contributor
Solution

Re: File attributes: Allocation: 5000 - VMS 7.3-2

Jorge,

>> So, once it hit 5000 mark, I should see some issues, right??

You will potentially see issues long before you get to 5000 blocks (though you shouldn't have any trouble *creating* files).

Remember that on a Files-11 structured disk, directories are a kind of illusion laid over the top of the file system, to help HUMANS find files. Since most humans have trouble dealing with thousands of files in one big chunk, the directory mechanism was never designed to cope with huge numbers of files.

It wasn't very long ago (V4?) that directory files were hard limited to 128 blocks. Although V5 increased the limit, it wasn't until very recently (V7?) that the SEVERE performance knee at 128 blocks was raised. Despite that, for most practical purposes it's advisable to keep your directory files below 1000 blocks.

What problems are you likely to observe? As directory files get bigger, searching for files and inserting files into the directory are likely to get significantly slower, as the search mechanism requires more and more sequential scanning to find things in the directory.

However, the BIG hitter for performance is DELETE. Consider DELETE *.*;*. For a directory less than 128 blocks this will be reasonably fast, maybe a few minutes, but as the directory file size increases, the time increases geometrically, as does the CPU and I/O load. Once a directory is over 1000 blocks, the command can literally take DAYS to complete. Although DFU can solve this for deleting the ENTIRE directory, it won't help for mass deletions of some of the files, say "DELETE A*.DAT;*".

Some might say this is a flaw in the way directories are implemented, but it's more a reflection on the changes in the way directories are being used. Files-11 directories are fine for their intended purpose, but they don't scale up very well.

If you have large numbers of files to store, you may want to consider alternatives to a single directory. Perhaps create some substructure, or use search lists to spread the files across multiple directories (or even multiple devices).
A crucible of informative mistakes