
Re: Drive Space Mystery

 
SOLVED
Hein van den Heuvel
Honored Contributor

Re: Drive Space Mystery

Ever heard about this wonderful newfangled thing called google?

Google: dfu openvms download

http://www.digiater.nl/dfu.html

Hein.
BRUCE BROWN_2
Advisor

Re: Drive Space Mystery

> Ever heard about this wonderful newfangled thing called google?
>
> Google: dfu openvms download
>
> http://www.digiater.nl/dfu.html
>
> Hein.

Your fun at Parties I expect.
Steven Schweda
Honored Contributor

Re: Drive Space Mystery

> Your fun at Parties I expect.

That's "you're". (And he's probably more fun
than I am.)

This new Inter-Web thing sure is neat.
Jon Pinkley
Honored Contributor

Re: Drive Space Mystery

Bruce,

You really should get and install DFU; it is a "Swiss Army knife" for disk/file operations. It was written as an internal tool at DEC and has since been released as unsupported software.

Until you install it, you can use DIRECTORY to find the most likely candidates.

Do you have anything that is monitoring usage of your disks? If so, was there just a gradual increase in allocation, or was there some point in time where the rate of usage changed drastically? It appears there are many connections to files on the disk: from "Cluster size 128 Transaction count 4474", there are 4474 "open" files from the node you did the SHOW DEVICE/FULL from. (That doesn't mean 4474 unique files; there may be one file opened 4400 times.) The reason I point out this high transaction count is that if the allocation "extend" size has recently been changed in an effort to reduce disk fragmentation, it could contribute to the "over-allocation". (This was one of the things in Jan's list.)

Given "Grand total of 628 directories, 53594 files, 53296924/99130624 blocks", the average allocation per file on that disk is 99130624/53594, or about 1850 blocks. (The directories are included in the total files, so that information isn't needed for this calculation.) The average "used" amount is 53296924/53594, or about 994 blocks. So, as Hein pointed out, the cluster size of 128 by itself can't explain the over-allocation, since "worst case" only 127 blocks per file could be over-allocated.
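To double-check that arithmetic, DCL itself can do the division (integer division, so fractions are dropped; the totals are the ones from the DIR/GRAND output above):

$ files = 53594
$ WRITE SYS$OUTPUT "Avg allocated per file: ", 99130624 / files   ! 1849
$ WRITE SYS$OUTPUT "Avg used per file:      ", 53296924 / files   ! 994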

The question is: is the over-allocated space spread over a lot of files, or are there a few files with a gargantuan over-allocation? The latter can happen if a file is being written to and never closed or flushed; the EOF will remain at the value that existed when the file was opened. So you can have a file with 0/10000000 that won't be found with the command:

$ directory/select=(size:min:10000)

But will be found with

$ directory/select=(allo:min:10000)

So until you get DFU installed, you may want to do something like:

$ directory/select=(allocated:min:1000000)/size=all/wid=(size:8)

to see if any unexpected files pop out.

Good luck,

Jon
it depends
Jon Pinkley
Honored Contributor
Solution

Re: Drive Space Mystery

Well, I should have tried the commands I recommended before I posted, so now I have to eat crow.

The command

$ directory/size=allocation/sel=size:min:1000000/width=(size:8) domaindata01:[000000...]

will display files that have at least 1000000 blocks allocated.

I rarely use that functionality of directory, since I normally use DFU for that.

This command will display files where the difference between allocated and used exceeds 10000 blocks:

$ directory /select=size=unused=10000 /size=all/width=(size:8) domaindata01:[000000...]
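To get a single figure for how much those files account for, the same selection can be combined with the standard /GRAND_TOTAL qualifier (same disk name as above):

$ directory /select=size=unused=10000 /size=all /grand_total domaindata01:[000000...]

The grand total line then shows used/allocated blocks for just the files with more than 10000 unused blocks.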


Sorry for the incorrect info I posted.

Jon
it depends
John Gillings
Honored Contributor

Re: Drive Space Mystery

Just to wind back to the original question - why there may be a discrepancy between what the DIRECTORY command says and what SHOW DEVICE says about the contents of a particular disk.

Apart from timing issues and "free space drift" inherent in the design of a cluster wide file system, even assuming a frozen disk with no file system errors, the two values may differ in either direction.

The directory structure is in some ways optional; it's just a convenient way for humans to catalogue and find files. There is no structural or syntactic reason that a particular file must be entered into a directory, and there are sometimes good reasons that it is not. You can therefore have "dark disk space" consumed that can't be seen from the DIRECTORY command. ANALYZE/DISK will report these files as "LOST", but that's not necessarily true; it depends on the files. The presence of such files will manifest as SHOW DEVICE claiming less free space than DIRECTORY, and both may be correct.

In the other direction, a given file may be entered in multiple directories. The simplest example is VMS$COMMON.DIR and all the SYSCOMMON.DIR alias directories, one for each node booting from a common system disk. In this case, the same files will be counted multiple times by DIRECTORY. This means DIRECTORY will report more space used than SHOW DEVICE. Indeed, DIRECTORY may claim more space used than the capacity of the volume (try DIR/SIZE=ALL/GRAND of a VMS installation CD). Again it will be correct. It's even possible to create a loop in a directory structure, which could cause the used space to appear infinite, except for DIRECTORY depth limits.
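You can see the double counting for yourself on a common system disk (SYS$SYSDEVICE assumed to be one here) by comparing the two reports:

$ DIRECTORY/SIZE=ALLOCATED/GRAND_TOTAL SYS$SYSDEVICE:[000000...]
$ SHOW DEVICE/FULL SYS$SYSDEVICE:

The DIRECTORY grand total will exceed the space SHOW DEVICE says is in use, because every file reachable through an alias directory is counted once per path.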

As others have mentioned, you can use DFU to help clarify the situation. It helps because DFU effectively ignores the directory structure of the disk, just looking at the files.
A crucible of informative mistakes
Willem Grooters
Honored Contributor

Re: Drive Space Mystery

Now that John has mentioned free space drift, and that directories are not a requirement, I thought of the following.

Some languages and utilities offer facilities for "temporary files" that are not necessarily visible to DIR, sometimes even without a name. Space will be allocated during the lifetime of the image, and once the image is gone, the space is returned.

SHOW DEVICE will take these into account (I think SHOW DEVICE reads the device bitmap to determine the free space), whereas DIRECTORY reads the entries in INDEXF.SYS to return allocated and occupied space. Since these temporary files are not administered in INDEXF.SYS, you won't see them in DIR, nor will their allocated space be mentioned.
Willem Grooters
OpenVMS Developer & System Manager
Robert Gezelter
Honored Contributor

Re: Drive Space Mystery

Willem,

WADR, I must correct what I believe to be an inaccuracy in your last posting.

Temporary files ALWAYS have an entry in [000000]INDEXF.SYS. What they frequently lack is an entry in a directory.

SHOW DEVICE works from the actual free space on the volume (I do not have the time to check the sources, or dig through the manual at this instant, but my recollection is that it uses the GETDVI system service or equivalent; the same way that F$GETDVI has the FREEBLOCKS parameter).
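From DCL, that is the FREEBLOCKS item of F$GETDVI (DKA0: is a placeholder device name):

$ free = F$GETDVI("DKA0:","FREEBLOCKS")
$ WRITE SYS$OUTPUT "Free blocks: ''free'"

This reads the same volume free-space count that SHOW DEVICE displays.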

DIRECTORY can do a wildcard walk of the directory tree, but such a walk will list files that appear in multiple places more than once, and will miss those that are not in any directory.

ANALYZE/DISK/LIST will produce a listing of the INDEX file directly.

- Bob Gezelter, http://www.rlgsc.com
Guenther Froehlin
Valued Contributor

Re: Drive Space Mystery

First, the free block count from SHOW DEVICE in a cluster is not reliable. This is a "day-one feature": the free block count is kept conveniently (in a lock value block) just for the SHOW DEVICE output, and is not used by the file system when allocating blocks. Only after a SET VOLUME/REBUILD=FORCE is it likely to be close.
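For example (DISK$DATA: is a placeholder device name; the rebuild assumes the volume is mounted and you have suitable privilege):

$ SET VOLUME/REBUILD=FORCE DISK$DATA:
$ SHOW DEVICE/FULL DISK$DATA:

After the forced rebuild, the free block count shown should again reflect the storage bitmap.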

Second, files created by applications as temporary files are not entered into a directory; hence a DIRECTORY command doesn't list them.

Third, there could be "some" other volume structure problem which could result in inaccurate free/allocated blocks information. ANALYZE/DISK/LOCK shows them.

Fourth, always look at the "allocated" blocks when looking for free space information: "DIRECTORY/SIZE=ALLOCATED [000000...]".

Fifth...is there actually a problem, i.e. is free space exhausted and are applications failing with DEVFUL?

/Guenther