
BRUCE BROWN_2
Advisor

Drive Space Mystery

The total device volume is 50 GB. When I do a DIR command from the Master File Directory [000000] down, it shows approximately 25 GB worth of files. (dir /size DOMAINDATA01:[000000.])

I then did a "sh/dev domaindata01 /full" and it shows approximately 3 GB of free space. That leaves roughly 22 GB unaccounted for. I'm not sure what to make of that.

Thank You

Bruce
18 REPLIES
Steven Schweda
Honored Contributor

Re: Drive Space Mystery

You mean

dir /size = all DOMAINDATA01:[000000...]

?

(And we should assume that DOMAINDATA01: is a
physical disk, not some odd-ball rooted
logical name?)

On a bad day, ANAL /DISK /REPAIR may change
things, but usually not so much.


From time to time, actual commands with
actual output can be less mysterious.
Jan van den Ende
Honored Contributor

Re: Drive Space Mystery

Bruce,

like Steven wrote, actual commands and actual output leave less to guess.

Here, without more info, there is a fair chance that the difference between ALLOCATED and USED space MIGHT explain things.
Especially if there are MANY files, and the drives have a BIG cluster size, that can take up a lot of space.
Further, OPEN files (at the moment of the SHOW and/or DIR commands) COULD have an influence.
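
A rough upper bound, as a sketch only (the file count below is a placeholder until we see your DIR output; CLUSTER is a documented F$GETDVI item):

$ ! Sketch: worst-case cluster roundup wastes just under one
$ ! cluster per file.
$ clu   = f$getdvi("DOMAINDATA01","CLUSTER")
$ files = 50000                        ! placeholder: your file count
$ write sys$output "Worst-case roundup waste: ", files * (clu - 1), " blocks"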

The least we would need is the output of
$ SHOW LOGICAL DOMAINDATA01 and
$ DIR/SIZ=ALL DOMAINDATA01:[*...]

Show us the output, in a .TXT attachment.

(and regular readers already know that I absolutely DETEST the :[000000...] construct!!)

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
BRUCE BROWN_2
Advisor

Re: Drive Space Mystery

SHOW LOGICAL DOMAINDATA01
"DOMAINDATA01" = "$1$DGA12:" (LNM$SYSTEM_TABLE)


$ DIR/SIZ=ALL DOMAINDATA01:[*...]

Grand total of 628 directories, 53594 files, 53296924/99130624 blocks

Sorry I cannot give you a listing of the files. We are running a healthcare system.
Steven Schweda
Honored Contributor

Re: Drive Space Mystery

"99130624 blocks" looks to me like about 50GB.

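(A block is 512 bytes, so 99130624 blocks * 512 bytes is about 50.8 * 10^9 bytes: roughly 50 GB decimal, or about 47 GiB.)
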
> Especially if there are MANY files, and the
> drives have a BIG clustersize, that can
> take up a lot of space.

Yup. HELP INITIALIZE /CLUSTER_SIZE

pipe show devi /full DOMAINDATA01: | search sys$input "Cluster size"

> Sorry I cannot [...]

/GRAND is good enough for me.
Hein van den Heuvel
Honored Contributor

Re: Drive Space Mystery

Get yourself a copy of DFU.
No ifs or buts, just get it.

Now:
$DEFINE DFU$NOSMG 1 ! Personal pet hate
$DFU REPORT DOMAINDATA01:
Pay extra close attention to the lines:

1) Total used/ allocated size
2) Total files (ODS2 / ODS5)
3) Directory files

Compare with DIRECTORY/SIZE=ALL output if you feel like it.

Most likely reasons for the 'missing blocks':
1) used ($DIR/SIZE default) vs allocated space
1A) Over-allocated, with an eye on the future
1B) Simple cluster size roundup effect
1C) "Used" size not relevant for access method
-
2) Files NOT entered in a directory (perfectly legal, temp files)
:
99) corrupted disk structure
.

And, while I somewhat understand Jan's objection to [000000...], you may want to keep using it over [*...] in order to also count the space used by files in [000000] itself.

Witness:
$ dir sys$sysdevice:[000000...]*.sys/grand/size
Grand total of 4 directories, 13 files, 1436678 blocks.
$ dir sys$sysdevice:[*...]*.sys/grand/size
Grand total of 3 directories, 4 files, 1101037 blocks.
$ dir sys$sysdevice:[000000]*.sys/grand/size
Grand total of 1 directory, 9 files, 335641 blocks.

Good luck!
Hein.
BRUCE BROWN_2
Advisor

Re: Drive Space Mystery

CERT>SHOW LOGICAL DOMAINDATA01
"DOMAINDATA01" = "$1$DGA12:" (LNM$SYSTEM_TABLE)
CERT>pipe show devi /full DOMAINDATA01: | search sys$input "Cluster size"
Cluster size 128 Transaction count 4474

DIR/SIZ=ALL DOMAINDATA01:[*...] /grand

Grand total of 627 directories, 53644 files, 53183165/98900352 blocks.
BRUCE BROWN_2
Advisor

Re: Drive Space Mystery

> Get yourself a copy of DFU.

What is it and where can I find it?
Hein van den Heuvel
Honored Contributor

Re: Drive Space Mystery

Ah, more data came in while replying.

Grand is Grand!

It does not look like cluster size roundup UNLESS... the cluster size is at least 1000+.
It is more likely to be 95 or 128.

The worst-case waste would be if every single file and directory used just 1 block in its last extent (cluster). For the missing blocks to add up to the observed difference, the cluster size would have to be:

$write sys$output (99130624 - 53296924 - 53594) / ( 53594 + 628 )
844

In practice the average waste is typically just over 50% of a cluster, not 99%, in which case the cluster size would have to be 2000-ish. Unlikely.
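
(Checking that estimate: 844 blocks of average waste per object, at half a cluster each, implies a cluster size of about 2 * 844 = 1688 blocks, hence "2000-ish".)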

Back to DFU. Just use:

DFU SEARCH /OVER_ALLOCATED=10000 DOMAINDATA01:

hth,
Hein.
Jan van den Ende
Honored Contributor

Re: Drive Space Mystery

Bruce,

>>>
Grand total of 628 directories, 53594 files, 53296924/99130624 blocks
<<<

53296924 blocks used, 99130624 allocated.

99130624 blocks = approx 49.5 GB.

The big difference can come from

- big disk cluster size.
The allocated size is always at least the used size rounded up to the next multiple of clustersize.
- big extend size
A file that needs more space normally grows with the extend size. (rounded up to ...)
The extend size can be defined at INIT time, overruled at MOUNT, overruled per process (see the sketch after this list), overruled at file creation, overruled at file open. In short, not ONE fixed amount.
- a file that has grown a lot since the last OPEN. The EOF (for sequential files; for other organizations something similar but more complex) only gets written at file CLOSE. (Clean example: batch LOG files always show a used size of 0 while the job is running.)
- file allocation cache; the system normally keeps about 10% of free space in a pre-allocation cache. That is not always reported correctly
- (usually the least interesting): an improper dismount does NOT "give back" the previous cache. (But a volume rebuild should cure that)
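
For the per-process extend override in that list, a sketch using the standard RMS defaults commands (the value below is only an illustration):

$ ! Sketch: inspect, then raise, the process-level RMS extend
$ ! quantity. A large value inflates allocation whenever files grow.
$ SHOW RMS_DEFAULT
$ SET RMS_DEFAULT/EXTEND_QUANTITY=65535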

What is the most important in your case, only you can know.

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Hein van den Heuvel
Honored Contributor

Re: Drive Space Mystery

Ever heard about this wonderful newfangled thing called google?

Google: dfu openvms download

http://www.digiater.nl/dfu.html

Hein.
BRUCE BROWN_2
Advisor

Re: Drive Space Mystery

>>>
Ever heard about this wonderful newfangled thing called google?

Google: dfu openvms download

http://www.digiater.nl/dfu.html

Hein.
<<<

Your fun at Parties I expect.
Steven Schweda
Honored Contributor

Re: Drive Space Mystery

> Your fun at Parties I expect.

That's "you're". (And he's probably more fun
than I am.)

This new Inter-Web thing sure is neat.
Jon Pinkley
Honored Contributor

Re: Drive Space Mystery

Bruce,

You really should get and install DFU. It is a "Swiss army knife" for disk/file operations; it was written as an internal tool at DEC and has since been released as unsupported software.

Until you install it, you can use directory to find the most likely candidates.

Do you have anything that is monitoring usage of your disks? If so, was there just a gradual increase in allocation, or was there some point in time where the rate of usage changed drastically? It appears there are many connections to files on the disk: from "Cluster size 128 Transaction count 4474", there are 4474 "open" files from the node you did the SHOW DEVICE/FULL from. (That doesn't mean 4474 unique files; there may be one file opened 4400 times.) The reason I point this high transaction count out is that if the allocation "extend" size has recently been changed in an effort to reduce disk fragmentation, it could contribute to the "over allocation". (This was one of the things in Jan's list.)

Given "Grand total of 628 directories, 53594 files, 53296924/99130624 blocks", the average allocation per file on that disk is 99130624/53594 or 1850 blocks. (The directories are included in the total files, so that information isn't needed for this calculation.) The "used" amount is 53296924/53594 or 994 blocks. So as Hein pointed out, the clustersize of 128 by itself can't explain the over allocation, since "worst case", only 127 blocks per file could be over allocated.
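
The same arithmetic in DCL, for anyone following along (DCL integer division truncates, hence 1849 rather than the rounded 1850):

$ write sys$output 99130624 / 53594   ! average allocated blocks per file
1849
$ write sys$output 53296924 / 53594   ! average used blocks per file
994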

The question is: is the over-allocated space spread over a lot of files, or are there a few files with a gargantuan over-allocation? The latter can happen if a file is being written to and never closed or flushed; the EOF will remain at the value that existed when the file was opened. So you can have a file with 0/10000000 blocks that won't be found with the command:

$ directory/select=(size:min:10000)

But will be found with

$ directory/select=(allo:min:10000)

So until you get DFU installed, you may want to do something like:

$ directory/select=(allocated:min:1000000)/size=all/wid=(size:8)

to see if any unexpected files pop out.

Good luck,

Jon
it depends
Jon Pinkley
Honored Contributor
Solution

Re: Drive Space Mystery

Well, I should have tried the commands I recommended before I posted, so now I have to eat crow.

The command

$ directory/size=allocation/sel=size:min:1000000/width=(size:8) domaindata01:[000000...]

will display files that have at least 1000000 blocks allocated.

I rarely use that functionality of directory, since I normally use DFU for that.

This command will display files where the difference between allocated and used exceeds 10000 blocks:

$ directory /select=size=unused=10000 /size=all/width=(size:8) domaindata01:[000000...]


Sorry for the incorrect info I posted.

Jon
it depends
John Gillings
Honored Contributor

Re: Drive Space Mystery

Just to wind back to the original question - why there may be a discrepancy between what the DIRECTORY command says and what SHOW DEVICE says about the contents of a particular disk.

Apart from timing issues and "free space drift" inherent in the design of a cluster wide file system, even assuming a frozen disk with no file system errors, the two values may differ in either direction.

The directory structure is in some ways optional; it's just a convenient way for humans to catalogue and find files. There is no structural or syntactic reason that a particular file must be entered into a directory, and there are sometimes good reasons that it is not. You can therefore have "dark disk space" consumed that can't be seen from the DIRECTORY command. ANALYZE/DISK will flag these files as "LOST", but that's not necessarily true - it depends on the files. The presence of such files will manifest as SHOW DEVICE claiming less free space than DIRECTORY, and both may be correct.
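
One way to surface that dark space from DCL, as a sketch (a scan without /REPAIR is read-only and only reports what it finds; $1$DGA12: is the device from this thread):

$ ! Sketch: report structure issues, including files not entered
$ ! in any directory, without changing anything on the disk.
$ ANALYZE/DISK_STRUCTURE $1$DGA12: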

In the other direction, a given file may be entered in multiple directories. The simplest example being VMS$COMMON.DIR and all the SYSCOMMON.DIR alias directories, one for each node booting from a common system disk. In this case, the same files will be counted multiple times by DIRECTORY. This means DIRECTORY will report more space used than SHOW DEVICE. Indeed, DIRECTORY may claim more space used than the capacity of the volume (try DIR/SIZE=ALL/GRAND of a VMS installation CD). Again it will be correct. It's even possible to create a loop in a directory structure, which could cause the used space to appear infinite, except for DIRECTORY depth limits.

As others have mentioned, you can use DFU to help clarify the situation. It helps because DFU effectively ignores the directory structure of the disk, just looking at the files.
A crucible of informative mistakes
Willem Grooters
Honored Contributor

Re: Drive Space Mystery

Now that John has mentioned free space drift, and that directories are not a requirement, I thought of the following.

Some languages and utilities offer facilities for "temporary files" that are not necessarily visible to DIR, sometimes even without a name. Space is allocated during the lifetime of the image, and once the image is gone, the space is returned.

SHOW DEVICE will take these into account (I think SHOW DEVICE reads the device bitmap to determine the free space), whereas DIRECTORY reads the entries in INDEXF.SYS to return allocated and occupied space. Since these temporary files are not administered in INDEXF.SYS, you won't see them in DIR, nor will their allocated space be mentioned.
Willem Grooters
OpenVMS Developer & System Manager
Robert Gezelter
Honored Contributor

Re: Drive Space Mystery

Willem,

WADR, I must correct what I believe to be an inaccuracy in your last posting.

Temporary files ALWAYS have an entry in [000000]INDEXF.SYS. What they frequently lack is an entry in a directory.

SHOW DEVICE works from the actual free space on the volume (I do not have the time to check the sources, or dig through the manual at this instant, but my recollection is that it uses the GETDVI system service or equivalent; the same way that F$GETDVI has the FREEBLOCKS parameter).
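
For illustration, a DCL sketch of that approach (FREEBLOCKS and MAXBLOCK are documented F$GETDVI items; DOMAINDATA01 is the logical from this thread):

$ ! Sketch: derive allocated space from the volume's own accounting,
$ ! independent of any directory walk.
$ free  = f$getdvi("DOMAINDATA01","FREEBLOCKS")
$ total = f$getdvi("DOMAINDATA01","MAXBLOCK")
$ write sys$output "Allocated on volume: ", total - free, " blocks"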

DIRECTORY can do a wildcard walk of the directory tree, but such a walk will list files entered in multiple places more than once, and will miss those that are not in any directory.

ANALYZE/DISK/LIST will produce a listing of the INDEX file directly.

- Bob Gezelter, http://www.rlgsc.com
Guenther Froehlin
Valued Contributor

Re: Drive Space Mystery

First, the free block count from SHOW DEVICE in a cluster is not reliable. This is a "day-one feature": the free block count is kept (in a lock value block) conveniently just for the SHOW DEVICE output and is not used by the file system when allocating blocks. Only after a SET VOLUME/REBUILD=FORCE may it be close.
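
A sketch of that rebuild (assuming the volume is mounted; DOMAINDATA01 is the logical from this thread):

$ ! Sketch: force a rebuild to refresh the free-block count, then
$ ! re-check what SHOW DEVICE reports.
$ SET VOLUME/REBUILD=FORCE DOMAINDATA01:
$ SHOW DEVICE DOMAINDATA01: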

Second, files created by applications as temporary files are not entered into a directory. Hence a "DIRECTORY" command doesn't list them.

Third, there could be "some" other volume structure problem which could result in inaccurate free/allocated block information. ANALYZE/DISK/LOCK shows them.

Fourth, always look at the "allocated" blocks when looking for free space information: "DIRECTORY/SIZE=ALLOCATED [000000...]".

Fifth... is there actually a problem with free space, with applications failing with DEVFUL?

/Guenther