Operating System - OpenVMS > Drive Space Mystery
03-15-2008 09:02 AM
I then did a "sh/dev domaindata01 /full" and it shows approximately 3 GB of free space. That leaves 12 GB unaccounted for. I'm not sure what to make of that.
Thank You
Bruce
03-15-2008 09:37 AM
Re: Drive Space Mystery
dir /size = all DOMAINDATA01:[000000...]
?
(And we should assume that DOMAINDATA01: is a
physical disk, not some odd-ball rooted
logical name?)
On a bad day, ANAL /DISK /REPAIR may change
things, but usually not so much.
From time to time, actual commands with
actual output can be less mysterious.
03-15-2008 09:49 AM
Re: Drive Space Mystery
As Steven wrote, actual commands and actual output leave less to guess.
Here, without more info, there is a fair chance that the difference between ALLOCATED and USED space MIGHT explain things.
Especially if there are MANY files, and the drives have a BIG clustersize, that can take up a lot of space.
Further, if there are OPEN files (at the moment of the SHOW and/or DIR commands) that COULD have an influence.
The least we would need is the output of
$ SHOW LOGICAL DOMAINDATA01 and
$ DIR/SIZ=ALL DOMAINDATA01:[*...]
Show us the output, in a .TXT attachment
(and regular readers already know that I absolutely DETEST the
Proost.
Have one on me.
jpe
03-15-2008 10:03 AM
Re: Drive Space Mystery
"DOMAINDATA01" = "$1$DGA12:" (LNM$SYSTEM_TABLE)
$ DIR/SIZ=ALL DOMAINDATA01:[*...]
Grand total of 628 directories, 53594 files, 53296924/99130624 blocks
Sorry I cannot give you a listing of the files. We are running a healthcare system.
03-15-2008 10:11 AM
Re: Drive Space Mystery
> Especially if there are MANY files, and the
> drives have a BIG clustersize, that can
> take up a lot of space.
Yup. HELP INITIALIZE /CLUSTER_SIZE
pipe show devi /full DOMAINDATA01: | search sys$input "Cluster size"
> Sorry I cannot [...]
/GRAND is good enough for me.
03-15-2008 10:12 AM
Re: Drive Space Mystery
No ifs or buts, just get it.
Now:
$DEFINE DFU$NOSMG 1 ! Personal pet hate
$DFU REPORT DOMAINDATA01:
Pay extra close attention to the lines:
1) Total used/ allocated size
2) Total files (ODS2 / ODS5)
3) Directory files
Compare with DIRECTORY/SIZE=ALL output if you feel like that.
Most likely reasons for 'missing blocks':
1) used ($DIR/SIZE default) vs allocated space
1A) Over-allocated, with an eye on the future
1B) Simple cluster size roundup effect
1C) "Used" size not relevant for access method
-
2) Files NOT entered in a directory (perfectly legal, temp files)
:
99) corrupted disk structure
.
And, while I somewhat understand Jan's objection against [000000...], you may want to keep on using that over [*...] in order to automatically count in the space used by files in [000000].
Witness:
$ dir sys$sysdevice:[000000...]*.sys/grand/size
Grand total of 4 directories, 13 files, 1436678 blocks.
$ dir sys$sysdevice:[*...]*.sys/grand/size
Grand total of 3 directories, 4 files, 1101037 blocks.
$ dir sys$sysdevice:[000000]*.sys/grand/size
Grand total of 1 directory, 9 files, 335641 blocks.
Good luck!
Hein.
03-15-2008 10:16 AM
Re: Drive Space Mystery
"DOMAINDATA01" = "$1$DGA12:" (LNM$SYSTEM_TABLE)
CERT>pipe show devi /full DOMAINDATA01: | search sys$input "Cluster size"
Cluster size 128 Transaction count 4474
DIR/SIZ=ALL DOMAINDATA01:[*...] /grand
Grand total of 627 directories, 53644 files, 53183165/98900352 blocks.
03-15-2008 10:20 AM
Re: Drive Space Mystery
What is it and where can I find it?
03-15-2008 10:22 AM
Re: Drive Space Mystery
Grand is Grand!
It does not look like clustersize roundup UNLESS... the cluster size is at least 1000+
It is more likely to be 95 or 128.
The worst-case waste would be if every single file and directory used just 1 block in its last extent (cluster). To make the missing blocks work out, the cluster size needed to waste 50M blocks would have to be:
$write sys$output (99130624 - 53296924 - 53594) / ( 53594 + 628 )
844
But the average waste is typically just over 50% of a cluster, not 99%, in which case the cluster size would have to be 2000-ish. Unlikely.
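Hein's worst-case figure can be reproduced with any calculator; here is the same arithmetic as a quick Python sketch (Python used purely as a calculator, with the figures taken from the DIR /GRAND output earlier in the thread):

```python
# Figures from the DIR/SIZ=ALL ... /GRAND output above.
used, allocated = 53296924, 99130624   # blocks used / blocks allocated
files, dirs = 53594, 628

# Worst case: every file keeps just 1 block of its final cluster,
# wasting (cluster_size - 1) blocks.  The cluster size required for
# roundup alone to explain the gap:
wasted = allocated - used - files      # subtract the 1 used block per file
per_object = wasted / (files + dirs)
print(round(per_object))               # ~844, versus the actual 128
```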
Back to DFU. Just use:
DFU SEARCH /OVER_ALLOCATED=10000 DOMAINDATA01:
hth,
Hein.
03-15-2008 10:23 AM
Re: Drive Space Mystery
>>>
Grand total of 628 directories, 53594 files, 53296924/99130624 blocks
<<<
53296924 blocks used, 99130624 allocated.
99130624 blocks (x 512 bytes) = approx 47 GB.
The big difference can come from
- big disk cluster size.
The allocated size is always at least the used size rounded up to the next multiple of clustersize.
- big extend size
A file that needs more space normally grows with the extend size. (rounded up to ...)
The extend size can be defined at INIT time, overruled at MOUNT, overruled per process, overruled at file creation, overruled at file open. In short, not ONE fixed amount.
- a file that has grown much since the last OPEN. The EOF (for sequential files; for other organizations something similar but more complex) only gets written at file CLOSE. (Clean example: batch LOG files during the run always show 0/<allocation>.)
- file allocation cache: the system normally keeps about 10% of free space in a pre-allocation cache. Those blocks are not always reported correctly.
- (usually the least interesting): an improper dismount does NOT "give back" the previous cache. (But a volume rebuild should cure that)
What is the most important in your case, only you can know.
hth
Proost.
Have one on me.
jpe
03-15-2008 10:24 AM
Re: Drive Space Mystery
Google: dfu openvms download
http://www.digiater.nl/dfu.html
Hein.
03-15-2008 10:32 AM
Re: Drive Space Mystery
Google: dfu openvms download
http://www.digiater.nl/dfu.html
Hein.
Your fun at Parties I expect.
03-15-2008 10:36 AM
Re: Drive Space Mystery
That's "you're". (And he's probably more fun
than I am.)
This new Inter-Web thing sure is neat.
03-15-2008 02:50 PM
Re: Drive Space Mystery
You really should get and install DFU; it is a "Swiss army knife" for disk/file operations. It was written as an internal tool at DEC and has since been released as unsupported software.
Until you install it, you can use DIRECTORY to find the most likely candidates.
Do you have anything that is monitoring usage of your disks? If so, was there just a gradual increase in allocation, or was there some point in time where the rate of usage changed drastically? It appears there are many connections to files on the disk: from "Cluster size 128 Transaction count 4474", there are 4474 "open" files from the node you did the SHOW DEVICE/FULL from. (That doesn't mean 4474 unique files; there may be one file opened 4400 times.) The reason I point this high transaction count out is that if the allocation "extend" size has recently been changed in an effort to reduce disk fragmentation, it could contribute to the "over allocation". (This was one of the things in Jan's list.)
Given "Grand total of 628 directories, 53594 files, 53296924/99130624 blocks", the average allocation per file on that disk is 99130624/53594 or 1850 blocks. (The directories are included in the total files, so that information isn't needed for this calculation.) The "used" amount is 53296924/53594 or 994 blocks. So as Hein pointed out, the clustersize of 128 by itself can't explain the over allocation, since "worst case", only 127 blocks per file could be over allocated.
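Jon's averages, and the worst-case roundup bound he mentions, check out numerically (again a Python sketch used only as a calculator, with the same figures from the thread):

```python
used, allocated, files = 53296924, 99130624, 53594
cluster = 128

print(round(allocated / files))    # ~1850 blocks allocated per file
print(round(used / files))         # ~994 blocks used per file

# Upper bound on cluster-size roundup: every file wastes at most
# (cluster - 1) blocks in its last cluster.
max_roundup = (cluster - 1) * files
gap = allocated - used
print(max_roundup, gap)            # 6806438 vs 45833700: roundup alone
                                   # cannot explain the gap
```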
The question is, is the over allocated space spread over a lot of files, or are there a few files with a gargantuan over allocation. This can happen if a file is being written to and never closed or flushed, the EOF will remain at the value that existed when the file was opened. So you can have a file with 0/10000000 that won't be found with the command:
$ directory/select=(size:min:10000)
But will be found with
$ directory/select=(allo:min:10000)
So until you get DFU installed, you may want to do something like:
$ directory/select=(allocated:min:1000000)/size=all/wid=(size:8)
to see if any unexpected files pop out.
Good luck,
Jon
03-15-2008 03:06 PM
Solution
The command
$ directory/size=allocation/sel=size:min:1000000/width=(size:8) domaindata01:[000000...]
will display files that have at least 1000000 blocks allocated.
I rarely use that functionality of directory, since I normally use DFU for that.
This command will display files where the difference between allocated and used exceeds 10000 blocks:
$ directory /select=size=unused=10000 /size=all/width=(size:8) domaindata01:[000000...]
Sorry for the incorrect info I posted.
Jon
03-16-2008 01:40 PM
Re: Drive Space Mystery
Apart from timing issues and "free space drift" inherent in the design of a cluster wide file system, even assuming a frozen disk with no file system errors, the two values may differ in either direction.
The directory structure is in some ways optional; it's just a convenient way for humans to catalogue and find files. There is no structural or syntactic reason that a particular file must be entered into a directory, and there are sometimes good reasons that it is not. You can therefore have "dark disk space" consumed that can't be seen from the DIRECTORY command. ANALYZE/DISK will report these files as "LOST", but that's not necessarily true; it depends on the files. The presence of such files will manifest as SHOW DEVICE claiming less free space than DIRECTORY, and both may be correct.
In the other direction, a given file may be entered in multiple directories. The simplest example being VMS$COMMON.DIR and all the SYSCOMMON.DIR alias directories, one for each node booting from a common system disk. In this case, the same files will be counted multiple times by DIRECTORY. This means DIRECTORY will report more space used than SHOW DEVICE. Indeed, DIRECTORY may claim more space used than the capacity of the volume (try DIR/SIZE=ALL/GRAND of a VMS installation CD). Again it will be correct. It's even possible to create a loop in a directory structure, which could cause the used space to appear infinite, except for DIRECTORY depth limits.
As others have mentioned, you can use DFU to help clarify the situation. It helps because DFU effectively ignores the directory structure of the disk, just looking at the files.
03-17-2008 04:07 AM
Re: Drive Space Mystery
Some languages and utilities offer facilities for "temporary files", which are not necessarily visible to DIR, sometimes even without a name. Space is allocated during the lifetime of the image, and once the image is gone, the space is returned.
SHOW DEVICE will take these into account (I think SHOW DEVICE reads the device bitmap to determine the free space), whereas DIRECTORY reads the entries in INDEXF.SYS to return allocated and occupied space. Since these temporary files are not administered in INDEXF.SYS, you won't see them in DIR, nor will their allocated space be mentioned.
OpenVMS Developer & System Manager
03-17-2008 04:53 AM
Re: Drive Space Mystery
WADR, I must correct what I believe to be an inaccuracy in your last posting.
Temporary files ALWAYS have an entry in [000000]INDEXF.SYS. What they frequently lack is an entry in a directory.
SHOW DEVICE works from the actual free space on the volume (I do not have the time to check the sources, or dig through the manual at this instant, but my recollection is that it uses the GETDVI system service or equivalent; the same way that F$GETDVI has the FREEBLOCKS parameter).
DIRECTORY can do a wildcard walk of the directory tree, but such a walk will list files multiple times if they are entered in multiple places, and will miss those that are not in any directory.
ANALYZE/DISK/LIST will produce a listing of the INDEX file directly.
- Bob Gezelter, http://www.rlgsc.com
03-17-2008 08:04 AM
Re: Drive Space Mystery
Second, files created by applications as temporary files are not entered into a directory. Hence a "DIRECTORY" command doesn't list them.
Third, there could be "some" other volume structure problem, which could result in inaccurate free/allocated blocks information. ANALYZE/DISK/LOCK shows them.
Fourth, always look at the "allocated" blocks when looking for free space information: "DIRECTORY/SIZE=ALLOCATED [000000...]".
Fifth, is there actually a problem with free space, with applications failing with DEVFUL?
/Guenther