Operating System - OpenVMS
Output of "show dev d" seems not correct

 
Hoff
Honored Contributor

Re: Output of "show dev d" seems not correct

If the disk is quiescent, then the reported behavior would be atypical, particularly if it persists after both a forced rebuild (which resets the counts) and an analyze rebuild (which purges old caches and marked-for-delete files).

If the disk is active, with application I/O channels open, then it's entirely possible to have temporary files active within the file system that are not visible to DIRECTORY: caches, scratch files, and such.

Active OpenVMS system disks are expected to have various active files and cache activity and locked files.
Hein van den Heuvel
Honored Contributor

Re: Output of "show dev d" seems not correct


I suspect you already know which files have trouble, and possibly why.

But if you need some further clarification, then get DFU installed from http://www.digiater.nl/dfu
(Hint : $ DEFINE DFU$NOSMG YES )

With that in place you ask for the big files:

$ DFU SEARC/SIZE=MINI=1000000 $1$DGA100

You will find this finishes in seconds rather than minutes, compared to $ DIR/SIZE=ALL/SELE=SIZE=UNUSED=nnnn

An interesting alternative:

$ DFU SEARCH / OVER_ALLOCATED = n

I also like /FRAG=MINI=nnn
And of course the simple REPORT $1$DGA100

The directory tree deletion is nice too, although I find it too verbose, and I find it odd that you cannot specify a directory ( [xxx.yyy] ), only a directory file ( [xxx]yyy.dir )

$ DFU DELE /DIRE/TREE/NOLOG xxx.DIR

fwiw,
Hein

Jon Pinkley
Honored Contributor

Re: Output of "show dev d" seems not correct

Hoff has a valid point: some files can use space without being cataloged in any directory, for example spool files. These are the exception rather than the rule, but they can account for space that doesn't show up in DIRECTORY output. ANALYZE/DISK/REPAIR will place these files in the [SYSLOST] directory.

That's another reason for using DFU: it can see lost files, since it reads INDEXF.SYS (where the file headers live) directly instead of traversing the directories.

See attachment for an example of creating an over allocated file, "losing it", and showing that directory won't find it, but DFU will.

I agree with Hein, DFU is well worth installing. It is much faster than directory, especially on a device that has 3025 directories and 91108 files.

Note that if you want to use DFU to find files that are consuming space, you need to use the /ALLOCATED qualifier.

Here's an example of the difference. (trimmed, see attachment for complete output)

$ dfu search /size=min:400000/sort sys$sysdevice

DSA1407:[TCPIP$FTP]TCPIP$FTP_RUN.LOG;697 433780/433792

%DFU-S-FND , Files found : 1, Size : 433780/433792

$ dfu search /size=min:400000/allo/sort sys$sysdevice

DSA1407:[TCPIP$FTP]TCPIP$FTP_RUN.LOG;697 433780/433792
DSA1407:[VMS$COMMON.SYSEXE]SYS$QUEUE_MANAGER.QMAN$JOURNAL;1 2/421792

%DFU-S-FND , Files found : 2, Size : 433782/855584

$
it depends
Jon Pinkley
Honored Contributor

Re: Output of "show dev d" seems not correct

As Jan pointed out, truncating a file can't reduce the space occupied by a file to less than a multiple of the cluster size. In other words, if the cluster size is 16380 and you create a text file with 1 character in it, it will consume 16380 blocks, and truncating it will not reduce the allocated size. Think about it: BITMAP.SYS has a bit for every cluster on the disk, not for every block, so the allocation "quantum" is one cluster. The EOF is just a record of where the last portion of data is.
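The rounding rule above can be sketched in Python (a hypothetical helper for illustration, not a VMS utility; cluster size and used size are both in 512-byte blocks):

```python
import math

# Allocation is rounded up to whole clusters, because BITMAP.SYS
# tracks one bit per cluster, not one bit per block.
def allocated_blocks(used_blocks: int, cluster_size: int) -> int:
    """Smallest multiple of cluster_size that can hold used_blocks."""
    return math.ceil(used_blocks / cluster_size) * cluster_size

# A 1-character text file (1 block used) on a volume with a
# 16380-block cluster still allocates a full cluster:
print(allocated_blocks(1, 16380))  # 16380
```

One block over a cluster boundary costs a whole additional cluster: allocated_blocks(16381, 16380) yields 32760.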

Truncation will help if the difference between used and allocated is .ge. one cluster. However, you can only truncate a file with SET FILE/TRUNCATE if the file is not opened by any process (anywhere in the cluster); otherwise you will get a file access conflict error.

This is true even if the file is opened for write sharing.

$ copy sys$login:login.com disk$uio:[000000]/allo=100000
$ dir /siz=all disk$uio:[000000]login.com

Directory DISK$UIO:[000000]

LOGIN.COM;245 22/100000

Total of 1 file, 22/100000 blocks.
$ open/app/share=write foo disk$uio:[000000]login.com

From another process..

$ set file/trun disk$uio:[000000]login.com
%SET-E-READERR, error reading DISK$UIO:[000000]LOGIN.COM;245
-SYSTEM-W-ACCONFLICT, file access conflict

Also, analyze disk/repair does not truncate files.

$ close foo ! back to original process that had file open
$ anal/disk/rep disk$uio
Analyze/Disk_Structure/Repair for _DSA3406: started on 6-JUL-2010 16:50:08.87

%ANALDISK-I-SHORTBITMAP, storage bitmap on RVN 1 does not cover the entire device
%ANALDISK-I-OPENQUOTA, error opening QUOTA.SYS
-SYSTEM-W-NOSUCHFILE, no such file
$ dir /siz=all disk$uio:[000000]login.com

Directory DISK$UIO:[000000]

LOGIN.COM;245 22/100000

Total of 1 file, 22/100000 blocks.
$ set file/trun disk$uio:[000000]login.com
$ dir /siz=all disk$uio:[000000]login.com

Directory DISK$UIO:[000000]

LOGIN.COM;245 22/24

Total of 1 file, 22/24 blocks.
$ del disk$uio:[000000]login.com;
$

Jon
it depends
Jon Pinkley
Honored Contributor

Re: Output of "show dev d" seems not correct

@Ricky

XXXX:SYSTEM> dir /size=(alloc,unit=byte)/total
Directory SMSC$ROOT:[LOG]
Total of 58 files, 59.40GB

I am not sure what the cause is.
Can you suggest the safest steps to troubleshoot?

-----------------------------------------

These files are open by some application, and you will need to determine what that is. The easiest way is to use the commands:

$ show device/files SMSC$ROOT /out=sys$scratch:smsc.files
$ search sys$scratch:smsc.files ".LOG]"

You will need to do this from every member of the VMS cluster, since each node will only display files that are opened on the current node.

This will display the process name and process id (8 hex digits) that has the file open. If the process name does not help, you can use analyze/system to see more about the process.

Suppose this was the output:

SMSC1 20800411 [SMSC.LOG]ABCD.LOG;32

$ analyze/system
SDA> set process/ind=20800411
SDA> show process/chan ! this will display all the files open by the process and will give a good clue
SDA> exit

Now you need to determine how to get that process to close the file. Then you can copy it somewhere else and restart the process with a new log file.

Jon

A Google search for smsc$root found this:

http://www.scribd.com/SMSC-52MR4-Installation-Manual/d/20342007

which appears to be software from Acision that uses Rdb. SMSC is an abbreviation for Short Message Service Centre.
it depends
Ricky Pardede
Occasional Advisor

Re: Output of "show dev d" seems not correct

@Jon :
Thanks for the tips on checking open files.
Yes, this system is an SMSC.

The disk is normal now.
I stopped all the processes that were still locking files, then deleted those files.
The disk occupancy is back to normal.

I want to ask about the locked files:
dir /size=all
XXX00.DUMP;1 0/*******
XXX01.DUMP;1 0/*******
XXX02.DUMP;1 0/*******

It shows 0 blocks used, but a very large allocation.

Can you explain how OpenVMS decides the used and allocated block counts for these files?



Joseph Huber_1
Honored Contributor

Re: Output of "show dev d" seems not correct

An open file gets no update to its size, dates, etc. until it is closed.
So if an application opens a file once and only closes it when the program terminates, the file shows 0 blocks used and a growing number of blocks allocated.

Some utilities close and reopen their log files at regular intervals, so the used blocks do show up in a directory listing.
http://www.mpp.mpg.de/~huber
Jon Pinkley
Honored Contributor

Re: Output of "show dev d" seems not correct

@Ricky
I want to ask about the locked files:
dir /size=all
XXX00.DUMP;1 0/*******

It shows 0 blocks used, but a very large allocation.

Can you explain how OpenVMS decides the used and allocated block counts for these files?

-----------------

Joseph Huber explained why the used size is displayed as zero. Unless a process explicitly requests that the EOF (End Of File) pointer be updated, it remains static. It gets updated when the file metadata is resynchronized at the program's request: either by an RMS $FLUSH or when the file is closed. If a file is created when it is opened and then written to over a long period, the file system keeps extending the file's allocated space to make room for the data. The end-of-data pointer is constantly updated in the memory of the process writing the data, but unless the metadata is resynchronized, the updated value is not visible to other processes, and what DIRECTORY/SIZE=USED displays remains 0.
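Loosely analogous behavior can be shown with buffered I/O in Python: until the writer flushes (the rough analogue of an RMS $FLUSH) or closes the file, other readers see a stale size. This is only an analogy on a non-VMS file system, not a demonstration of RMS itself:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.log")
f = open(path, "w")           # block-buffered by default
f.write("x" * 100)            # data sits in the process's buffer
print(os.path.getsize(path))  # 0 -- nothing has reached the file yet
f.flush()                     # like RMS $FLUSH: push the data out
print(os.path.getsize(path))  # 100 -- now visible to other readers
f.close()
```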

In the directory output above, the files were larger than 9,999,999 blocks (about 5 GB). The default width of the size field is 7, so the allocated blocks were displayed as *******. To see the actual value, use DIRECTORY/FULL or DIRECTORY/WIDTH=(SIZE=12)/SIZE=ALL (see HELP DIRECTORY /WIDTH).

The used/allocated concept isn't unique to VMS; Windows does the same thing, although the "allocated" space is a bit harder to see there: it shows up in a file's properties as "Size on Disk".

Size: 706 bytes (706 bytes)
Size on Disk: 4.00 KB (4,096 bytes)
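On Unix-like systems the same pair is visible through os.stat: st_size is the logical byte count (the "used" figure), while st_blocks reports 512-byte units actually allocated (the "Size on Disk" figure). A small sketch, assuming a local POSIX file system:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "small.txt")
with open(path, "wb") as f:
    f.write(b"x" * 706)       # 706 bytes of data, matching the example above

st = os.stat(path)
print(st.st_size)             # 706 -- logical size ("used")
print(st.st_blocks * 512)     # allocated bytes, rounded up to whole FS blocks
```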

------------------

There is probably a cleaner way to get the processes to close the current .dump file and open a new one than stopping the process, but I know nothing about the SMSC software. You will need to consult its documentation.

Jon
it depends