Operating System - OpenVMS

Re: Dealing with a badly fragmented disk

 
John A.  Beard
Regular Advisor

Re: Dealing with a badly fragmented disk

Hi Robert,

Maybe you could clear up one thing in relation to not having to backup/init/restore the disk. I may be off the beaten track altogether with this, but I know from a few tests I did that I could not copy /contig a 2 GB file back onto this disk, and had to revert to allowing it to be created wherever the system deemed fit. How can I make the free space contiguous if it is already fragmented (or does it matter in the slightest)? The 100-odd GB of files that are created on this disk are deleted after the network backup, and shortly thereafter a replacement set is created. Aren't all of these new files going to have the same contiguous space allocation problems if things remain unchecked?
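A minimal sketch of that kind of test, assuming DFU is installed and defined as a foreign command; the device and file names are only placeholders, not the real ones from this system:

$ COPY /CONTIGUOUS BIGFILE.BCK DKA100:[BACKUPS]   ! fails unless a single free extent can hold the whole file
$ DFU REPORT DKA100:                              ! summarizes free space, including the largest free extent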
Glacann fear críonna comhairle. (A wise man takes advice.)
Robert Gezelter
Honored Contributor

Re: Dealing with a badly fragmented disk

John,

Working without visibility is a challenge. However, moving these files will often free up space.

In the case of backup files, they are effectively never in use, so the fragmentation is not necessarily a performance issue.

What I am recommending is exhausting the possibilities BEFORE going through the step of image backup/restore, as that will not inherently solve more than the instant problem.

- Bob Gezelter, http://www.rlgsc.com
John Gillings
Honored Contributor

Re: Dealing with a badly fragmented disk

John,

If you do resort to rebuilding the disk...

(and I'd STRONGLY recommend that, instead of BACKUP/INIT/RESTORE via tape to the same physical drive, you get a new drive,

$ BACKUP/IMAGE

then swap the physical drives. That way you always have an immediate recovery path, you're never at risk of losing your data to a failed tape or tape drive, and it's MUCH faster)

anyway, back to my original point. If this disk contains very large files, consider setting the cluster size to a very large value when you reinit (and remember to BACKUP/IMAGE/NOINIT to preserve the new cluster size). The larger the cluster size, the less fragmentation is possible, and the less it matters. Yes, you can "waste" up to a whole cluster at the end of every file, but if you have a small number of large files, that's a negligible overhead.
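A rough sketch of that sequence, assuming the replacement drive appears as DKA200: and the current data disk is DKA100: (the device names, label, and cluster size are placeholders only):

$ INITIALIZE /CLUSTER_SIZE=8192 DKA200: DATA2    ! pick a cluster size suited to your file sizes
$ MOUNT /FOREIGN DKA200:                         ! an image copy to disk needs the output mounted foreign
$ BACKUP /IMAGE /NOINIT DKA100: DKA200:          ! /NOINIT keeps the cluster size set by INITIALIZE
$ DISMOUNT DKA200: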

Remember that a file header can hold between 55 and 75 extents, so choose a cluster size that is at least 1/50th of the size of your average file (so, for example, if the files are around 1GB, perhaps choose a cluster size of 65536 blocks). That way it's impossible for those files to overflow their headers.
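Working that arithmetic through in DCL, with a hypothetical 1 GB file:

$ FILE_BLOCKS = (1024 * 1024 * 1024) / 512     ! 1 GB expressed in 512-byte blocks
$ SHOW SYMBOL FILE_BLOCKS                      !   2097152
$ MIN_CLUSTER = FILE_BLOCKS / 50               ! 1/50th of the file size
$ SHOW SYMBOL MIN_CLUSTER                      !   41943; 65536 is presumably that rounded up to a power of two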

If the disk is shared with small files, consider segregating your data according to size. With appropriate use of search lists, this can often be made transparent to the application.
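A minimal illustration of the search list idea, with entirely hypothetical device, directory, and logical names:

$ DEFINE APP_DATA DKA200:[BIGFILES], DKA300:[SMALLFILES]   ! search list logical name
$ DIRECTORY APP_DATA:*.DAT                                 ! lookups try each element in order
$ COPY NEWFILE.DAT APP_DATA:                               ! new files land in the first element by default

Since creation defaults to the first element, putting the large-file volume first (or pointing the application's output logical at it) keeps the big files off the small-file disk.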
A crucible of informative mistakes
Jon Pinkley
Honored Contributor

Re: Dealing with a badly fragmented disk

Re: "if the files are around 1GB, perhaps choose a cluster size of 65536 blocks)."

If the help is correct, that isn't possible.

V7.3-2

INITIALIZE

/CLUSTER_SIZE

/CLUSTER_SIZE=number-of-blocks

Defines, for disk volumes, the minimum allocation unit in blocks.
The maximum size you can specify for a volume is 16382 blocks, or
1/50th the volume size, whichever is smaller.

V8.3

INITIALIZE

/CLUSTER_SIZE

/CLUSTER_SIZE=number-of-blocks

Defines, for disk volumes, the minimum allocation unit in blocks.
The maximum size you can specify for a volume is 16380 blocks, or
1/50th the volume size, whichever is smaller.

----

I am not sure which value is correct, 16380 or 16382, or whether either of them is.

HM2$W_CLUSTER is a 16-bit field, and since 0 isn't a valid value, it could have been interpreted as 65536. However, according to "VMS File System Internals" (Kirby McCoy, 1990), HM2$W_CLUSTER must have a non-zero value for the home block to be considered valid, so that limits the upper bound to (2^16)-1 (65535). I am not sure why the value is limited to (2^14)-4 (16380, according to the V8.3 help).
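If you want to look at the raw field, one way (just a sketch, with a placeholder device name) is to dump the home block, which is VBN 2 of INDEXF.SYS; picking HM2$W_CLUSTER out of the hex still needs the F11 structure offsets, though:

$ DUMP /BLOCKS=(START:2,COUNT:1) DKA100:[000000]INDEXF.SYS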
it depends
Marty Kuhrt
Occasional Advisor

Re: Dealing with a badly fragmented disk

A few things to consider about fragmentation.

If your file has more than 70-ish fragments, it creates a new file header to contain them. IIRC it was 72 for the first header and 76 after that, but it's been a while since I've dealt with this. (I did Diskeeper support for a long time.)

Even if you consolidate the fragments for a specific header, you'll need to do a header consolidation to fix up all the fragments in the file. Once you get a specific header down to a couple of fragments, you can then start consolidating headers. If you have, say, ten headers with one fragment apiece, it will still look like a file with ten frags.

I haven't looked at DFU for a while. If it has a header consolidation function you may want to use that _after_ you do the defrag of the specific headers.
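One way to see how many retrieval pointers and extension headers a particular file has ended up with (a sketch; the file name is a placeholder) is to dump just its header:

$ DUMP /HEADER /BLOCKS=COUNT:0 DKA100:[BACKUPS]BIGFILE.BCK   ! the map area lists the extents; extension headers show as extra segments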

Can't keep it up? You need VMS!
Jon Pinkley
Honored Contributor

Re: Dealing with a badly fragmented disk

Just a follow-up on my reply from Jan 31, 2008 21:56:32 GMT.

I tested the largest cluster size I could use to initialize a 50 GB vdisk. On both 7.3-2 and 8.3 it was 16380, so the V8.3 help is correct.
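For anyone repeating the test, a rough sketch using the LD (logical disk) utility; LD is assumed to be set up on the system, and the container file name and unit number are placeholders:

$ LD CREATE SYS$SCRATCH:VDISK.DSK /SIZE=104857600   ! 50 GB worth of 512-byte blocks
$ LD CONNECT SYS$SCRATCH:VDISK.DSK LDA1:
$ INITIALIZE /CLUSTER_SIZE=16380 LDA1: TEST         ! 16380 is accepted; larger values were rejected
$ MOUNT LDA1: TEST
$ WRITE SYS$OUTPUT F$GETDVI("LDA1:","CLUSTER")      ! shows 16380
$ DISMOUNT LDA1:
$ LD DISCONNECT LDA1: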

Jon
it depends
Jon Pinkley
Honored Contributor

Re: Dealing with a badly fragmented disk

Here's the reason for the max cluster size being < 2^14:

The home block has a 16-bit field (HM2$W_IBMAPVBN) that contains the VBN of the first block of the index file bitmap in INDEXF.SYS. This VBN is always clusterfactor*4+1, so the maximum clusterfactor must be .le. 16383, since (16383*4)+1 = 65533 still fits in 16 bits.
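A quick check of that arithmetic in DCL (nothing here touches a disk; it is just the boundary case):

$ X = (16383 * 4) + 1
$ SHOW SYMBOL X        !   X = 65533, which still fits in a 16-bit word
$ X = (16384 * 4) + 1
$ SHOW SYMBOL X        !   X = 65537, which would not fit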
it depends