Operating System - OpenVMS
Karen Lee_3
Frequent Advisor

Disk Cluster Size too Large

OpenVMS 7.3-1

I just set up several 6-member RAID sets using 146 GB disks on an HSG80. I accepted the defaults when I initialized them, but didn't realize the cluster size was set to 1372 - which seems excessive.

I assume there is no way to change this without re-initializing the disk (an impossible task now), but is there a reason the default is so high on large RAID arrays?
Uwe Zessin
Honored Contributor
Solution

Re: Disk Cluster Size too Large

I'd say it uses the old allocation scheme for backward compatibility. Why the cluster size is 1372 is easy to explain:

By default, the BITMAP.SYS file is limited to 255 blocks. That gives you:
255 blocks * 512 bytes/block * 8 bits/byte = 1044480 bits to maintain the block/cluster allocation.

If a disk drive has more than 1044480 blocks, you need to 'cluster' multiple blocks to maintain the free/used bitmap.


A single '146GB' disk has 286749488 blocks. If you use a 6-member RAID-5 set, the unit size should be 5 * 286749488 blocks = 1433747440 blocks, because the equivalent of one disk is used for the parity data.

1433747440 blocks / 1044480 bits = 1372.69
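
For anyone who wants to double-check that arithmetic, here is a quick Python sketch (the block counts are the ones quoted above; the unit size the controller actually reports may be slightly smaller, which would explain the default coming out at 1372 rather than 1373):

bitmap_blocks = 255                    # default BITMAP.SYS limit, in blocks
bits_per_block = 512 * 8               # 512 bytes/block, 8 bits/byte
bitmap_bits = bitmap_blocks * bits_per_block
print(bitmap_bits)                     # 1044480 bits available in the bitmap

disk_blocks = 286749488                # one '146GB' drive
unit_blocks = 5 * disk_blocks          # 6-member RAID-5: one disk's worth holds parity
print(unit_blocks)                     # 1433747440
print(round(unit_blocks / bitmap_bits, 2))   # 1372.69 blocks per bitmap bit, hence the cluster size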


Recent versions of OpenVMS support a larger BITMAP.SYS, which allows you to select a smaller cluster size. If I remember correctly, you can now use a cluster size of 1 on disks of up to 173 gigabytes - whether that makes sense is certainly debatable ;-)
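
A rough Python check of that limit, assuming the 65535-block maximum bitmap size implied by the formula quoted later in this thread:

max_bitmap_blocks = 65535                    # extended BITMAP.SYS limit, in blocks
max_clusters = max_bitmap_blocks * 512 * 8   # one bit per cluster: 268431360
bytes_at_cluster_1 = max_clusters * 512      # cluster size 1 = one bit per block
print(bytes_at_cluster_1)                    # 137436856320, i.e. roughly 137 GB
# so the exact gigabyte figure above is from memory, but the order of magnitude fits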
.
Jan van den Ende
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

Yeah..
That algorithm was devised LONG before even the CONCEPT of gigabyte disks was thought of.
The default is half of the maximum, the maximum being the number of chunks that could be addressed in 32 bits (well, 31 bits plus a sign bit) way back when.
Only 7.2 enabled more, and thereby smaller, clusters.
But of course, being VMS, the EXISTING algorithm for calculating the default stayed the same.
So: historical reasons.

hth

Proost.

Have one on me (maybe in May in Nashua?)

jpe

Don't rust yours pelled jacker to fine doll missed aches.
Ian Miller.
Honored Contributor

Re: Disk Cluster Size too Large

On earlier versions of VMS there were limits on the storage bitmap size, which led to a minimum cluster size.
____________________
Purely Personal Opinion
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

So I guess there is no way to recover all this 'allocated' space then, right?
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

I don't think so - at least, I have never heard of a utility that allows in-place resizing of the cluster size.

It's a truly non-trivial task: the tool would have to move data around if the new size is not a multiple of the existing cluster size, and it would have to update the retrieval pointers in many, many file headers. You would also end up with a lot of scattered free space, because any unused blocks at the end of the last cluster of a file would become available.
.
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

Actually, I salvaged such a situation a while ago. It is possible, but it is a delicate operation.

In reviewing this thread, I do not see whether this is an ODS-2 or ODS-5 volume. It does make a difference.

- Bob Gezelter, http://www.rlgsc.com
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

I don't think it makes a difference, because extended BITMAP sizes are supported on both ODS levels.
.
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

What is this 'delicate operation' to restore allocated space?
Hein van den Heuvel
Honored Contributor

Re: Disk Cluster Size too Large

I suppose Bob may be referring to a (probably still-to-be-written) offline tool that would replace the BITMAP.SYS file, and the structures describing it, with a larger one holding more bits.

It can become up to 256 times larger than before.
You would have to pick a value which divides nicely into 1372; fortunately there are many of those, as 1372 = 2*2*7*7*7 (see the one-liner below for the full list).
So you could pick 7, but I would pick a larger multiple.
You could then mark all the 'old clusters' as allocated, mount the disk, and truncate all files, returning the end-of-file round-up blocks to the now-fragmented free block pool.
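
A throwaway Python line makes the candidate list explicit (assuming we want every cluster size that divides 1372 evenly, so the old cluster boundaries stay aligned):

print([d for d in range(1, 1373) if 1372 % d == 0])
# [1, 2, 4, 7, 14, 28, 49, 98, 196, 343, 686, 1372]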

Yikes!

See:
http://h71000.www7.hp.com/doc/82FINAL/9996/9996pro_130.html#blue_117
...
min cluster = (disk size in number of blocks)/(65535 * 4096)
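
Plugging the numbers from this thread into that formula (a sketch; the cluster size has to round up to a whole number):

import math

unit_blocks = 1433747440                    # the 6-member RAID-5 unit from above
min_cluster = unit_blocks / (65535 * 4096)  # extended bitmap: 65535 blocks * 4096 bits/block
print(min_cluster)                          # about 5.34
print(math.ceil(min_cluster))               # 6 -- and the smallest divisor of 1372
                                            # at or above 6 is 7, matching the pick above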


First and foremost, you need to ask yourself whether this large cluster size is actually a problem.
If the disk is supposed to hold tens of thousands of small files, then it could be a serious one.
But if it is supposed to hold just a few thousand large files, then maybe there is no real problem; it might just look ugly.

As a workaround, I suppose you could also consider using the LDdriver to carve the main disk into smaller virtual disks, copy the smaller files there, and leave the large ones on the original disk. Be sure to check for the 'right' copy tool, one that copies only up to EOF, and/or truncate after each file copied.
This step could also be useful in preparing for a re-init.

Can you 'borrow' six more 146 GB drives from a friendly HP sales rep, move the data, and give back the original (now stress-tested :-) drives?

Finally... what is your backup/restore policy for this large disk? Have you tried and timed a restore? Maybe you can convince management that you should run a controlled practice of a full restore... to a re-initialized disk, but you don't have to mention that detail.


Good luck,
Hein.