Operating System - OpenVMS
SOLVED
Karen Lee_3
Frequent Advisor

Disk Cluster Size too Large

OpenVMS 7.3-1

I just set up several 6-member RAID sets using 146 GB disks on an HSG80. I accepted the defaults when I initialized these, but didn't realize the cluster size was set to 1372 - which seems excessive.

I assume there is no way to change this without re-initializing the disks (an impossible task now), but is there a reason the default is so high on the large RAID arrays?
Uwe Zessin
Honored Contributor
Solution

Re: Disk Cluster Size too Large

I'd say it uses the old allocation scheme for backward compatibility. Why the cluster size is 1372 is easy to explain:

By default, the BITMAP.SYS file is limited to 255 blocks. That gives you:
255 blocks * 512 bytes/block * 8 bits/byte = 1044480 bits to maintain the block/cluster allocation.

If a disk drive has more than 1044480 blocks, you need to 'cluster' multiple blocks to maintain the free/used bitmap.


A single '146GB' disk has 286749488 blocks. If you use a 6-member RAID(-5)set, the unit size should be 5*286749488 blocks = 1433747440 blocks, because the equivalent of 1 disk is used for the parity data.

1433747440 blocks / 1044480 bits = 1372.69
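You can check what a given volume actually got once it is mounted. A quick check in DCL (the device name $1$DGA100: is just a placeholder for one of your units):

$ ! Report the volume cluster size and the total size in blocks
$ WRITE SYS$OUTPUT F$GETDVI("$1$DGA100:", "CLUSTER")    ! expect 1372 here
$ WRITE SYS$OUTPUT F$GETDVI("$1$DGA100:", "MAXBLOCK")   ! total blocks on the unit

SHOW DEVICE/FULL reports the cluster size as well.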


Recent versions of OpenVMS support a larger BITMAP.SYS, which allows you to select a smaller cluster size. If I remember correctly, you can now use a cluster size of 1 on disks of up to 173 GigaBytes - whether that makes sense is certainly debatable ;-)
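If you ever do re-initialize, you can override the default instead of accepting it. A sketch, with a hypothetical device name and volume label, and a cluster size picked only for illustration:

$ ! Choose the cluster size explicitly instead of taking the default;
$ ! /HEADERS preallocates file headers, useful on volumes with many small files
$ INITIALIZE /CLUSTER_SIZE=16 /HEADERS=100000 $1$DGA100: DATA01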
.
Jan van den Ende
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

yeah..
That algorithm was devised LONG before even the CONCEPT of Gigabyte disks was thought of.
The default is half of the maximum, which is the number of chunks that could be addressed in 32 bits (well, 31 + sign bit) way back when.
Only 7.2 enabled more, and thereby smaller, clusters.
But of course, being VMS, the EXISTING algorithm for calculating the default stayed the same.
So: historical reasons.

hth

Proost.

Have one on me (maybe in May in Nashua?)

jpe

Don't rust yours pelled jacker to fine doll missed aches.
Ian Miller.
Honored Contributor

Re: Disk Cluster Size too Large

On earlier versions of VMS there were limits on the storage bitmap size, which led to a minimum cluster size.
____________________
Purely Personal Opinion
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

So I guess there is no way to recover all this 'allocated' space then, right?
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

I don't think so - at least I have never heard of a utility that allows in-place resizing of the cluster size.

It's a truly non-trivial task, because it would have to move data around if the new size is not a multiple of the existing cluster size, and it must update the retrieval pointers in many, many file headers. You would also end up with a lot of scattered free space, because any unused blocks at the end of the last cluster of a file would become available.
.
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

Actually, I salvaged such a situation a while ago. It is possible, but it is a delicate operation.

In reviewing this thread, I do not see whether this is an ODS-2 or ODS-5 volume. It does make a difference.

- Bob Gezelter, http://www.rlgsc.com
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

I don't think it makes a difference, because extended BITMAP sizes are supported on both ODS levels.
.
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

What is this 'delicate operation' to restore allocated space?
Hein van den Heuvel
Honored Contributor

Re: Disk Cluster Size too Large

I suppose Bob may be referring to a (probably still to be written) offline tool to replace the BITMAP.SYS file, and the structures describing it, with a larger one with more bits.

It can become up to 256 times larger than before.
You would have to pick a value which divides nicely into 1372, but fortunately there are many of those, as 1372 = 2*2*7*7*7.
So you could pick 7, but I would pick a larger multiple.
You could then mark the whole 'old clusters' as allocated, mount the disk, and truncate all files, returning the end-of-file roundups to the now fragmented free block pool.

Yikes!

See:
http://h71000.www7.hp.com/doc/82FINAL/9996/9996pro_130.html#blue_117
...
min cluster = (disk size in number of blocks)/(65535 * 4096)
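Plugging in the unit size from above - a quick sketch in DCL (the divisor is 65535 bitmap blocks * 4096 bits per block; DCL does integer division, so we round up by hand):

$ size = 1433747440          ! unit size in blocks, from Uwe's calculation
$ divisor = 65535 * 4096     ! maximum number of bitmap bits
$ WRITE SYS$OUTPUT (size + divisor - 1) / divisor   ! prints 6

So anything from 6 on up (and, if you go the conversion route, one that also divides the old 1372 nicely) would be a candidate.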


First and foremost, you need to ask yourself whether this large cluster size is a problem.
If the disk is supposed to hold tens of thousands of small files, then it could be a serious problem.
But if it is supposed to hold just a few thousand large files, then maybe there is no real problem. It might just look ugly.

As a workaround, I suppose you could also consider using the LDdriver to carve the main disk into smaller virtual disks and copy the smaller files there, leaving the large ones on the original disk (see the sketch below). Be sure to pick the 'right' copy tool, one that copies only up to EOF, and/or truncate after each file copied.
This step could also be useful in preparing for a re-init.
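Roughly along these lines - the container file name, sizes, and device names are invented for illustration, and the exact LD syntax may differ between LDdriver versions:

$ ! Create a container file on the big unit and present it as a virtual disk
$ LD CREATE $1$DGA100:[CONTAINERS]SMALL01.DSK /SIZE=20000000
$ LD CONNECT $1$DGA100:[CONTAINERS]SMALL01.DSK LDA1:
$ ! Initialize the virtual disk with a sensibly small cluster size
$ INITIALIZE /CLUSTER_SIZE=8 LDA1: SMALL01
$ MOUNT /SYSTEM LDA1: SMALL01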

Can you 'borrow' 6 more 146 GB drives from a friendly HP sales rep, move the data, and give back the original (now stress-tested :-) drives?

Finally... what is your backup/restore policy for the large disk? Have you tried and timed a restore? Maybe you can convince management that you should run a controlled practice for a full restore... to a re-initialized disk, but you don't have to mention that detail.


Good luck,
Hein.

Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

In essence, Hein is correct. A while ago, a systems programmer at a site with which I was connected thought that the correct allocation minimum would be half a disk track (in those days, approximately ten sectors).

When disk space ran low, an analysis showed that our actual file allocations were resulting in a breakage factor of approximately 30%: 30% of our disk space was unusable because of the "last sector vs. last allocated sector" effect (a one-block file used only a single block but allocated an entire cluster).

A secondary problem was backup capabilities (or lack thereof). Pre-BACKUP, there was no good tool for restoring a disk with a different cluster factor.

The option of simply purchasing an additional disk drive was out of the question: at then-current prices, it was, in relative terms, the equivalent of a good portion of a man-year.

Researching the problem, I was able to determine that (for an ODS-2 disk) the necessary surgery was feasible. After ensuring that the data was backed up, I dismounted the disk and successfully made the changes. (I was not a cowboy about it; I did take the precaution of checking with one of my Engineering contacts - Hein can take a good guess as to whom, but I will not identify.) My contact noted that there was an even shorter way to accomplish what I was doing.

Was it the computer equivalent of neurosurgery? Probably. But neurosurgery is reasonably safe when done with the correct preparation, testing, and care.

Would I (and could I) do it again in the same or a similar situation? Yes. Do I recommend it as a general procedure? NO. The actual downtime to make this change is measured in seconds, and it does not require a reboot of the system or cluster. Properly prepared, the operation is safe.

- Bob Gezelter, http://www.rlgsc.com
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

You guys are way ahead of me - let me see if I understand this.

I dismount the RAID array and then set the cluster size without reinitializing it - correct?
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

It is not that simple.

I said that it can be accomplished in that time scale, but while I would do it in appropriate situations, it is a delicate operation, not just a question of issuing one or two easy commands.

- Bob Gezelter, http://www.rlgsc.com
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

I hit "Submit" too quickly.

With the proper preparation, the actual switch of the cluster factor is a fairly fast operation (with today's technologies, the switch would be very short).

However, I want to be clear (if for nothing else, for the future readers of this discussion): while the operation can be done quickly, it is not merely a question of issuing a DISMOUNT, one or two standard commands, and then re-MOUNTing the volumes.

- Bob Gezelter, http://www.rlgsc.com
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

OK, looks like I'll have to copy all the data over to other disks, re-init these with a smaller cluster, then copy the stuff back. It will take forever, but it's really using a lot of space like this.

Next question: when I restore the data back to these re-initialized disks with the new cluster size, is there going to be a problem because of the difference in cluster size?

And what exactly is the best cluster size to re-init them to?
Karl Rohwedder
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

be sure to make the backup with BACKUP/IMAGE... and the restore with BACKUP/IMAGE/NOINIT..., else the restore would reinit the disk with its original settings (see the example below).
The cluster size should be selected with respect to the size of the files being stored on the disk. If you have a lot of small files, a big cluster size would waste a lot of space, while big files allow for bigger cluster sizes.
If the 'typical' file fits in one cluster, it will always be contiguous.
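For example, something like this - the saveset name, tape drive, cluster size, and volume label are all placeholders:

$ ! Save the old volume, then re-init with a smaller cluster size and restore.
$ ! /NOINITIALIZE keeps the new INIT settings instead of those in the saveset.
$ BACKUP /IMAGE $1$DGA100: MKA500:ARRAY1.BCK /SAVE_SET /REWIND
$ INITIALIZE /CLUSTER_SIZE=8 $1$DGA100: DATA01
$ MOUNT /FOREIGN $1$DGA100:
$ BACKUP /IMAGE /NOINITIALIZE MKA500:ARRAY1.BCK /SAVE_SET $1$DGA100: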

regards Kalle
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

These disks have thousands of small files. I know someone said any number 2*7*7*7 - but exactly what does that mean? "27", "277"?
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

(smile) You have seen too many wildcards! (Smile) 2*2*7*7*7 is also expressible as:

(2**2)*(7**3)

In other words, the "*" are arithmetic operators, not wildcards. So the valid cluster factors, if you were restructuring the volume, are the divisors of 1372: 1, 2, 4, 7, 14, 28, 49, 98, 196, 343, 686, and 1372.
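If you would rather let the machine list them, a tiny command procedure does it (illustration only; the label and GOTO need a .COM file rather than the interactive prompt):

$ ! List every divisor of 1372, i.e. every cluster factor that
$ ! divides evenly into the existing one
$ i = 1
$ loop:
$ IF 1372 .EQ. (1372 / i) * i THEN WRITE SYS$OUTPUT i
$ i = i + 1
$ IF i .LE. 1372 THEN GOTO loop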

If you are re-initializing the volumes, this is not a concern, any cluster factor is acceptable (although depending precisely on what disks and controllers you have, some may be better than others).

If the copying is a roadblock, as I said, the conversion can be done, albeit with care. If you wish, I will be happy to speak offline with you about it.

I hope that the preceding is helpful.

- Bob Gezelter, http://www.rlgsc.com
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

> 1372 = 2*2*7*7*7

What's this called in English?
- prime factorization?
- prime decomposition?
.
Robert Gezelter
Honored Contributor

Re: Disk Cluster Size too Large

Uwe,

In English, Prime Factorization.

Not that we notice it often, but there is an ambiguity in notation (the WildCard Pattern Matching syntax vs. Mathematics).

While I have not often fallen into that trap, overloaded syntax is always a potential source of misunderstandings.

- Bob Gezelter, http://www.rlgsc.com
Andy Bustamante
Honored Contributor

Re: Disk Cluster Size too Large


You can use the BACKUP /TRUNCATE qualifier to truncate files at EOF when restoring to your newly initialized disk. The default behavior of BACKUP is to restore each file with its previous allocation.
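Combined with the image restore shown earlier, the restore line becomes something like (device and saveset names still placeholders):

$ ! /TRUNCATE cuts each restored file's allocation back at EOF
$ BACKUP /IMAGE /NOINITIALIZE /TRUNCATE MKA500:ARRAY1.BCK /SAVE_SET $1$DGA100: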

Andy
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

the BACKUP/TRUNCATE switch doesn't seem to have any effect - the files are still maintaining the 1372 allocation.
Uwe Zessin
Honored Contributor

Re: Disk Cluster Size too Large

Did you use BACKUP/NOINITIALIZE/TRUNCATE to restore to a disk that was INITIALIZEd with a smaller cluster factor?
.
Jan van den Ende
Honored Contributor

Re: Disk Cluster Size too Large

Karen,

please check your "new" cluster size.
If it is back to 1372 (which I suspect), then you probably did NOT add /NOINIT to your BACKUP, and that undid YOUR init by re-initializing the disk with the settings out of the saveset.

Also, please re-read Hein's last question.
Do you have a COMPELLING reason for so large a RAIDset? And by compelling I mean: a single file too big (or expected to grow as much) to fit onto a single drive? In that case, a big cluster size actually would be an advantage.
But for your many small files, there is NO reason to put them on a big device (or RAIDset), and many reasons to use smaller devices (not the least of those: manageable BACKUP savesets and restore procedures).
Should you have an application that mixes both, then it is REALLY worth the effort of implementing clever choices of Logical Names, maybe by using search lists - see the sketch below.
Should you need help on that, then please supply some details.
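For instance, a search-list logical name (all names made up here) lets the application look on a small LD volume first and fall back to the big unit, without code changes:

$ ! Files are looked up in each element of the search list, in order
$ DEFINE /SYSTEM APP_DATA LDA1:[DATA], $1$DGA100:[DATA]

An OPEN of APP_DATA:SOMEFILE.DAT then finds the file on whichever volume holds it.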

hth

Proost.

Have one on me (maybe in May in Nashua?)

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Karen Lee_3
Frequent Advisor

Re: Disk Cluster Size too Large

I redid the INIT and then the BACKUP/TRUNCATE command on a smaller RAID array and it worked fine - must have been late....

The only reason we have for the very large RAID sets is the footprint we have for the cabinet, combined with the need for a LOT of small files.