Operating System - OpenVMS forum: Disk Cluster Size too Large
04-21-2006 07:07 AM
I just set up several 6-member RAID sets using 146 GB disks on an HSG80. I accepted the defaults when I initialized them, but didn't realize the cluster size was set to 1372, which seems excessive.
I assume there is no way to change this without re-initializing the disk (an impossible task now), but is there a reason the default is so high on large RAID arrays?
04-21-2006 07:36 AM
Solution: By default, the BITMAP.SYS file is limited to 255 blocks. That gives you:
255 blocks * 512 bytes/block * 8 bits/byte = 1044480 bits to maintain the block/cluster allocation.
If a disk drive has more than 1044480 blocks, you need to 'cluster' multiple blocks to maintain the free/used bitmap.
A single '146GB' disk has 286749488 blocks. If you use a 6-member RAID(-5)set, the unit size should be 5*286749488 blocks = 1433747440 blocks, because the equivalent of 1 disk is used for the parity data.
1433747440 blocks / 1044480 bits = 1372.69
Recent versions of OpenVMS support a larger BITMAP.SYS, which allows you to select a smaller cluster size. If I remember correctly, you can now use a cluster size of 1 on disks of up to 173 gigabytes - whether that makes sense is certainly debatable ;-)
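The arithmetic above can be sketched in a few lines (a minimal illustration using the numbers quoted in this thread, not actual VMS code):

```python
# Why the default cluster size for this 6-member RAID-5 set lands near 1372.

BITMAP_BLOCKS = 255              # default BITMAP.SYS size limit, in 512-byte blocks
BITS = BITMAP_BLOCKS * 512 * 8   # allocation bits available: 1044480

disk_blocks = 286_749_488        # blocks on one '146 GB' member
unit_blocks = 5 * disk_blocks    # 6-member RAID-5: one member's worth holds parity

print(unit_blocks)               # 1433747440
print(unit_blocks / BITS)        # about 1372.69, hence a default near 1372
```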
04-21-2006 07:38 AM
Re: Disk Cluster Size too Large
Yeah...
That algorithm was devised LONG before even the CONCEPT of gigabyte disks was thought of.
The default is half of the maximum, which was the number of chunks that could be addressed in 32 bits (well, 31 plus a sign bit) way back when.
Only 7.2 enabled more, and thereby smaller, clusters.
But of course, being VMS, the EXISTING algorithm for calculating the default stayed the same.
So: historical reasons.
hth
Proost.
Have one on me (maybe in May in Nashua?)
jpe
04-21-2006 07:38 AM
Re: Disk Cluster Size too Large
Purely Personal Opinion
04-21-2006 07:47 AM
Re: Disk Cluster Size too Large
04-21-2006 08:24 AM
Re: Disk Cluster Size too Large
It's a truly non-trivial task, because it would have to move data around if the new size is not a multiple of the existing cluster size and it must update the retrieval pointers in many many file headers. You would also end up with a lot of scattered free space, because any unused blocks at the end of the last cluster of a file would have become available.
04-21-2006 09:31 AM
Re: Disk Cluster Size too Large
Actually, I salvaged such a situation a while ago. It is possible, but it is a delicate operation.
In reviewing this thread, I do not see whether this is an ODS-2 or ODS-5 volume. It does make a difference.
- Bob Gezelter, http://www.rlgsc.com
04-21-2006 06:06 PM
Re: Disk Cluster Size too Large
04-22-2006 09:50 AM
Re: Disk Cluster Size too Large
04-22-2006 10:39 AM
Re: Disk Cluster Size too Large
It can become up to 256 times larger than before.
You would have to pick a value which divides nicely into 1372; fortunately there are many of those, as 1372 = 2*2*7*7*7.
So you could pick 7, but I would pick a larger multiple.
You could then mark the whole 'old clusters' as allocated, mount the disk, and truncate all files, returning the end-of-file roundups to the now-fragmented free block pool.
Yikes!
See:
http://h71000.www7.hp.com/doc/82FINAL/9996/9996pro_130.html#blue_117
...
min cluster = (disk size in number of blocks)/(65535 * 4096)
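That formula can be sketched as a small helper (the function name is mine for illustration; the constants follow the manual page linked above):

```python
# Minimum possible cluster size per the documented formula:
#   min cluster = (disk size in blocks) / (65535 * 4096)
# where 65535 blocks is the enlarged BITMAP.SYS limit and
# 4096 = 512 bytes/block * 8 bits/byte.

def min_cluster_size(disk_blocks: int) -> int:
    bits = 65535 * 4096                     # 268431360 allocation bits
    return max(1, -(-disk_blocks // bits))  # ceiling division, at least 1

# The 1433747440-block RAID-5 unit from this thread needs at least 6:
print(min_cluster_size(1_433_747_440))
```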
First and foremost you need to ask yourself whether this large clustersize is a problem.
If the disk is supposed to hold tens of thousands of small files, then it could be a serious problem.
But if it is supposed to hold just a few thousand large files, then maybe there is no real problem. It might just look ugly.
As a workaround I suppose you could also consider using the LDdriver to carve the main disk into smaller virtual disks and copy the smaller files there, leaving the large ones on the original disk. Be sure to check for the 'right' copy tool to copy only up to EOF and/or use truncate after each file copied.
This step could also be useful in preparing for a re-init.
Can you 'borrow' 6 more 146 GB drives from a friendly HP sales rep, move the data, and give back the original (now stress-tested :-) drives?
Finally... what is your backup / restore policy for the large disk? Have you tried and timed a restore? Maybe you can convince management that you should run a controlled practice for a full restore... to a re-initted disk but you don't have to mention that detail.
Good luck,
Hein.
04-23-2006 12:27 AM
Re: Disk Cluster Size too Large
In essence, Hein is correct. A while ago, a systems programmer at a site with which I was connected thought that the correct allocation minimum would be half a disk track (in those days, approximately ten sectors).
When disk space ran low, an analysis showed that our actual file allocations were resulting in a breakage factor of approximately 30%: 30% of our disk space was unusable because of the "last sector vs. last allocated sector" effect (a one-block file used only a single block but allocated an entire cluster).
A secondary problem was backup capabilities (or the lack thereof). Pre-BACKUP, there was no good tool for restoring a disk with a different cluster factor.
The option of simply purchasing an additional disk drive was out of the question. Prices at the time meant that that option was, in relative terms, the equivalent of a good portion of a man-year.
Researching the problem, I was able to determine that (for an ODS-2 disk) the necessary surgery was feasible. After ensuring that the data was backed up, I dismounted the disk and successfully made the changes. (I was not a cowboy about it; I did take the precaution of checking with one of my Engineering contacts - Hein can take a good guess as to whom, but I will not identify him.) My contact noted that there was an even shorter way to accomplish what I was doing.
Was it the computer equivalent of neurosurgery? Probably. But neurosurgery is reasonably safe when done with the correct preparation, testing, and care.
Would I (and could I) do it again in the same or a similar situation? Yes. Do I recommend it as a general procedure? NO. The actual downtime to do this change is measured in seconds, and does not require a reboot of the system or cluster. Properly prepared, the operation is safe.
- Bob Gezelter, http://www.rlgsc.com
04-23-2006 10:32 AM
Re: Disk Cluster Size too Large
I unmount the RAID array and then set the cluster size without reinitializing it - correct?
04-23-2006 12:40 PM
Re: Disk Cluster Size too Large
It is not that simple.
I said that it can be accomplished in that time scale, but while I would do it in appropriate situations, it is a delicate operation, not just a question of issuing one or two easy commands.
- Bob Gezelter, http://www.rlgsc.com
04-23-2006 12:45 PM
Re: Disk Cluster Size too Large
I hit "Submit" too quickly.
With the proper preparation, the actual switch of the cluster factor is a fairly fast operation (with today's technologies, the switch itself would be very short).
However, I want to be clear (if for nothing else, then for the future readers of this discussion): while the operation can be done quickly, it is not merely a question of issuing a DISMOUNT, one or two standard commands, and then re-MOUNTing the volumes.
- Bob Gezelter, http://www.rlgsc.com
04-24-2006 01:53 AM
Re: Disk Cluster Size too Large
Next question: when I back up the data to these re-init'd disks with the new cluster size, is there going to be a problem because of the difference in cluster size?
And what exactly is the best cluster size I should re-init them to?
04-24-2006 01:58 AM
Re: Disk Cluster Size too Large
Be sure to make the backup with BACKUP/IMAGE... and the restore with BACKUP/IMAGE/NOINIT..., or else the restore will re-init the disk with its original settings.
The cluster size should be selected with respect to the size of the files being stored on the disk. If you have a lot of small files, a big cluster size wastes a lot of space, while big files allow for bigger cluster sizes.
If the 'typical' file fits in one cluster, it will always be contiguous.
regards Kalle
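The space lost to cluster rounding ("breakage") can be illustrated with a little arithmetic - a sketch using this thread's numbers, not VMS code:

```python
# Each file's allocation is rounded up to a whole number of clusters,
# so a small file consumes most of a cluster regardless of its size.

def allocated_blocks(file_blocks: int, cluster: int) -> int:
    return -(-file_blocks // cluster) * cluster  # round up to a cluster multiple

# A 1-block file on the original volume (cluster size 1372):
print(allocated_blocks(1, 1372))   # 1372 blocks consumed for 1 block of data
# The same file with a cluster size of 7:
print(allocated_blocks(1, 7))      # 7 blocks
```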
04-24-2006 02:03 AM
Re: Disk Cluster Size too Large
04-24-2006 02:17 AM
Re: Disk Cluster Size too Large
(smile) You have seen too many wildcards! (Smile) 2*2*7*7*7 is also expressible as:
(2**2)*(7**3)
In other words, the "*" are arithmetic operators, not wildcards. So, for example, cluster factors of 2, 4, 7, 14,... would all be valid if you were restructuring the volume.
If you are re-initializing the volumes, this is not a concern, any cluster factor is acceptable (although depending precisely on what disks and controllers you have, some may be better than others).
If the copying is a roadblock, as I said, the conversion can be done, albeit with care. If you wish, I will be happy to speak offline with you about it.
I hope that the preceding is helpful.
- Bob Gezelter, http://www.rlgsc.com
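The point about 1372 = 2*2*7*7*7 can be made concrete by listing every cluster factor that divides evenly into the old one (a quick sketch, not part of any restructuring procedure):

```python
# All divisors of 1372 = 2*2*7*7*7; any of these would keep new cluster
# boundaries aligned with the old 1372-block clusters.

def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(1372))   # [1, 2, 4, 7, 14, 28, 49, 98, 196, 343, 686, 1372]
```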
04-24-2006 02:42 AM
Re: Disk Cluster Size too Large
What's this called in English?
- prime factorization ?
- prime decomposition ?
04-24-2006 03:00 AM
Re: Disk Cluster Size too Large
In English, Prime Factorization.
Not that we notice it often, but there is an ambiguity in notation (wildcard pattern-matching syntax vs. mathematics).
While I have not often fallen into that trap, overloaded syntax is always a potential source of misunderstandings.
- Bob Gezelter, http://www.rlgsc.com
04-24-2006 04:24 AM
Re: Disk Cluster Size too Large
You can use the BACKUP/TRUNCATE qualifier to truncate files at [EOF] when restoring to your newly initialized disk. By default, BACKUP restores each file with its previous allocation.
Andy
04-24-2006 12:51 PM
Re: Disk Cluster Size too Large
04-24-2006 03:37 PM
Re: Disk Cluster Size too Large
04-24-2006 10:08 PM
Re: Disk Cluster Size too Large
Please check your "new" cluster size.
If it is back to 1372 (which I suspect), then you probably did NOT add /NOINIT to your BACKUP, and that undid YOUR init by re-initializing the disk with the settings out of the saveset.
Also, please re-read Hein's last question.
Do you have a COMPELLING reason for so large a RAIDset? And by compelling I mean: a single file too big (or expected to grow that much) to fit onto a single drive? In that case, a big cluster size actually would be an advantage.
But for your many small files, there is NO reason to put them on a big device (or RAIDset), and many reasons to use smaller devices (not the least of those: manageable BACKUP savesets and restore procedures).
Should you have an application that mixes both, then it is REALLY worth the effort of implementing clever choices of logical names, maybe by using search lists.
Should you need help with that, then please supply some details.
hth
Proost.
Have one on me (maybe in May in Nashua?)
jpe
04-24-2006 11:38 PM
Re: Disk Cluster Size too Large
The only reason we have for the very large RAID sets is the footprint we have for the cabinet, combined with the need for a LOT of small files.