Disk Cluster Size too Large
04-21-2006 07:07 AM
I just set up several 6-member RAID sets using 146 GB disks on an HSG80. I accepted the defaults when I initialized these, but didn't realize the cluster size was set to 1372, which seems excessive.
I assume there is no way to change this without re-initializing the disk (an impossible task now), but is there a reason the default is so high on large RAID arrays?
Solved! Go to Solution.
04-21-2006 07:36 AM
Solution
By default, the BITMAP.SYS file is limited to 255 blocks. That gives you:
255 blocks * 512 bytes/block * 8 bits/byte = 1044480 bits to maintain the block/cluster allocation.
If a disk drive has more than 1044480 blocks, you need to 'cluster' multiple blocks to maintain the free/used bitmap.
A single '146 GB' disk has 286749488 blocks. If you use a 6-member RAID(-5) set, the unit size will be 5 * 286749488 blocks = 1433747440 blocks, because the equivalent of 1 disk is used for the parity data.
1433747440 blocks / 1044480 bits = 1372.69
Recent versions of OpenVMS support a larger BITMAP.SYS, which allows you to select a smaller cluster size. If I remember correctly, you can now use a cluster size of 1 on disks of up to 173 GB - whether that makes sense is certainly debatable ;-)
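The arithmetic above can be sketched as follows (an illustration only; the exact rounding rule OpenVMS applies to arrive at the 1372 default is not spelled out in this thread, since plain division gives 1372.69):

```python
# Reproduce the bitmap arithmetic from the post above.
BLOCK_BYTES = 512                         # one disk block
BITMAP_BLOCKS = 255                       # traditional BITMAP.SYS size limit
bits = BITMAP_BLOCKS * BLOCK_BYTES * 8    # bits available for the allocation bitmap

disk_blocks = 286_749_488                 # one '146 GB' drive
members = 6                               # 6-member RAID-5: one disk's worth of parity
unit_blocks = (members - 1) * disk_blocks # usable size of the RAID-5 unit

print(bits)                 # 1044480
print(unit_blocks)          # 1433747440
print(unit_blocks / bits)   # ~1372.69 -> the 1372 default seen on the HSG80 unit
```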
04-21-2006 07:38 AM
Re: Disk Cluster Size too Large
Yeah...
That algorithm was devised LONG before even the CONCEPT of gigabyte disks was thought of.
The default is half of the maximum, which is the number of chunks that could be addressed in 32 bits (well, 31 + sign bit) way back when.
Only 7.2 enabled more, and thereby smaller, clusters.
But of course, being VMS, the EXISTING algorithm for calculating the default stayed the same.
So: historical reasons.
hth
Proost.
Have one on me (maybe in May in Nashua?)
jpe
04-21-2006 07:38 AM
Re: Disk Cluster Size too Large
Purely Personal Opinion
04-21-2006 07:47 AM
Re: Disk Cluster Size too Large
04-21-2006 08:24 AM
Re: Disk Cluster Size too Large
It's a truly non-trivial task, because it would have to move data around if the new size is not a multiple of the existing cluster size, and it must update the retrieval pointers in many, many file headers. You would also end up with a lot of scattered free space, because any unused blocks at the end of the last cluster of each file would become available.
04-21-2006 09:31 AM
Re: Disk Cluster Size too Large
Actually, I salvaged such a situation a while ago. It is possible, but it is a delicate operation.
In reviewing this thread, I do not see whether this is an ODS-2 or ODS-5 volume. It does make a difference.
- Bob Gezelter, http://www.rlgsc.com
04-21-2006 06:06 PM
Re: Disk Cluster Size too Large
04-22-2006 09:50 AM
Re: Disk Cluster Size too Large
04-22-2006 10:39 AM
Re: Disk Cluster Size too Large
It can become up to 256 times larger than before.
You would have to pick a value which divides nicely into 1372; fortunately there are many of those, as 1372 = 2*2*7*7*7.
So you could pick 7, but I would pick a larger multiple.
You could then mark the whole 'old clusters' as allocated, mount the disk, and truncate all files, returning the end-of-file roundups to the now-fragmented free block pool.
Yikes!
See:
http://h71000.www7.hp.com/doc/82FINAL/9996/9996pro_130.html#blue_117
...
min cluster = (disk size in number of blocks)/(65535 * 4096)
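Plugging the numbers from this thread into that formula gives the floor on the new cluster size (a sketch; rounding up to the next whole cluster is my reading of the documented formula):

```python
import math

# Minimum cluster size per the formula above:
#   min cluster = (disk size in blocks) / (65535 * 4096)
# where 65535 blocks is the enlarged BITMAP.SYS limit and
# 4096 = 512 bytes/block * 8 bits/byte.
unit_blocks = 1_433_747_440            # the 6-member RAID-5 unit in this thread

min_cluster = math.ceil(unit_blocks / (65535 * 4096))
print(min_cluster)   # 6 -> so 7, the smallest factor of 1372 above it, would work
```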
First and foremost, you need to ask yourself whether this large cluster size is a problem.
If the disk is supposed to hold tens of thousands of small files, then it could be a serious problem.
But if it is supposed to hold just a few thousand large files, then maybe there is no real problem. It might just look ugly.
As a workaround, I suppose you could also consider using the LD driver to carve the main disk into smaller virtual disks and copy the smaller files there, leaving the large ones on the original disk. Be sure to pick the 'right' copy tool, one that copies only up to EOF, and/or truncate each file after copying.
This step could also be useful in preparing for a re-init.
Can you 'borrow' 6 more 146 GB drives from a friendly HP sales rep, move the data, and give back the original (now stress-tested :-) drives?
Finally... what is your backup/restore policy for the large disk? Have you tried and timed a restore? Maybe you can convince management that you should run a controlled practice for a full restore... to a re-initted disk, but you don't have to mention that detail.
Good luck,
Hein.