Maximum clustersize for disks
02-13-2006 12:42 AM
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=977095
Conclusion was that you could use up to 1/100 of the total disk size. I tried INIT/CLUS=16384 and INIT did not complain. Neither did the subsequent MOUNT.
But... As a result of a crash of one of the cluster nodes, the disk needed a rebuild. The rebuild code does not like such big cluster sizes, and this resulted in a tight loop in kernel mode, which took away one of the four CPUs.
STOP/EXIT=KERNEL/ID=... stopped this process.
This all is/was on V7.3-2. In V8.2 INIT is a bit more picky:
$ INIT/CLUS=16384 gives %INIT-F-BADCLUSTER
$ INIT/CLUS=16382 gives %INIT-F-CLUSTER, unsuitable cluster factor
$ INIT/CLUS=16380 gives no problem
The second case is remarkable! I would not be surprised if we saw Guy Peleg's hand/mind in that error message!
OK, only the HELP text needs correction now, both HELP INIT /CLUSTER and HELP/MESS/FAC=INIT CLUSTER.
Regards,
Bart Zorn
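The three V8.2 results above suggest a simple two-tier range check. Here is a sketch of that apparent behaviour; the real check lives inside INIT, and the 16380 cutoff is only inferred from Bart's three test values:

```python
# Sketch of the V8.2 INIT/CLUSTER boundary behaviour inferred from the
# three test values above; the real logic inside INIT may differ.
def init_cluster_check(clustersize):
    if clustersize > 16382:       # beyond the documented absolute maximum
        return "%INIT-F-BADCLUSTER"
    if clustersize > 16380:       # 16381..16382: the surprising middle case
        return "%INIT-F-CLUSTER, unsuitable cluster factor"
    return "OK"

for size in (16384, 16382, 16380):
    print(size, init_cluster_check(size))
```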
02-13-2006 01:31 AM
Knowing this now, I would NOT pick 16380 though.
I might pick 16,000 (128*125), or 10,000 (625*16), or 10240 (10*64*16),
or 16368 (1023*16)
For starters, due to a performance aberration in the EVA RAID5 code you want at least a multiple of 4 (which 16380 is).
Next, the XFC deals with 16-block 'cache lines', and RMS sequential file access defaults tend to be set at 16 (old) or 32 (new).
Now you might observe that the XFC and RMS deal with virtual blocks within the file, and so are independent of cluster size?
Well, there is of course that one block at the end of a cluster which might cause a split IO. No big deal, considering it will be 1 in more than 600, right?
But for indexed files, when RMS does an extend (every area is a new extent), RMS will align on a cluster boundary. So the only way to stay (or get) aligned on XFC cache lines there is to have the cluster size be a multiple of 16.
Of course, with all of this we are not talking 100% more performance, but 'possibly noticeable'. Like 5% tops.
Why 10,000?
Because it looks nice and is still divisible by 16 :-).
fwiw,
Hein.
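Hein's criteria reduce to "largest value under the ceiling that is a multiple of 16 (and hence of 4)". A tiny illustrative helper, not any VMS API; the 16382 ceiling and the candidate values are from this thread:

```python
# Pick the largest cluster size <= limit that is a multiple of `multiple`,
# per Hein's advice (multiple of 4 for EVA RAID5, multiple of 16 for
# XFC cache lines and RMS defaults). Illustrative helper only.
def best_cluster_size(limit, multiple=16):
    return (limit // multiple) * multiple

print(best_cluster_size(16382))   # 16368 = 1023 * 16
print(best_cluster_size(10000))   # 10000 is already a multiple of 16
```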
02-13-2006 02:15 AM
Thank you for your explanation!
In my case I just wanted to create 4 identical files, as big as possible on the given disk. Just sequential files, nothing fancy.
So I started out with a big cluster size, just because I didn't know why I should make it smaller! That's what triggered the bug in the rebuild code.
Regards,
Bart
02-13-2006 07:27 AM
This sounds like a serious bug in the rebuild code. Could you please ensure it's formally reported to OpenVMS engineering?
02-13-2006 09:01 AM
Second look...
I've checked the V7.2-*, V7.3-* and V8.2 HELP texts. They all say the absolute maximum cluster size is 16382. So the bug where pre-V8.2 systems would accept your invalid size has been fixed; it now gives BADCLUSTER. That's good.
But, I'm very curious about the INIT-F-CLUSTER error (probably NOT Guy's work though). What is the EXACT size of the disk you were initializing? I realise it's unlikely, but could it be you're exactly on the cusp? Please post the output of SHOW DEV/FULL so we can see the geometry.
As Hein has pointed out, 16382 would not be a good choice for a modern disk, since it's not a multiple of 16 or 4. 16380 is better (a multiple of 4) and 16368 even better (a multiple of 16). Realistically, once the clusters are that big, a variation of 16 blocks either way can't be a huge deal, so although there's probably an OBOE (off-by-one error) in the range check, I can't imagine it would be a high-priority fix.
02-13-2006 06:12 PM
If you want to create four identical files with maximum file size, use the MC SYSGEN CREATE command. But be aware: to create four contiguous files you have to use a trick. You have to ask the TSC for this trick. (I can't remember it; it's a long time since I got the answer.)
Best regards R. Wingert
02-13-2006 06:37 PM
Yes, it has been reported to engineering via a service call. The engineer in Belgium (Alain Banken) was able to reproduce the problem.
The disk is a volume on an HDS box (not an XP!) of 75502080 blocks. I have it initialized with a cluster size of 1024 now.
To Rudolf:
Indeed, SYSGEN cannot do it directly. I used a combination of the LD and DFU utilities. I first created the files, then the directories, and renamed the files into them.
Thank you for your reactions.
Bart
02-14-2006 09:03 AM
Although your disk size, 75502080, is divisible by 16, it's not divisible by any of your proposed cluster sizes. There's a fairly large chunk left at the end. I'm not sure if INITIALIZE cares about it, or what it does with the leftover fragment.
If you have any control over the size of the virtual unit, I'd recommend ensuring it's an exact multiple of your proposed cluster size (itself a multiple of 16). Note that your current size isn't a multiple of 1024.
Yes, everything *should* work regardless, but if you can make it easier for the algorithms by avoiding any questions of rounding or leftover fragments, why not do it?
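The leftover-fragment arithmetic is easy to check. The disk size is from this thread, and the candidate cluster sizes are the ones discussed above:

```python
# Blocks left at the end of the 75502080-block volume for the
# cluster sizes discussed in this thread.
DISK_BLOCKS = 75502080

for clustersize in (16380, 16368, 1024, 16):
    leftover = DISK_BLOCKS % clustersize
    print(clustersize, leftover)
```

In particular the remainder for a 1024-block cluster is 512 blocks, which is why the size isn't a multiple of 1024, while dividing by 16 leaves nothing over.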
02-14-2006 07:05 PM
Thanks, I will certainly do that the next time it comes up. I don't really have control over the exact size of the volumes on the HDS box, as they are the result of one of the many configuration models they have.
This time I have 4 neatly contiguous files and only 8192 blocks (plus the last half cluster) left unused.
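Those figures check out: with a 1024-block cluster, the 75502080-block volume leaves 512 blocks (exactly half a cluster) beyond the last full cluster, and the 8192 unused blocks are 8 whole clusters. A sketch of the arithmetic only; real allocation also involves the index file and directories:

```python
# Verify the "half cluster" and "8192 blocks unused" figures from the post.
DISK_BLOCKS = 75502080
CLUSTERSIZE = 1024

half_cluster = DISK_BLOCKS % CLUSTERSIZE    # blocks beyond the last full cluster
unused_clusters = 8192 // CLUSTERSIZE       # the 8192 unused blocks, in clusters
print(half_cluster, unused_clusters)        # 512 blocks, 8 clusters
```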
Regards,
Bart
02-16-2006 09:13 AM
Coincidentally, a fix has just been entered into the V8.2 remedial stream to fix some inconsistencies in where and how the cluster size was checked by INIT.
Apparently there were two places where the value was checked, using different constants. They're now consistent, using a value of 16380.