Operating System - OpenVMS

Maximum clustersize for disks

 
SOLVED
Bart Zorn_1
Trusted Contributor

Maximum clustersize for disks

A while ago, I asked a question about the maximum clustersize you can use on a disk:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=977095

The conclusion was that you could use up to 1/100 of the total disk size. I tried INIT/CLUS=16384 and INIT did not complain. Neither did the subsequent MOUNT.
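That rule of thumb can be sketched in a few lines (a hedged illustration, not anything from INIT itself; the 75502080-block size is the volume mentioned later in this thread):

```python
# Rule of thumb from the earlier thread: a cluster size of up to
# roughly 1/100 of the total disk size. For a volume this large, the
# bound is far above the documented 16382-block maximum, so the rule
# alone does not protect you.
def rule_of_thumb_max(disk_blocks):
    return disk_blocks // 100

print(rule_of_thumb_max(75502080))  # 755020
```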

But... As a result of a crash of one of the cluster nodes, the disk needed a rebuild. The rebuild code does not like large cluster sizes; this resulted in a tight loop in kernel mode, which took away one of the four CPUs.

STOP/EXIT=KERNEL/ID=... stopped this process.

All of this was on V7.3-2. In V8.2, INIT is a bit more picky:

$ INIT/CLUS=16384 gives %INIT-F-BADCLUSTER

$ INIT/CLUS=16382 gives %INIT-F-CLUSTER, unsuitable cluster factor

$ INIT/CLUS=16380 gives no problem

The second case is remarkable! I would not be surprised if we saw Guy Peleg's hand/mind in that error message!

OK, only the HELP text needs correction now, both HELP INIT /CLUSTER and HELP/MESS/FAC=INIT CLUSTER.
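The three results above are consistent with two separate range checks using different constants. A minimal Python sketch (an assumption about the internal logic, not actual INIT code):

```python
# Hedged sketch: two independent limit checks with different constants
# would reproduce the V8.2 behaviour observed above.
ABSOLUTE_MAX = 16382   # documented maximum from HELP INIT /CLUSTER
SECOND_LIMIT = 16380   # stricter limit apparently applied by a second check

def init_check(cluster):
    if cluster > ABSOLUTE_MAX:
        return "%INIT-F-BADCLUSTER"
    if cluster > SECOND_LIMIT:
        return "%INIT-F-CLUSTER, unsuitable cluster factor"
    return "OK"

for c in (16384, 16382, 16380):
    print(c, init_check(c))
```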

Regards,

Bart Zorn
11 REPLIES
Hein van den Heuvel
Honored Contributor
Solution

Re: Maximum clustersize for disks

Bart, Thanks for posting that!

Knowing this now, I would NOT pick 16380 though.

I might pick 16,000 (128*125), or 10,000 (625*16), or 10240 (10*64*16), or 16368 (1023*16).

For starters, due to a performance aberration in the EVA RAID5 code, you want at least a multiple of 4 (which 16380 is).

Next, the XFC deals with 16-block 'cache lines', and RMS sequential file access defaults tend to be set at 16 (old) or 32 (new).
Now you might observe that the XFC and RMS deal with virtual blocks within the file, and so are independent of cluster size?
Well, there is of course that one block at the end of a cluster which might cause a split I/O. No big deal, considering it will be 1 in more than 600, right?

But for indexed files, when RMS allocates an extent (every area is a new extent), RMS will align on a cluster boundary. So the only way to stay (or get) aligned on XFC cache lines there is to have the clusters be a multiple of 16.

Of course, with all of this we are not talking 100% more performance, but 'possibly noticeable'. Like 5% tops.

Why 10,000?
Because it looks nice and is still divisible by 16 :-).
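The divisibility criteria above are easy to check for the candidates mentioned in this thread:

```python
# Which candidate cluster sizes are multiples of 4 (EVA RAID5
# behaviour) and of 16 (XFC cache lines, RMS defaults)?
for c in (16384, 16382, 16380, 16368, 16000, 10240, 10000):
    print(f"{c:>5}: multiple of 4: {c % 4 == 0}, multiple of 16: {c % 16 == 0}")
```

Note that 16380 passes the multiple-of-4 test but not the multiple-of-16 test, which is why 16368, 16000, 10240, and 10000 are the nicer choices.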

fwiw,
Hein.
Bart Zorn_1
Trusted Contributor

Re: Maximum clustersize for disks

Hein,

Thank you for your explanation!

In my case I just wanted to create 4 identical files, as big as possible on the given disk. Just sequential files, nothing fancy.
So I started out with a big cluster size, just because I didn't know why I should make it smaller! That's what triggered the bug in the rebuild code.

Regards,

Bart
John Gillings
Honored Contributor

Re: Maximum clustersize for disks

Bart,
This sounds like a serious bug in the rebuild code. Could you please ensure it's formally reported to OpenVMS engineering?
A crucible of informative mistakes
John Gillings
Honored Contributor

Re: Maximum clustersize for disks

Bart,

Second look...

I've checked V7.2-*, V7.3-* and V8.2 HELP texts. They all say the absolute maximum cluster size is 16382. So, the bug where pre V8.2 systems would accept your invalid size has been fixed, now giving BADCLUSTER. That's good.

But, I'm very curious about the INIT-F-CLUSTER error (probably NOT Guy's work though). What is the EXACT size of the disk you were initializing? I realise it's unlikely, but could it be you're exactly on the cusp? Please post the output of SHOW DEV/FULL so we can see the geometry.

As Hein has pointed out, 16382 would not be a good choice for a modern disk, since it's not a multiple of 16 or even of 4. 16380 is better (a multiple of 4) and 16368 better still (a multiple of 16). Realistically, once the clusters are that big, a variation of 16 blocks either way can't be a huge deal, so although there's probably an off-by-one error in the range check, I can't imagine it would be a high priority fix.
A crucible of informative mistakes
Rudolf Wingert
Frequent Advisor

Re: Maximum clustersize for disks

Hello,
if you want to create four identical files with max file size, use the MC SYSGEN CREATE command. But be aware. To create four contigoues files you have to use a trick. You have to ask the TSC for this trick. (I can't remember that. It's long time ago I did get the answer).
Best regards R. Wingert
Bart Zorn_1
Trusted Contributor

Re: Maximum clustersize for disks

To John:

Yes, it has been reported to engineering via a service call. The engineer in Belgium (Alain Banken) was able to reproduce the problem.

The disk is a volume on an HDS box (not an XP!) of 75502080 blocks. I have it initialized with a clustersize of 1024 now.

To Rudolf:

Indeed, SYSGEN cannot do it directly. I used a combination of the LD and DFU utilities: I first created the files, then created the directories and renamed the files into them.

Thank you for your reactions.

Bart
John Gillings
Honored Contributor

Re: Maximum clustersize for disks

Bart,
Although your disk size, 75502080, is divisible by 16, it's not divisible by your proposed cluster size (any of them). There's a fairly large chunk left on the end. I'm not sure if INITIALIZE cares about it, or what it does about the left over fragment.

If you have any control over the size of the virtual unit, I'd recommend ensuring it's an exact multiple of your proposed cluster size (itself a multiple of 16). Note that your current size isn't a multiple of 1024.

Yes, everything *should* work regardless, but if you can make it easier for the algorithms by avoiding any questions of rounding or left over fragments, why not do it?
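The leftover-fragment arithmetic is quickly checked (disk size from this thread; the candidate cluster sizes are illustrative):

```python
# Remainders when dividing the 75502080-block volume by candidate
# cluster sizes; a nonzero remainder is the leftover fragment that
# INITIALIZE has to deal with at the end of the disk.
disk = 75502080
for cluster in (16, 1024, 16368, 16380):
    full, frag = divmod(disk, cluster)
    print(f"cluster {cluster:>5}: {full:>7} clusters, {frag:>5} blocks left over")
```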
A crucible of informative mistakes
Bart Zorn_1
Trusted Contributor

Re: Maximum clustersize for disks

John

Thanks, I will certainly do that the next time it comes up. I don't really have control over the exact size of the volumes on the HDS box, as they are the result of one of the many configuration models they have.

This time I have 4 neatly contiguous files and only 8192 blocks (plus the last half cluster) left unused.
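Those figures line up arithmetically (a quick check using the numbers stated in this thread):

```python
# 75502080-block disk initialized with a 1024-block cluster size:
# the trailing fragment is exactly half a cluster, and the 8192
# unused blocks are 8 whole clusters.
disk, cluster = 75502080, 1024
full_clusters, fragment = divmod(disk, cluster)
print(full_clusters, fragment)       # 73732 512
print(fragment == cluster // 2)      # True: the "last half cluster"
print(8192 // cluster)               # 8 whole clusters unused
```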

Regards,

Bart
John Gillings
Honored Contributor

Re: Maximum clustersize for disks

Bart,

Coincidentally, a fix has just been entered into the V8.2 remedial stream to fix some inconsistencies in where and how the cluster size was checked by INIT.

Apparently there were two places where the value was checked, using different constants. They're now consistent, using a value of 16380.
A crucible of informative mistakes
Bart Zorn_1
Trusted Contributor

Re: Maximum clustersize for disks

John,

That is good to hear! Of course that does not fix the bug in the rebuild code, but it will make it less likely to occur.

Regards,

Bart
Bart Zorn_1
Trusted Contributor

Re: Maximum clustersize for disks

Closed per John Travell's suggestion.