Operating System - OpenVMS

RMS Block & Buffer counts

 
Aaron Lewis_1
Frequent Advisor

RMS Block & Buffer counts

In this thread: http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=917271

It was said that on an EVA the optimum settings for a backup are Block=124 & buff=3. I was wondering how this was determined, and what the optimum(s) might be for non-EVA drives.
Ian Miller.
Honored Contributor

Re: RMS Block & Buffer counts

The comment about block=124 was that the EVA works best with I/O requests that are a multiple of 4 blocks, and 124 is the largest multiple of 4 that is less than 127 (the current maximum block count for COPY).

The comment about using 3 buffers is that when accessing a file sequentially (as COPY does), using a few buffers with read-ahead helps performance. 3 may not be the absolute optimum, but it is a reasonable starting point.
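For reference, a minimal sketch of how those two values would be set for the current process (the numbers are just the ones discussed above, and, as noted later in the thread, COPY does its own block I/O and may not honor them):

$ ! Set the process RMS defaults to the values discussed above
$ SET RMS_DEFAULT/BLOCK_COUNT=124/BUFFER_COUNT=3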
____________________
Purely Personal Opinion
Robert Gezelter
Honored Contributor

Re: RMS Block & Buffer counts

Aaron,

I concur with Ian on the comment about 124 for the blocking factor. A buffering factor of less than 3 is unreasonable.

That said, I have done quite a bit of work using very much higher buffering factors, with impressive performance gains. It depends upon your configuration and workload.
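As an illustration of what "higher buffering factors" means in practice (the value 8 is an arbitrary example, not a recommendation from this thread):

$ SHOW RMS_DEFAULT                  ! inspect the current process and system defaults
$ SET RMS_DEFAULT/BUFFER_COUNT=8    ! raise the default for this process only
$ ! A suitably privileged user can change the system-wide default by adding /SYSTEM.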

- Bob Gezelter, http://www.rlgsc.com
Hein van den Heuvel
Honored Contributor

Re: RMS Block & Buffer counts


The multiple of 4 is specifically critical for RAID-5 on the EVA. Any other (storage sub-)system will have different values. But anyway, multiples of 8 or 16 are likely to be happy choices no matter what!

For VMS specifically, a multiple of 16 also helps the XFC algorithms within the files.
So my recommendation is actually /BLOCK=96 (or 112).
I have not recently verified this with experiments.
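To spell out the arithmetic behind that recommendation (nothing here beyond the numbers already quoted):

  112 = 7 x 16   (the largest multiple of 16 that stays below the 127 limit)
   96 = 6 x 16
Any multiple of 16 is also a multiple of 4, so either value still satisfies the EVA observation above.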

Please note that this multiple of 4 in the buffer size only helps if you start out aligned! If you start out 'odd', then a multiple of 4 guarantees it will never be right again ;-(.
The solution/recommendation that follows from this is to select a CLUSTERSIZE that is a power of 2: for example 8, 16, 32, 64, 128, 256, 512, or even 1024 if/when you deal mostly with large files.
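A small DCL sketch of the cluster-size part (the device and volume names are placeholders, not from the thread):

$ SHOW DEVICE/FULL DKA100:                     ! the volume's cluster size appears in this display
$ INITIALIZE/CLUSTER_SIZE=16 DKA200: SCRATCH   ! pick a power of 2 when initializing a new volume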

Many moons ago, while in RMS Engineering, I ran serious experiments with the number of buffers. As best I can tell, those settings are NOT useful for COPY, as it does its own block I/O and does not pick up the RMS defaults (for now?).
The SET RMS values are used by RMS for record I/O tasks, and the CRTL will also pick up the values for its optimizations.

From those experiments back then, I recall that (obviously) going from 1 to 2 buffers made the biggest change. Beyond 4 buffers I saw only very small further improvements. With larger buffer sizes, I suspect that 3 buffers will get you to within 95% of the absolute maximum reachable.
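For anyone who wants to repeat that kind of experiment, a rough sketch (the file name, string, and counts are placeholders; note that XFC caching will make a repeat pass over the same file faster regardless of the settings, so alternate the order of the tests or use a freshly mounted volume):

$ SET RMS_DEFAULT/BLOCK_COUNT=112/BUFFER_COUNT=2
$ WRITE SYS$OUTPUT F$TIME()
$ SEARCH BIG_TEST.DAT "no-such-string"/OUTPUT=NL:   ! record-mode pass over the file
$ WRITE SYS$OUTPUT F$TIME()
$ SET RMS_DEFAULT/BLOCK_COUNT=112/BUFFER_COUNT=4
$ WRITE SYS$OUTPUT F$TIME()
$ SEARCH BIG_TEST.DAT "no-such-string"/OUTPUT=NL:
$ WRITE SYS$OUTPUT F$TIME()
$ ! compare the elapsed times of the two passes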

Greetings,
Hein.
Garry Fruth
Trusted Contributor

Re: RMS Block & Buffer counts

For what it's worth, a block size of 120 may be a good choice for many hardware technologies. It is a multiple of 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, & 60.

But if 16 is an important factor, then you may want to pick 112, which has factors 2, 4, 7, 8, 14, 16, 28, & 56.

Uwe Zessin
Honored Contributor

Re: RMS Block & Buffer counts

The maximum payload of a Fibre Channel frame is 2112 bytes. There you have your multiple of 4: 4*512 = 2048. Theoretically the Fibre Channel hardware can send several megabytes in one I/O - the segmenting into multiple frames is supposed to be done entirely in hardware.

I really don't see how the value of 4 is related to EVA's VRAID-5 implementation:
the EVA uses a chunk size of 128 KBytes and it will attempt to coalesce multiple writes if they're smaller than the chunk size. VRAID-5 uses a 4D+1P mechanism, so a full stripe covers 4*128 = 512 KBytes of user data and hits 5 different disk drives.
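Putting the sizes mentioned in this thread side by side (just arithmetic on numbers already quoted):

  124 blocks x 512 bytes = 63,488 bytes (~62 KB) per transfer
  EVA chunk size         = 128 KB
  full VRAID-5 stripe    = 4 x 128 KB = 512 KB (= 1024 blocks)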
Hein van den Heuvel
Honored Contributor

Re: RMS Block & Buffer counts

>> I really don't see how the value of 4 is related to EVA's VRAID-5 implementation:

It is very hard to believe, but the effect is there.
The problem is in the EVA algorithm that detects full-stripe writes.
For those cases the RAID-5 can just plunk down the parity chunk, calculated directly from the data stream, with no pre-read needed.
So 4 OS writes become 5 disk writes.
If it does not detect a full stripe, then each OS write turns into read-old-data, read-old-parity, calculate-new-parity, write-new-data, write-new-parity: 2 reads + 2 writes for each.
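Spelling out that cost difference for a group of 4 OS writes:

  full stripe detected:     4 data writes + 1 parity write = 5 disk operations
  full stripe not detected: 4 x (2 reads + 2 writes)       = 16 disk operations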

[Note: this is all from hallway conversation and coffee-corner speculation. It is not an official engineering answer.]

fwiw,
Hein.