06-24-2005 01:34 AM
RMS Block & Buffer counts
It was said that on an EVA the optimum settings for a backup are Block=124 & buff=3. I was wondering how this was determined, and what the optimum(s) might be for non-EVA drives.
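For reference, a minimal DCL sketch of where such values are applied (the 124 and 3 are just the figures quoted above, not a verified optimum; add /SYSTEM to make the change system-wide):

    $ ! Inspect the current RMS defaults
    $ SHOW RMS_DEFAULT
    $ ! Set the multiblock count and multibuffer count for this process
    $ SET RMS_DEFAULT /BLOCK_COUNT=124 /BUFFER_COUNT=3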
06-24-2005 01:47 AM
Re: RMS Block & Buffer counts
The comment about using 3 buffers is that when accessing a file sequentially (as COPY does), using a few buffers together with read-ahead helps performance. 3 may not be the optimum, but it is a reasonable starting point.
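A hedged variation on that starting point: SET RMS_DEFAULT can scope the multibuffer count to just sequential disk access, where the read-ahead applies:

    $ ! 3 buffers for sequential disk files only, leaving other
    $ ! file organizations at their existing defaults
    $ SET RMS_DEFAULT /SEQUENTIAL /DISK /BUFFER_COUNT=3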
Purely Personal Opinion
06-24-2005 02:29 AM
Re: RMS Block & Buffer counts
I can concur with Ian on the comment about 124 for the blocking factor. A buffering factor of less than 3 is unreasonable.
That said, I have done quite a bit of work using far higher buffering factors, with impressive performance gains. It depends upon your configuration and workload.
- Bob Gezelter, http://www.rlgsc.com
06-24-2005 03:11 AM
Re: RMS Block & Buffer counts
The multiple of 4 is specifically critical for RAID-5 on the EVA. Any other (storage sub-)system will have different values. But anyway, multiples of 8 or 16 are likely to be happy choices no matter what!
For VMS specifically, a multiple of 16 also helps the XFC algorithms within the files.
So my recommendation is actually /BLOCK=96 (or 112).
I have not recently verified this with experiments.
Please note that this multiple of 4 in the buffer size only helps if you start out aligned! If you start out 'odd', then a multiple of 4 will guarantee it will never be right again ;-(.
The solution/recommendation that follows from this is to select a CLUSTERSIZE that is a power of 2: for example 8, 16, 32, 64, 128, 256, 512, or even 1024 if/when you deal mostly with large files.
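A DCL sketch of those two recommendations (the device name and volume label are hypothetical, and INITIALIZE erases the volume, so this is illustration only):

    $ ! Multiblock count as a multiple of 16, per the above
    $ SET RMS_DEFAULT /BLOCK_COUNT=96
    $ ! The cluster size is fixed when the volume is initialized;
    $ ! pick a power of 2 (WARNING: this erases DKA100:)
    $ INITIALIZE /CLUSTER_SIZE=64 DKA100: USERDATA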
Many moons ago, while in RMS Engineering, I ran serious experiments with the number of buffers. As best I can tell, those settings are NOT useful for COPY, as it does its own block I/O and does not pick up the RMS defaults (for now?).
The SET RMS values are used by RMS for record I/O tasks, and the CRTL will also pick up the values for its optimizations.
From those experiments back then, I recall that (obviously) going from 1 to 2 buffers made the biggest change. Beyond 4 buffers I saw only very small further improvements. With larger buffer sizes, I suspect that 3 buffers will get you to within 95% of the absolute reachable maximum.
Greetings,
Hein.
06-24-2005 05:39 AM
Re: RMS Block & Buffer counts
But if 16 is an important factor, then you may want to pick 112, which has factors 2, 4, 7, 8, 14, 16, 28, & 56.
06-24-2005 06:07 AM
Re: RMS Block & Buffer counts
I really don't see how the value of 4 is related to the EVA's VRAID-5 implementation: the EVA uses a chunk size of 128 KB, and it will attempt to coalesce multiple writes if they are smaller than the chunk size. VRAID-5 uses a 4D+1P mechanism, so a full stripe covers 4*128 = 512 KB of user data and hits 5 different disk drives.
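Putting those figures into 512-byte disk blocks, a back-of-envelope DCL calculation (nothing EVA-specific is queried here, just the arithmetic from the numbers above):

    $ chunk_blocks = (128 * 1024) / 512   ! one 128 KB chunk = 256 blocks
    $ stripe_blocks = 4 * chunk_blocks    ! 4D+1P full stripe = 1024 data blocks
    $ WRITE SYS$OUTPUT "Chunk: ", chunk_blocks, "  Stripe: ", stripe_blocks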
06-24-2005 08:35 AM
Re: RMS Block & Buffer counts
It is very hard to believe, but the effect is there.
The problem is in the EVA algoritme that detects full stripe writes.
For those cases the raid-5 can just plunk down the parity chunk calculated directly from the datastream with the pre-read.
So 4 OS writes become 5 disk writes.
If it does not detect a full stripe, then each OS writes turns into read-old-data, read-old-parity, calculate new parity, write-new-data, write-new-parity: 2 reads + 2 writes for each.
[Note: this is all from hallway conversation and coffee-corner speculation. It is not an official engineering answer.]
fwiw,
Hein.