<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RMS Block &amp; Buffer counts in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570222#M6283</link>
    <description>For what it's worth, a block size of 120 may be a good choice for many hardware technologies.  It is a multiple of 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, &amp;amp; 60.&lt;BR /&gt;&lt;BR /&gt;But if 16 is an important factor, then you may want to pick 112, which has factors 2, 4, 7, 8, 14, 16, 28, &amp;amp; 56.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 24 Jun 2005 12:39:06 GMT</pubDate>
    <dc:creator>Garry Fruth</dc:creator>
    <dc:date>2005-06-24T12:39:06Z</dc:date>
    <item>
      <title>RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570218#M6279</link>
      <description>In this thread: &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=917271" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=917271&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;It was said that on an EVA the optimum settings for a backup are Block=124 &amp;amp; buff=3. I was wondering how this was determined, and what the optimum(s) might be for non-EVA drives.</description>
      <pubDate>Fri, 24 Jun 2005 08:34:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570218#M6279</guid>
      <dc:creator>Aaron Lewis_1</dc:creator>
      <dc:date>2005-06-24T08:34:44Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570219#M6280</link>
      <description>The comment about block=124 was that the EVA works best with I/O requests that are a multiple of 4, and 124 is the largest multiple of 4 that is less than 127 (the current maximum block size for COPY).&lt;BR /&gt;&lt;BR /&gt;The comment about using 3 buffers is that when accessing a file sequentially (as COPY does), using a few buffers with read-ahead helps performance. 3 may not be the optimum, but it is a reasonable starting point.</description>
      <pubDate>Fri, 24 Jun 2005 08:47:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570219#M6280</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-06-24T08:47:27Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570220#M6281</link>
      <description>Aaron,&lt;BR /&gt;&lt;BR /&gt;I can concur with Ian on the comment about 124 for the blocking factor. A buffering factor of less than 3 is unreasonable.&lt;BR /&gt;&lt;BR /&gt;That said, I have done quite a bit of work using much higher buffering factors, with impressive performance gains. It depends upon your configuration and workload.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 24 Jun 2005 09:29:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570220#M6281</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2005-06-24T09:29:13Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570221#M6282</link>
      <description>&lt;BR /&gt;The multiple of 4 is specifically critical for RAID-5 on the EVA. Any other (storage sub)system will have different values.  But anyway, multiples of 8 or 16 are likely to be happy choices no matter what!&lt;BR /&gt;&lt;BR /&gt;For VMS specifically, a multiple of 16 also helps the XFC algorithms within the files.&lt;BR /&gt;So my recommendation is actually /BLOCK=96 (or 112).&lt;BR /&gt;I have not recently verified this with experiments.&lt;BR /&gt;&lt;BR /&gt;Please note that this multiple of 4 in the buffer size only helps if you start out aligned! If you start out 'odd', then a multiple of 4 will guarantee it will never be right again ;-(. &lt;BR /&gt;The solution/recommendation from this is to select a CLUSTERSIZE that is a power of 2: for example 8, 16, 32, 64, 128, 256, 512, or even 1024 if/when you deal mostly with large files.&lt;BR /&gt;&lt;BR /&gt;Many moons ago, while in RMS Engineering, I ran serious experiments with the number of buffers. Best I can tell, those settings are NOT useful for COPY, as it does its own block I/O and does not pick up the RMS defaults (for now?).&lt;BR /&gt;The SET RMS values are used by RMS for record I/O tasks, and the CRTL will also pick up the values for its optimizations.&lt;BR /&gt;&lt;BR /&gt;From those experiments back then, I recall that (obviously) going from 1 to 2 buffers made the biggest change. Beyond 4 buffers I saw only very small further improvements. With larger buffer sizes, I suspect that 3 buffers will get you to within 95% of the absolute maximum reachable.&lt;BR /&gt;&lt;BR /&gt;Greetings,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Fri, 24 Jun 2005 10:11:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570221#M6282</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-06-24T10:11:48Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570222#M6283</link>
      <description>For what it's worth, a block size of 120 may be a good choice for many hardware technologies.  It is a multiple of 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, &amp;amp; 60.&lt;BR /&gt;&lt;BR /&gt;But if 16 is an important factor, then you may want to pick 112, which has factors 2, 4, 7, 8, 14, 16, 28, &amp;amp; 56.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 24 Jun 2005 12:39:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570222#M6283</guid>
      <dc:creator>Garry Fruth</dc:creator>
      <dc:date>2005-06-24T12:39:06Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570223#M6284</link>
      <description>The maximum payload of a Fibre Channel frame is 2112 bytes. There you have your multiple of 4: 4*512 = 2048. Theoretically the Fibre Channel hardware can send several megabytes in one I/O - the segmenting into multiple frames is supposed to be done entirely in hardware.&lt;BR /&gt;&lt;BR /&gt;I really don't see how the value of 4 is related to EVA's VRAID-5 implementation:&lt;BR /&gt;the EVA uses a chunk size of 128 KBytes, and it will attempt to coalesce multiple writes if they're smaller than the chunk size. VRAID-5 uses a 4D+1P mechanism, so a full stripe covers 4*128=512 KBytes of user data and hits 5 different disk drives.</description>
      <pubDate>Fri, 24 Jun 2005 13:07:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570223#M6284</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-06-24T13:07:49Z</dc:date>
    </item>
    <item>
      <title>Re: RMS Block &amp; Buffer counts</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570224#M6285</link>
      <description>&amp;gt;&amp;gt; I really don't see how the value of 4 is related to EVA's VRAID-5 implementation:&lt;BR /&gt;&lt;BR /&gt;It is very hard to believe, but the effect is there.&lt;BR /&gt;The problem is in the EVA algorithm that detects full-stripe writes.&lt;BR /&gt;For those cases, the RAID-5 can just plunk down the parity chunk, calculated directly from the data stream, without a pre-read.&lt;BR /&gt;So 4 OS writes become 5 disk writes.&lt;BR /&gt;If it does not detect a full stripe, then each OS write turns into read-old-data, read-old-parity, calculate-new-parity, write-new-data, write-new-parity: 2 reads + 2 writes for each.&lt;BR /&gt;&lt;BR /&gt;[Note, this is all from hallway conversation and coffee-corner speculation. It is not an official engineering answer.]&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;Hein.</description>
      <pubDate>Fri, 24 Jun 2005 15:35:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rms-block-amp-buffer-counts/m-p/3570224#M6285</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-06-24T15:35:35Z</dc:date>
    </item>
  </channel>
</rss>

