<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic System Speed in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504694#M67378</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have some Oracle database files that were located on a four-disk stripeset. We have now moved some of these files onto another disk, which is a 3-disk RAID5... now the system seems to have slowed. Would this have had an effect, and if so, how much difference is there? Or should I be looking elsewhere?</description>
    <pubDate>Tue, 15 Mar 2005 04:47:30 GMT</pubDate>
    <dc:creator>Peter Clarke</dc:creator>
    <dc:date>2005-03-15T04:47:30Z</dc:date>
    <item>
      <title>System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504694#M67378</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have some Oracle database files that were located on a four-disk stripeset. We have now moved some of these files onto another disk, which is a 3-disk RAID5... now the system seems to have slowed. Would this have had an effect, and if so, how much difference is there? Or should I be looking elsewhere?</description>
      <pubDate>Tue, 15 Mar 2005 04:47:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504694#M67378</guid>
      <dc:creator>Peter Clarke</dc:creator>
      <dc:date>2005-03-15T04:47:30Z</dc:date>
    </item>
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504695#M67379</link>
      <description>There is some overhead in maintaining RAID5 compared to a stripeset. How much change have you seen, and are the SCSI bus and disks the same?</description>
      <pubDate>Tue, 15 Mar 2005 05:08:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504695#M67379</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-03-15T05:08:17Z</dc:date>
    </item>
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504696#M67380</link>
      <description>On your 4-disk stripeset, all 4 members are actively writing data; depending on the chunk size, the database IOs are spread across all disks.&lt;BR /&gt;&lt;BR /&gt;On the other hand, on a 3-disk RAID5 set, for every write the corresponding parity block has to be read, the parity recalculated, and the parity block rewritten (a somewhat simplistic description). How big this performance penalty is depends largely on the available cache in the RAID controller.&lt;BR /&gt;Without cache, RAID5 is not good at writing.&lt;BR /&gt;&lt;BR /&gt;But note that a 4-disk stripeset has a very poor MTBF, because if one disk fails the whole set fails. This is normally avoided by shadowing the stripeset members.&lt;BR /&gt;&lt;BR /&gt;Regards, Kalle</description>
      <pubDate>Tue, 15 Mar 2005 05:09:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504696#M67380</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2005-03-15T05:09:10Z</dc:date>
    </item>
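    <!-- A minimal, hypothetical Python sketch of the RAID5 small-write parity update
         described in the post above: read the old data and old parity, recompute the
         parity by XOR, then write the new data and new parity. The function name and
         block handling are illustrative only, not any real controller's API.

         def raid5_small_write_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
             """Return the new parity block for an in-place small write.

             Physical IOs implied: 2 reads (old data, old parity) and
             2 writes (new data, new parity), i.e. 4 IOs per logical write.
             """
             assert len(old_data) == len(old_parity) == len(new_data)
             # new_parity = old_parity XOR old_data XOR new_data
             return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))
    -->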
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504697#M67381</link>
      <description>&lt;BR /&gt;As pointed out, and as you probably knew, RAID5 writes involve additional IOs. For each simple write (not spanning chunks), the storage system needs to read the original data, read the parity, write the new data, and write the new parity.&lt;BR /&gt;So 4 IOs for the price of 1.&lt;BR /&gt;For a stripeset that is just 1 IO done for 1 IO issued.&lt;BR /&gt;Now the controller, thanks to write-back caching technologies, will hide the increased write latency. It will probably report near-instant writes (1ms?)... faster than a single disk IO (5ms?). However, those IOs still need to be done even if the writer is not waiting, and the disks will be kept busy, which can seriously disturb read IO. Readers have no choice but to wait.&lt;BR /&gt;&lt;BR /&gt;Now let's exaggerate a little. Let's say each disk can do 4 IO/sec, and you issue 1 write/sec and 3 reads/sec to the set, all random over the block space.&lt;BR /&gt;&lt;BR /&gt;With the stripeset you have 4 physical IO/sec going to 4 disks that can handle 16 IO/sec. Very simplistically speaking, you have roughly 1-in-4 odds of having to wait for an IO in progress.&lt;BR /&gt;With the 3-member RAID5 you will need 7 physical IO/sec and can handle only 12 IO/sec. Suddenly you have better than 50% odds that a read will be delayed.&lt;BR /&gt;&lt;BR /&gt;For a 99% read load, a RAID5 set will perform about as well as a stripeset with the same number of members. For anything less than about 90% read, you'll need MORE members in a RAID5 set than in a stripeset to handle the 30% (or worse) increase in physical IOs.&lt;BR /&gt;You have FEWER members, so you can expect a slowdown even with modest activity.&lt;BR /&gt;&lt;BR /&gt;The exact behaviour of course depends on many factors: IO controller, bus, disk speeds, read/write ratio, IO/sec, MB/IO, and so on.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 15 Mar 2005 08:19:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504697#M67381</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-03-15T08:19:13Z</dc:date>
    </item>
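    <!-- A rough Python sketch of the physical-IO arithmetic in the post above, under
         the same simplified assumptions (random IO, 4 IO/sec per disk, an offered load
         of 3 reads/sec and 1 write/sec). The constant and function are illustrative only.

         DISK_IOPS = 4  # assumed per-disk capability from the example above

         def physical_load(reads_per_sec: float, writes_per_sec: float, members: int, raid5: bool):
             """Return (physical IO/sec generated, rough set capacity, utilisation)."""
             # RAID5 turns each small logical write into 4 physical IOs
             # (read old data, read old parity, write new data, write new parity);
             # a plain stripeset does 1 physical IO per logical IO.
             write_cost = 4 if raid5 else 1
             physical = reads_per_sec + writes_per_sec * write_cost
             capacity = members * DISK_IOPS
             return physical, capacity, physical / capacity

         # 4-member stripeset: 4 physical IO/sec against 16 IO/sec capacity (about 25% busy)
         print(physical_load(3, 1, members=4, raid5=False))
         # 3-member RAID5: 7 physical IO/sec against 12 IO/sec capacity (about 58% busy)
         print(physical_load(3, 1, members=3, raid5=True))
    -->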
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504698#M67382</link>
      <description>Hein,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;So 4 IOs for the price of 1.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;I would rather define that as:&lt;BR /&gt;you have 1 (functional) IO for the price of 4 (actual) IOs!  :-)&lt;BR /&gt;&lt;BR /&gt;Cheers.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Tue, 15 Mar 2005 09:13:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504698#M67382</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-03-15T09:13:10Z</dc:date>
    </item>
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504699#M67383</link>
      <description>RAID5 was a great idea when disks were small and expensive. It provided a fairly good tradeoff: large(r)-capacity virtual drives with reasonably good reliability for a moderate reduction in performance.&lt;BR /&gt;&lt;BR /&gt;Today disks are large and cheap. The RAID5 tradeoff is a false economy. For a small increase in price you can RAID0+1 all volumes, or, even better, host-based volume shadow your stripesets. You get excellent reliability and performance, even in failure mode.&lt;BR /&gt;&lt;BR /&gt;Even worse... we find people think RAID5 volumes made from modern high-capacity drives are "too big", so they partition them. Contention between the partitions makes performance terrible (the worst possible seek pattern), and, if a physical disk ever fails, performance is truly atrocious.&lt;BR /&gt;&lt;BR /&gt;RAID5 is a technology that's had its day. Please try to avoid it!</description>
      <pubDate>Tue, 15 Mar 2005 15:17:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504699#M67383</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2005-03-15T15:17:24Z</dc:date>
    </item>
    <item>
      <title>Re: System Speed</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504700#M67384</link>
      <description>Amen to John.&lt;BR /&gt;&lt;BR /&gt;RAID5 leaves you very vulnerable after you've had a failure, while the entire storage set rebuilds to include a spare drive. A failure during this time means you're toast. If you then want to replace the failed drive in the RAID set, you remove the spare and add the replacement, creating another window of vulnerability.</description>
      <pubDate>Wed, 16 Mar 2005 19:59:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-speed/m-p/3504700#M67384</guid>
      <dc:creator>Tom O'Toole</dc:creator>
      <dc:date>2005-03-16T19:59:39Z</dc:date>
    </item>
  </channel>
</rss>

