<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RAID5 or RAID6 in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736941#M36699</link>
    <description>Thanks for your insights!&lt;BR /&gt;&lt;BR /&gt;Slight change of rules... ;)&lt;BR /&gt;The server is for file serving, with around 200 employees. Still use 5+1P (RAID5), or go for RAID6 (5+2P)?&lt;BR /&gt;&lt;BR /&gt;Are there any whitepapers that deal with failure rates and the chances of failure in both situations?</description>
    <pubDate>Tue, 11 Jan 2011 21:45:53 GMT</pubDate>
    <dc:creator>Bakxm_1</dc:creator>
    <dc:date>2011-01-11T21:45:53Z</dc:date>
    <item>
      <title>RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736935#M36693</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;I'm looking for any insights you want to share on whether to choose RAID6 over RAID5. Performance (I/O) is not an issue; it's the number of drives in a single array, and failure and rebuild reliability, that I'm after.&lt;BR /&gt;Is there a rule of thumb for which RAID level to choose at a given number of drives?&lt;BR /&gt;If there are whitepapers or hard numbers, they are more than welcome.&lt;BR /&gt;&lt;BR /&gt;We're thinking of creating 6-disk and 14-disk arrays in RAID5 because of legacy configurations, but those were only 3-disk arrays (U320).&lt;BR /&gt;&lt;BR /&gt;The new hardware is a DL380 G7 with 146GB SAS disks.&lt;BR /&gt;&lt;BR /&gt;TIA, regards Marcel</description>
      <pubDate>Tue, 11 Jan 2011 20:49:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736935#M36693</guid>
      <dc:creator>Bakxm_1</dc:creator>
      <dc:date>2011-01-11T20:49:55Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736936#M36694</link>
      <description>On what controller?&lt;BR /&gt;Is this an HW-based array you're planning -- a SmartArray-based one -- or software RAID (and if so, what OS)?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 11 Jan 2011 20:55:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736936#M36694</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-01-11T20:55:08Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736937#M36695</link>
      <description>The RAID array will be based on the onboard P400(?) array controller.&lt;BR /&gt;The OS will be WS08R2 (Windows Server 2008 R2).</description>
      <pubDate>Tue, 11 Jan 2011 20:58:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736937#M36695</guid>
      <dc:creator>Bakxm_1</dc:creator>
      <dc:date>2011-01-11T20:58:33Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736938#M36696</link>
      <description>I'd go with RAID5 regardless of whether it is HW-based or software-based, and no wider than 7+1P. And configure hot spares instead.&lt;BR /&gt;&lt;BR /&gt;Drives and controllers these days have smarts in them (the SMART standard) and should proactively kick off a sparing operation. Also, the drives in question are "small", so rebuild times should be fast.&lt;BR /&gt;&lt;BR /&gt;Now, if my disks were 1 to 2 TB, I would think twice even at 3+1P... I'd go with RAID6.&lt;BR /&gt;&lt;BR /&gt;Ensure you have alerting mechanisms in place though (multiple emails) so you are aware when a sparing or rebuild has happened and can promptly do the physical replacement.&lt;BR /&gt;&lt;BR /&gt;Caveat emptor though.&lt;BR /&gt;</description>
      <pubDate>Tue, 11 Jan 2011 21:12:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736938#M36696</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-01-11T21:12:01Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736939#M36697</link>
      <description>If I understand correctly, it's more a disk-size issue (&amp;gt;1TB) when choosing between RAID5 and RAID6 than it is the total number of disks in the array?&lt;BR /&gt;&lt;BR /&gt;TIA</description>
      <pubDate>Tue, 11 Jan 2011 21:23:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736939#M36697</guid>
      <dc:creator>Bakxm_1</dc:creator>
      <dc:date>2011-01-11T21:23:12Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736940#M36698</link>
      <description>Yes -- that is STILL my current basis for deciding between RAID5 and RAID6.&lt;BR /&gt;&lt;BR /&gt;Unless, of course, your risk acceptance level is pretty low.&lt;BR /&gt;&lt;BR /&gt;I build NASes and file servers on Linux/OpenSolaris for friends and special clients using commodity parts -- PC motherboards and consumer SATA drives. For uber-critical data with no special performance requirements, it is always 3+1P with a hot spare, or 2D+2D (RAID1+0), with drive sizes of 120 to 750GB. Above that, it is always RAID6 for critical filesystems. For media-serving duties where loss of data is not critical (e.g. a DVR application), it is usually just RAID5 even for 2TB drives (very long rebuild times though, but I mostly use ZFS to do away with the long rebuild times).&lt;BR /&gt;</description>
      <pubDate>Tue, 11 Jan 2011 21:31:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736940#M36698</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-01-11T21:31:13Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736941#M36699</link>
      <description>Thanks for your insights!&lt;BR /&gt;&lt;BR /&gt;Slight change of rules... ;)&lt;BR /&gt;The server is for file serving, with around 200 employees. Still use 5+1P (RAID5), or go for RAID6 (5+2P)?&lt;BR /&gt;&lt;BR /&gt;Are there any whitepapers that deal with failure rates and the chances of failure in both situations?</description>
      <pubDate>Tue, 11 Jan 2011 21:45:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736941#M36699</guid>
      <dc:creator>Bakxm_1</dc:creator>
      <dc:date>2011-01-11T21:45:53Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736942#M36700</link>
      <description>Nope, no whitepaper that I am aware of. I'd go with 7+1 if the P400i allows it.&lt;BR /&gt;&lt;BR /&gt;One thing to be aware of is MTBF. Drives usually obey their MTBF ratings; for enterprise SAS drives it is usually over 1 million hours.&lt;BR /&gt;&lt;BR /&gt;Once a drive reaches about 70 to 80 percent of its MTBF, I usually start thinking about being proactive with spares and looking at SMART diagnostics.&lt;BR /&gt;</description>
      <pubDate>Tue, 11 Jan 2011 21:52:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736942#M36700</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-01-11T21:52:55Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736943#M36701</link>
      <description>Here's some reading on the subject:&lt;BR /&gt;&lt;A href="http://h50146.www5.hp.com/products/storage/whitepaper/pdfs/c00386950.pdf" target="_blank"&gt;http://h50146.www5.hp.com/products/storage/whitepaper/pdfs/c00386950.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;But something to take into consideration is the rebuild time with large, slow disks.&lt;BR /&gt;It could take more than a week.&lt;BR /&gt;&lt;BR /&gt;BR&lt;BR /&gt;/jag</description>
      <pubDate>Wed, 12 Jan 2011 13:29:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736943#M36701</guid>
      <dc:creator>gregersenj</dc:creator>
      <dc:date>2011-01-12T13:29:22Z</dc:date>
    </item>
    <item>
      <title>Re: RAID5 or RAID6</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736944#M36702</link>
      <description>It's not so much an issue of drive size as time to rebuild (which drive size impacts significantly).&lt;BR /&gt;&lt;BR /&gt;This is less of a problem when using only internal drive bays, as opposed to full external enclosures (with 20 or more drives available).&lt;BR /&gt;&lt;BR /&gt;You will also find that there is a notable write penalty with RAID6 over RAID5. There are lots of documents on this (Google is your friend).&lt;BR /&gt;&lt;BR /&gt;I agree with Alzhy though: 6+1+HS would be better than 7+1. If you were really worried and the write penalty wasn't a problem, 5+2+HS would be okay as well. Extra parity can never beat an online hot spare, in my opinion.&lt;BR /&gt;&lt;BR /&gt;Be careful with MTBFs. These are calculated, not always tested, values. They also mean that if you have 1 million drives with an MTBF of 1 million hours, you should expect to see one drive fail per hour. I've seen this with large compute clusters, and it maps perfectly to the MTBF (186,000 hrs: 250 nodes = 1 failure per month; 1,000 nodes = 1 per week).&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Don&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Jan 2011 14:25:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-or-raid6/m-p/4736944#M36702</guid>
      <dc:creator>Don Mallory</dc:creator>
      <dc:date>2011-01-12T14:25:08Z</dc:date>
    </item>
  </channel>
</rss>