<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Raid5 with distributed parity layout and failure question in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195874#M11141</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;if you want to know more about raid5dp, which is double parity btw, look here:&lt;BR /&gt;&lt;A href="http://www.hp.com/products1/storage/products/disk_arrays/midrange/va7410/infolibrary/index.html" target="_blank"&gt;http://www.hp.com/products1/storage/products/disk_arrays/midrange/va7410/infolibrary/index.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;greetings,&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
    <pubDate>Thu, 19 Feb 2004 04:51:36 GMT</pubDate>
    <dc:creator>Michael Schulte zur Sur</dc:creator>
    <dc:date>2004-02-19T04:51:36Z</dc:date>
    <item>
      <title>Raid5 with distributed parity layout and failure question</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195870#M11137</link>
      <description>Hey Guys!&lt;BR /&gt;&lt;BR /&gt;I have a question about Raid5 with distributed parity, to refresh my aging memory.  Basically, this question relates to two things...&lt;BR /&gt;&lt;BR /&gt;1. How parity data is actually striped and used across the disks.&lt;BR /&gt;&lt;BR /&gt;2. And a formula for how many disks can fail in a Raid5 with distributed parity, before the entire raid device goes down.&lt;BR /&gt;&lt;BR /&gt;(*Everything which follows assumes five disks...as that is what I am using for now...although a formula for less and/or more disks would be great! *)&lt;BR /&gt;&lt;BR /&gt;My understanding of raid5 with dist parity has the disk and parity laid out as in FIGURE 1 (see attached JPG if it doesn't make sense as text)&lt;BR /&gt;&lt;BR /&gt;FIGURE 1:&lt;BR /&gt;------------------------------------&lt;BR /&gt;|Disk1  Disk2  Disk3  Disk4  Disk5 |&lt;BR /&gt;|======|======|======|======|======|&lt;BR /&gt;|  1   |  2   | P1,2 |  3   |  4   |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;| P3,4 |  5   |  6   | P5,6 |  7   |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  8   | P7,8 |  9   |  10  |P9,10 |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  11  |  12  |P11,12|  13  |  14  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|P13,14|  15  |  16  |P15,16|  17  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  18  |P17,18|  19  |  20  |P19,20|&lt;BR /&gt;|__________________________________|&lt;BR /&gt;&lt;BR /&gt;...where the parity for slices 1 and 2 (P1,2) can be used to reconstruct slice 2 (using 1 and P1,2) or slice 1 (using 2 and P1,2), but P1,2 CANNOT reconstruct BOTH 1 and 2 (you must have either 1 OR 2 WITH P1,2 to reconstruct the missing data.)&lt;BR /&gt;&lt;BR /&gt;Also, if the above is the correct distribution, is the parity always distributed in this pattern...where you have 2 data slices (1 and 2), then their parity slice (P1,2), then two more data slices (3 and 4), then those two's parity slice (P3,4), ETC.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;HOWEVER, I have also seen diagrams such as the following in figure 2 ...&lt;BR /&gt;&lt;BR /&gt;(again, see the attached JPG if it doesn't lay out right as text)&lt;BR /&gt;&lt;BR /&gt;FIGURE 2:&lt;BR /&gt;------------------------------------&lt;BR /&gt;|Disk1  Disk2  Disk3  Disk4  Disk5 |&lt;BR /&gt;|======|======|======|======|======|&lt;BR /&gt;|  1a  |  2a  | 3a   |  4a  |  Pa  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  1b  |  2b  | 3b   |  Pb  |  4b  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  1c  |  2c  | Pc   |  3c  |  4c  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  1d  |  Pd  | 2d   |  3d  |  4d  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  Pe  |  1e  | 2e   |  3e  |  4e  |&lt;BR /&gt;|------+------+------+------+------|&lt;BR /&gt;|  1f  |  2f  | 3f   |  4f  |  Pf  |&lt;BR /&gt;|__________________________________|&lt;BR /&gt;&lt;BR /&gt;...which implies (to me, at least) that the parity data for slices "a" (Pa) could be used to reconstruct (for example) slice 1a by using Pa, 2a, 3a and 4a.&lt;BR /&gt;&lt;BR /&gt;IF this IS the case...then how many active data slices do you need to reconstruct from parity (I.E.
could you reconstruct BOTH 1a AND 3a using Pa, 2a, 4a?)&lt;BR /&gt;&lt;BR /&gt;The big reason for this question, besides a deeper understanding of parity distribution, is...how many disks can the raid5 with distributed parity lose before the raid is non-functional.&lt;BR /&gt;&lt;BR /&gt;If the distribution is as in the first diagram...I get a 0% chance of non-functionality with 1 disk failure, an 80% chance of non-functionality with 2 disk failures, and a 100% chance with 3 disk failures.&lt;BR /&gt;&lt;BR /&gt;However, if it is as the second diagram lays it out, you get a 0% chance of non-functionality with 1 disk failure and a 100% chance of non-functionality with 2 disk failures.&lt;BR /&gt;&lt;BR /&gt;Seems like a big difference, especially as the odds of a dysfunctional array would decrease as more disks are added to the raid setup in diagram one...but would NOT necessarily decrease according to diagram two.&lt;BR /&gt;&lt;BR /&gt;Sorry for being so long-winded, but I appreciate all the great help (as always!)&lt;BR /&gt;Mike&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Feb 2004 14:58:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195870#M11137</guid>
      <dc:creator>Mike_316</dc:creator>
      <dc:date>2004-02-18T14:58:38Z</dc:date>
    </item>
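    <!--
      A quick sketch, not part of the thread, of the failure question above: it enumerates
      which 1- and 2-disk failures destroy data under each of the two layouts as drawn
      (FIGURE 1 treated as rotating groups of 2 data slices plus their parity, FIGURE 2 as
      one parity block per full-width stripe), assuming a parity group with two missing
      members is unrecoverable. The disk count and group layout are illustrative assumptions.

      import itertools

      DISKS = 5

      # FIGURE 1: each parity group (e.g. 1, 2, P1,2) occupies 3 cyclically consecutive
      # disks, and successive groups rotate by 3 positions around the array.
      fig1_groups = [frozenset((s % DISKS, (s + 1) % DISKS, (s + 2) % DISKS))
                     for s in range(0, 3 * DISKS, 3)]

      # FIGURE 2: one parity block per stripe, so every stripe spans all 5 disks.
      fig2_groups = [frozenset(range(DISKS))]

      def survives(groups, failed):
          # A group can be rebuilt only while at most one of its members is missing.
          return all(len(g & failed) <= 1 for g in groups)

      for name, groups in (("FIGURE 1", fig1_groups), ("FIGURE 2", fig2_groups)):
          for k in (1, 2):
              combos = list(itertools.combinations(range(DISKS), k))
              fatal = sum(not survives(groups, frozenset(c)) for c in combos)
              print(f"{name}: {k} failed disk(s): {fatal}/{len(combos)} combinations lose data")
    -->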
    <item>
      <title>Re: Raid5 with distributed parity layout and failure question</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195871#M11138</link>
      <description>Oh...I guess I can add the attachment here as well.&lt;BR /&gt;&lt;BR /&gt;Sorry for taking up space!&lt;BR /&gt;Mike&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Feb 2004 15:03:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195871#M11138</guid>
      <dc:creator>Mike_316</dc:creator>
      <dc:date>2004-02-18T15:03:36Z</dc:date>
    </item>
    <item>
      <title>Re: Raid5 with distributed parity layout and failure question</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195872#M11139</link>
      <description>Ah. Interesting. Your first picture looks very similar to the way the EVA does RAID-5. It uses 4D+1P - 4 data segments and 1 parity segment, which means a fixed overhead of 25%. However, due to the underlying virtualization, the actual placement on the disks will look very different. I won't go into detail, because RAID-5 can be confusing enough.&lt;BR /&gt;&lt;BR /&gt;Your second figure looks similar to what DEC/COMPAQ/HP's StorageWorks HSx controllers use. The parity is calculated horizontally over all disks, which means that the 'overhead' becomes less with more disks. On the other hand, you have to move way more data when a disk has failed.&lt;BR /&gt;&lt;BR /&gt;I have also seen a slightly different variant that looks like this:&lt;BR /&gt;|01|02|03|04|Pa|&lt;BR /&gt;|06|07|08|Pb|05|&lt;BR /&gt;|11|12|Pc|09|10|&lt;BR /&gt;&lt;BR /&gt;Parity is still calculated horizontally over all data (Pa=01x02x03x04, Pb=05x06x07x08, and so on), but there might be a little better throughput, because a sequential read goes steadily across all 5 disks (01,02,03,04,05 then 06,07,08,09,10 and so on).&lt;BR /&gt;&lt;BR /&gt;In both cases, RAID-5 uses a single parity segment within a group of disks. For the EVA that group is always 5 disks (4D+1P); for other controllers it can be a variable number of disks.&lt;BR /&gt;&lt;BR /&gt;The array cannot reconstruct your example of 1a and 3a going bad. If more than one disk goes bad, you have lost too much data. Do you recall how RAID-5 uses the XOR mechanism to build parity or rebuild data?&lt;BR /&gt;&lt;BR /&gt;I'm not very good with math, sorry, but I think it is clear that with EVA's mechanism of 'sub-groups' or 'sub-slices' or whatever we like to call them, we can afford to lose one disk per 'sub-group'.&lt;BR /&gt;&lt;BR /&gt;There are some arrays (VA7xx0 and MSA1000) that can provide 'stronger' parity protection. On the VA7xx0 it is called RAID5DP (Double Protection, if I recall correctly) and on the MSA it is called ADG (Advanced Data Guarding).&lt;BR /&gt;&lt;BR /&gt;Both arrays, however, do NOT duplicate the parity information on a second disk - you still cannot recover 2 lost data segments that way. I have read that they are using 2 different mathematical algorithms to calculate P and Q. The formula was a little bit too big for my small brain - I didn't understand it, so I will leave it at that, OK?</description>
      <pubDate>Wed, 18 Feb 2004 16:06:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195872#M11139</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-02-18T16:06:56Z</dc:date>
    </item>
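    <!--
      A minimal sketch, not part of the thread, of the XOR mechanism mentioned above:
      parity is the byte-wise XOR of the data segments in a stripe, so any single missing
      segment can be rebuilt from the survivors plus the parity, but one parity cannot
      recover two missing segments (that is what the extra Q parity in RAID5DP/ADG
      addresses). Segment size and contents are arbitrary assumptions.

      import functools
      import os

      # A 4D+1P stripe as in the EVA example: four data segments plus one parity segment.
      data = [os.urandom(8) for _ in range(4)]

      def xor_blocks(blocks):
          # Byte-wise XOR across all blocks in the stripe.
          return bytes(functools.reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

      parity = xor_blocks(data)

      # Lose exactly one data segment: XOR of the surviving data plus the parity
      # cancels everything except the missing segment, which reappears.
      lost = 2
      survivors = [d for i, d in enumerate(data) if i != lost]
      rebuilt = xor_blocks(survivors + [parity])
      assert rebuilt == data[lost]
      print("single-segment rebuild matches original:", rebuilt == data[lost])

      # Lose two segments and the same equation has two unknowns: a single XOR parity
      # cannot separate them, so that data is gone.
    -->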
    <item>
      <title>Re: Raid5 with distributed parity layout and failure question</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195873#M11140</link>
      <description>Thanks for the excellent reply.  I am one of those sick individuals who likes both long winded replies...and brain-stifling mathematical formulas.  The information was very helpful and got me on the right track!&lt;BR /&gt;&lt;BR /&gt;Which basically means, I have discovered that my array will not support raid6, nor does it have the possibility of surviving multiple drive loss.  I can, however, daisy chain a second (or third, fourth, etc.) array to it and use them as mirrors or redundant data locations.&lt;BR /&gt;&lt;BR /&gt;Thanks again!&lt;BR /&gt;Mike&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Feb 2004 20:58:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195873#M11140</guid>
      <dc:creator>Mike_316</dc:creator>
      <dc:date>2004-02-18T20:58:47Z</dc:date>
    </item>
    <item>
      <title>Re: Raid5 with distributed parity layout and failure question</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195874#M11141</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;if you want to know more about raid5dp, which is double parity btw, look here:&lt;BR /&gt;&lt;A href="http://www.hp.com/products1/storage/products/disk_arrays/midrange/va7410/infolibrary/index.html" target="_blank"&gt;http://www.hp.com/products1/storage/products/disk_arrays/midrange/va7410/infolibrary/index.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;greetings,&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Feb 2004 04:51:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid5-with-distributed-parity-layout-and-failure-question/m-p/3195874#M11141</guid>
      <dc:creator>Michael Schulte zur Sur</dc:creator>
      <dc:date>2004-02-19T04:51:36Z</dc:date>
    </item>
  </channel>
</rss>

