<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: raid 0+1 in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691773#M18756</link>
    <description>Hi zungwon,&lt;BR /&gt;Please find the attached document. I hope this helps clarify things.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Shameer</description>
    <pubDate>Sun, 29 Jan 2006 23:04:59 GMT</pubDate>
    <dc:creator>Shameer.V.A</dc:creator>
    <dc:date>2006-01-29T23:04:59Z</dc:date>
    <item>
      <title>raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691766#M18749</link>
      <description>Is the RAID configuration the same between 0+1 and 1+0?</description>
      <pubDate>Wed, 14 Dec 2005 19:36:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691766#M18749</guid>
      <dc:creator>???_185</dc:creator>
      <dc:date>2005-12-14T19:36:37Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691767#M18750</link>
      <description>Yep - they both mean the same thing (striped mirroring) and the terms are often used interchangeably.</description>
      <pubDate>Thu, 15 Dec 2005 00:50:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691767#M18750</guid>
      <dc:creator>Srinivasa_6</dc:creator>
      <dc:date>2005-12-15T00:50:13Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691768#M18751</link>
      <description>I don't recall whether there is any standard that defines which is which (striped mirrors vs. mirrored stripes). I never use 0+1 or 1+0, but always talk about SM or MS to avoid confusion.</description>
      <pubDate>Thu, 15 Dec 2005 03:11:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691768#M18751</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-12-15T03:11:03Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691769#M18752</link>
      <description>I stand corrected. In fact, there is a slight difference. &lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.ofb.net/~jheiss/raid10/" target="_blank"&gt;http://www.ofb.net/~jheiss/raid10/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 15 Dec 2005 03:44:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691769#M18752</guid>
      <dc:creator>Srinivasa_6</dc:creator>
      <dc:date>2005-12-15T03:44:17Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691770#M18753</link>
      <description>Hi folks!&lt;BR /&gt;&lt;BR /&gt;In my humble opinion there is a huge difference...&lt;BR /&gt;&lt;BR /&gt;Mirrored stripes are dangerous to use!&lt;BR /&gt;They are in fact two stripes that you mirror.&lt;BR /&gt;&lt;BR /&gt;If you lose one disk drive, the entire stripe is lost and you're running on a RAID 0. That means you can only lose one disk in that configuration.&lt;BR /&gt;&lt;BR /&gt;Striped mirrors, on the other hand, are multiple mirrors that you stripe. That means you can lose up to 50% of your drives and still be up and running.&lt;BR /&gt;&lt;BR /&gt;I would never set up a mirrored stripe in a working environment.&lt;BR /&gt;&lt;BR /&gt;But...&lt;BR /&gt;Who says that I'm always right? Actually, I was wrong once in 1976. (-;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Dragomir Zekic</description>
      <pubDate>Thu, 15 Dec 2005 05:11:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691770#M18753</guid>
      <dc:creator>DragomirAtea</dc:creator>
      <dc:date>2005-12-15T05:11:24Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691771#M18754</link>
      <description>Dragomir, your explanation is almost correct.&lt;BR /&gt;&lt;BR /&gt;If you lose a drive from each set, you lose access to your LUN.&lt;BR /&gt;With 0+1:&lt;BR /&gt; Drives 1 2 3 4&lt;BR /&gt;Set 1 = 0 0 0 0&lt;BR /&gt;Set 2 = 0 0 0 0&lt;BR /&gt;&lt;BR /&gt;Set 1 is striped and mirrored with Set 2. If you lose a drive in Set 1, say drive 2, no problem: you have Set 2 as backup. BUT if you then lose ANY drive in Set 2, you have lost all access.&lt;BR /&gt;&lt;BR /&gt;With 1+0:&lt;BR /&gt; Drives 1 2 3 4&lt;BR /&gt;Set 1 = 0 0 0 0&lt;BR /&gt;Set 2 = 0 0 0 0&lt;BR /&gt;&lt;BR /&gt;Each drive from one set is mirrored with the corresponding drive in the other set, then striped: Set 1 drive 1 is mirrored with Set 2 drive 1, and so on down the line. If we lose drive 2 of Set 1, no problem: we have the other drive 2 as a mirror. The only way you could lose access is if you also lost drive 2 of Set 2. As Dragomir stated, you can lose up to 50% of your drives, as long as no mirrored pair loses both members.&lt;BR /&gt;&lt;BR /&gt;It is a matter of odds.&lt;BR /&gt;BTW, MSA and Smart Array controllers do RAID 1+0.</description>
      <pubDate>Thu, 15 Dec 2005 09:18:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691771#M18754</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-12-15T09:18:35Z</dc:date>
    </item>
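John's walk-through above can be illustrated with a small counting sketch. This is an editorial illustration, not any specific controller's behavior: it assumes a hypothetical 8-drive array (two 4-drive halves) and counts which two-drive failure combinations each layout survives.

```python
from itertools import combinations

# Hypothetical 8-drive array: drives 0-3 in one half, 4-7 in the other.
drives = list(range(8))

# RAID 0+1: two stripes {0,1,2,3} and {4,5,6,7}, mirrored to each other.
# The array survives as long as at least one whole stripe is intact.
def raid01_survives(failed):
    stripe_a, stripe_b = {0, 1, 2, 3}, {4, 5, 6, 7}
    return stripe_a.isdisjoint(failed) or stripe_b.isdisjoint(failed)

# RAID 1+0: four mirrored pairs (0,4), (1,5), (2,6), (3,7), striped.
# The array survives as long as no pair loses both of its members.
def raid10_survives(failed):
    pairs = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]
    return all(not p.issubset(failed) for p in pairs)

# Count which two-drive failure combinations each layout survives.
two_failures = list(combinations(drives, 2))
survived_01 = sum(raid01_survives(set(f)) for f in two_failures)
survived_10 = sum(raid10_survives(set(f)) for f in two_failures)
print(survived_01, "of", len(two_failures))  # 0+1: 12 of 28
print(survived_10, "of", len(two_failures))  # 1+0: 24 of 28
```

Under these assumptions RAID 1+0 survives 24 of the 28 possible double failures, while RAID 0+1 survives only 12, which matches John's "matter of odds" point.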
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691772#M18755</link>
      <description>John,&lt;BR /&gt;how can I find which drives are paired in the RAID 1 part of a RAID 1+0 array on SA (MSA) controllers?&lt;BR /&gt;If I split a 1+0 array between 2 channels, are the members of a pair on different channels?&lt;BR /&gt;&lt;BR /&gt;This is important for understanding how an array in an MSA can survive an enclosure failure.</description>
      <pubDate>Sat, 28 Jan 2006 14:17:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691772#M18755</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-01-28T14:17:14Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691773#M18756</link>
      <description>Hi zungwon,&lt;BR /&gt;Please find the attached document. I hope this helps clarify things.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Shameer</description>
      <pubDate>Sun, 29 Jan 2006 23:04:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691773#M18756</guid>
      <dc:creator>Shameer.V.A</dc:creator>
      <dc:date>2006-01-29T23:04:59Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691774#M18757</link>
      <description>Basil,&lt;BR /&gt;&lt;BR /&gt;Take 6 drives, 3 from shelf A and 3 from shelf B. Disk A1 and B1 will be mirrored, A2 and B2, and so on.</description>
      <pubDate>Mon, 30 Jan 2006 09:54:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691774#M18757</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2006-01-30T09:54:14Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691775#M18758</link>
      <description>Thank you, John!&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Jan 2006 12:44:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691775#M18758</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-01-30T12:44:34Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691776#M18759</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;There is a slight difference.&lt;BR /&gt;&lt;BR /&gt;RAID 0+1 is a configuration where multiple disks are striped together into sets (sets A &amp;amp; B in the diagram, each set being as large as the resulting final volume), and then two or more sets are mirrored together.&lt;BR /&gt;&lt;BR /&gt;RAID 1+0 is a configuration where two or more drives are mirrored together (mirrors 1-4 in the diagram), and then the mirrors (as many as are needed to reach the desired amount of space) are striped together.</description>
      <pubDate>Tue, 31 Jan 2006 00:48:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691776#M18759</guid>
      <dc:creator>dipesh_2</dc:creator>
      <dc:date>2006-01-31T00:48:59Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691777#M18760</link>
      <description>Hi Zung,&lt;BR /&gt;&lt;BR /&gt;DEFINITELY IT IS NOT THE SAME.&lt;BR /&gt;&lt;BR /&gt;Simply:&lt;BR /&gt;&lt;BR /&gt;raid 0+1 ==&amp;gt; striping the mirrored volume&lt;BR /&gt;&lt;BR /&gt;raid 1+0 ==&amp;gt; mirroring the striped volume&lt;BR /&gt;&lt;BR /&gt;The redundancy differs between the two configs:&lt;BR /&gt;a mirror can survive a single disk failure and a stripe can't.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;With Regards,&lt;BR /&gt;&lt;BR /&gt;Siva&lt;BR /&gt;</description>
      <pubDate>Tue, 31 Jan 2006 01:12:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691777#M18760</guid>
      <dc:creator>Sivakumar TS</dc:creator>
      <dc:date>2006-01-31T01:12:08Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691778#M18761</link>
      <description>dipesh and Siva have exactly opposite definitions.&lt;BR /&gt;&lt;BR /&gt;That's why I said I always talk about striped/mirrors or mirrored/stripes ;-)</description>
      <pubDate>Tue, 31 Jan 2006 01:20:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691778#M18761</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-01-31T01:20:51Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691779#M18762</link>
      <description>From MSA CLI Guide:&lt;BR /&gt;&lt;BR /&gt;Note: If more than one pair of drives are included in a RAID 1 array, the data is&lt;BR /&gt;striped across the first half of the drives in the array and then each drive is mirrored to a&lt;BR /&gt;drive in the remaining half of the drives for fault tolerance. This method is referred to as&lt;BR /&gt;RAID 1+0.</description>
      <pubDate>Tue, 31 Jan 2006 17:28:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691779#M18762</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-01-31T17:28:30Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691780#M18763</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Using the CLI to create a 1+0 LUN, will the MSA 1000 automatically mirror across enclosures if it can, or is the LUN set up based on the order of disks on the command line?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Adrian</description>
      <pubDate>Thu, 02 Mar 2006 10:42:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691780#M18763</guid>
      <dc:creator>Adrian Parker</dc:creator>
      <dc:date>2006-03-02T10:42:18Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691781#M18764</link>
      <description>It's my understanding that the MSA controller firmware is "smart" enough to use different enclosures on its own. I believe somebody from the MSA team said so some time ago here on ITRC, but I am not sure I could find the thread easily.</description>
      <pubDate>Thu, 02 Mar 2006 11:26:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691781#M18764</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-03-02T11:26:27Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691782#M18765</link>
      <description>ZungWon,&lt;BR /&gt;&lt;BR /&gt;Peace!&lt;BR /&gt;&lt;BR /&gt;1. 0+1 and 1+0 both give you the same capacity.&lt;BR /&gt;&lt;BR /&gt;2. As far as redundancy - 0+1 (stripe and mirror) will be able to handle more disk failures.&lt;BR /&gt;&lt;BR /&gt;3. Performance - it depends on where the RAIDing is done. If RAIDing is done at the host level, say using VxVM, and you are dealing with JBOD enclosures, one RAID scheme may be better than the other performance-wise.&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Mar 2006 16:16:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691782#M18765</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-03-02T16:16:51Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691783#M18766</link>
      <description>Nelson,&lt;BR /&gt;&lt;BR /&gt;Please recheck your second statement.  &lt;BR /&gt;&lt;BR /&gt;RAID 1+0 offers more redundancy than 0+1.&lt;BR /&gt;&lt;BR /&gt;RAID 0+1 will only handle a single disk failure in each mirrored set.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Mar 2006 09:27:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691783#M18766</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2006-03-03T09:27:56Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691784#M18767</link>
      <description>John, you are absolutely right. RAID 10 indeed is more redundant.&lt;BR /&gt;&lt;BR /&gt;Take 8 disks, carve 4 mirror sets (RAID 1), stripe (RAID 0) across these 4 mirror sets, and you've got RAID 10. You can lose 1 disk from each mirror set and your stripe stays intact.&lt;BR /&gt;&lt;BR /&gt;In VxVM, you have what are called layered volumes. You can have a stripe of stripes, a stripe of mirrors, a stripe of RAID 5s, etc., further increasing the reliability and scalability of storage.</description>
      <pubDate>Fri, 03 Mar 2006 09:36:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691784#M18767</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-03-03T09:36:19Z</dc:date>
    </item>
    <item>
      <title>Re: raid 0+1</title>
      <link>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691785#M18768</link>
      <description>Both 1+0 and 0+1 can handle the same maximum number of disk failures (for instance, a second disk can fail in the already-failed half of a 0+1 mirror set). But the probability of array failure from subsequent HDD errors is very different (roughly 1/2 in RAID 0+1 vs. 1/number of mirrors in RAID 1+0).&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Mar 2006 09:54:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/raid-0-1/m-p/3691785#M18768</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-03-03T09:54:40Z</dc:date>
    </item>
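Basil's odds can be made exact for a small example. The sketch below assumes a hypothetical 8-drive array (two 4-drive halves, 4 mirrored pairs in the 1+0 case) in which one drive has already failed and a second, uniformly random drive then fails; the exact fatal fractions come out to 4/7 and 1/7, close to Basil's rough 1/2 vs. 1-in-several figures.

```python
from fractions import Fraction

# Hypothetical 8-drive array; one drive has already failed,
# leaving n - 1 = 7 candidates for the second failure.
n = 8

# RAID 0+1: the second failure is fatal when it hits the intact
# 4-drive stripe, i.e. 4 of the 7 remaining drives.
p_fatal_01 = Fraction(4, n - 1)

# RAID 1+0: the second failure is fatal only when it hits the single
# mirror partner of the already-failed drive, i.e. 1 of 7.
p_fatal_10 = Fraction(1, n - 1)

print(p_fatal_01, p_fatal_10)  # prints: 4/7 1/7
```

As the array grows, the 0+1 fatal fraction approaches 1/2 (half the surviving drives sit in the intact stripe) while the 1+0 fraction shrinks toward zero, which is the asymmetry the thread keeps circling back to.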
  </channel>
</rss>

