<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Raid 5 Question ? in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791417#M2733</link>
    <description>Sent you the email.&lt;BR /&gt;Hope it helps.</description>
    <pubDate>Thu, 02 Jun 2011 07:47:30 GMT</pubDate>
    <dc:creator>Jitun</dc:creator>
    <dc:date>2011-06-02T07:47:30Z</dc:date>
    <item>
      <title>Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791408#M2724</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Does anyone know how Network RAID 5 volumes work?&lt;BR /&gt;&lt;BR /&gt;In a 4-node cluster, is the provisioned size for one fully provisioned volume the same for a Network RAID 10 volume as for a Network RAID 5 (single parity) volume?&lt;BR /&gt;&lt;BR /&gt;I can't find any documentation about this.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Didier DANEL</description>
      <pubDate>Tue, 24 May 2011 07:49:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791408#M2724</guid>
      <dc:creator>Danel_1</dc:creator>
      <dc:date>2011-05-24T07:49:23Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791409#M2725</link>
      <description>I do not know exactly what you mean, but I suppose it is the following:&lt;BR /&gt;&lt;BR /&gt;With Network RAID 10 (formerly called replication level 2) on 4 nodes, you lose the capacity of 2 nodes to the RAID 10 replication.&lt;BR /&gt;&lt;BR /&gt;With Network RAID 5, 3 nodes are used to write data blocks, and the 4th node holds the parity block.&lt;BR /&gt;&lt;BR /&gt;So with RAID 5 you have more effective space available than with RAID 10: with RAID 5 you lose the disk space of 1 node, while with RAID 10 it is 50%.&lt;BR /&gt;&lt;BR /&gt;FYI, RAID 5 is possible from 4 nodes up, and RAID 6 is available starting from 6 nodes...</description>
      <pubDate>Wed, 25 May 2011 18:10:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791409#M2725</guid>
      <dc:creator>Bart_Heungens</dc:creator>
      <dc:date>2011-05-25T18:10:02Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791410#M2726</link>
      <description>Is the parity spread out across the nodes?</description>
      <pubDate>Thu, 26 May 2011 03:53:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791410#M2726</guid>
      <dc:creator>Johan Guldmyr</dc:creator>
      <dc:date>2011-05-26T03:53:27Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791411#M2727</link>
      <description>In fact, Network RAID 5 volumes (fully provisioned) on 4 nodes consume exactly the same space as a Network RAID 10 volume.&lt;BR /&gt;&lt;BR /&gt;I don't understand, because if 3 nodes are used for data blocks and one node for the parity block, only 25% of the space should be lost.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Didier D.</description>
      <pubDate>Thu, 26 May 2011 04:17:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791411#M2727</guid>
      <dc:creator>Danel_1</dc:creator>
      <dc:date>2011-05-26T04:17:18Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791412#M2728</link>
      <description>That is indeed the normal space usage: 25% lost with 4 nodes.&lt;BR /&gt;&lt;BR /&gt;Can you provide some screenshots?&lt;BR /&gt;&lt;BR /&gt;I assume you created a single-site cluster and not a multi-site one?</description>
      <pubDate>Thu, 26 May 2011 20:18:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791412#M2728</guid>
      <dc:creator>Bart_Heungens</dc:creator>
      <dc:date>2011-05-26T20:18:44Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791413#M2729</link>
      <description>Under SAN/iQ 9.0 you only require 3 nodes to implement Network RAID 5, and 5 nodes for Network RAID 6.&lt;BR /&gt;&lt;BR /&gt;I would recommend Network RAID 5 only if the volume's rate of change is 0-10% per day: mostly archival volumes, large file shares, and Remote Copy targets in DR sites — volumes that are mostly read.&lt;BR /&gt;&lt;BR /&gt;It delivers approximately 60% utilization of raw disk capacity, assuming disk RAID 5 within the storage nodes plus Network RAID 5 across nodes.&lt;BR /&gt;</description>
      <pubDate>Fri, 27 May 2011 02:21:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791413#M2729</guid>
      <dc:creator>Jitun</dc:creator>
      <dc:date>2011-05-27T02:21:53Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791414#M2730</link>
      <description>OK,&lt;BR /&gt;I will take screenshots ASAP.&lt;BR /&gt;Yes, the cluster is standard.&lt;BR /&gt;Do you have any documents or white papers about block distribution for RAID 5?&lt;BR /&gt;&lt;BR /&gt;Thank you&lt;BR /&gt;</description>
      <pubDate>Fri, 27 May 2011 04:26:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791414#M2730</guid>
      <dc:creator>Danel_1</dc:creator>
      <dc:date>2011-05-27T04:26:41Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791415#M2731</link>
      <description>I have some screenshots from the HP training material which I am unfortunately not allowed to share...&lt;BR /&gt;But it is simple:&lt;BR /&gt;RAID 5 is 3 data blocks and 1 parity block (parity blocks spread across the 4 nodes)&lt;BR /&gt;RAID 6 is 4 data blocks and 2 parity blocks (parity blocks spread across the 6 nodes)&lt;BR /&gt;&lt;BR /&gt;I'll see if I can find public documents with the drawings...</description>
      <pubDate>Fri, 27 May 2011 20:29:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791415#M2731</guid>
      <dc:creator>Bart_Heungens</dc:creator>
      <dc:date>2011-05-27T20:29:59Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791416#M2732</link>
      <description>Hi Bart,&lt;BR /&gt;&lt;BR /&gt;Can you send me the internal pointer?&lt;BR /&gt;didier.danel@hp.com&lt;BR /&gt;&lt;BR /&gt;Thank you</description>
      <pubDate>Sat, 28 May 2011 03:38:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791416#M2732</guid>
      <dc:creator>Danel_1</dc:creator>
      <dc:date>2011-05-28T03:38:49Z</dc:date>
    </item>
    <item>
      <title>Re: Raid 5 Question ?</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791417#M2733</link>
      <description>Sent you the email.&lt;BR /&gt;Hope it helps.</description>
      <pubDate>Thu, 02 Jun 2011 07:47:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/raid-5-question/m-p/4791417#M2733</guid>
      <dc:creator>Jitun</dc:creator>
      <dc:date>2011-06-02T07:47:30Z</dc:date>
    </item>
  </channel>
</rss>