<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: 8 Node cluster with 5 managers in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7068006#M12184</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1994389"&gt;@aur&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Can you please let me know how you fixed this problem? It'll be helpful. Thanks&lt;/P&gt;</description>
    <pubDate>Wed, 30 Oct 2019 06:04:25 GMT</pubDate>
    <dc:creator>techin</dc:creator>
    <dc:date>2019-10-30T06:04:25Z</dc:date>
    <item>
      <title>8 Node cluster with 5 managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7067930#M12182</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;I'm trying to determine the theory behind how many nodes I can lose in my cluster and still have the data available. I currently have the following:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;8 nodes in a single cluster&lt;/P&gt;&lt;P&gt;5 managers in use&lt;/P&gt;&lt;P&gt;Volumes configured in Network RAID-10&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any ideas how many nodes I could lose in the cluster before the data/volumes go offline? I always thought that if 2 or more nodes fail, the volumes will go offline.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 29 Oct 2019 11:27:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7067930#M12182</guid>
      <dc:creator>aur</dc:creator>
      <dc:date>2019-10-29T11:27:16Z</dc:date>
    </item>
    <item>
      <title>Re: 8 Node cluster with 5 managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7067953#M12183</link>
      <description>&lt;P&gt;Don't worry, I've managed to work it out &lt;LI-EMOJI id="lia_slightly-smiling-face" title=":slightly_smiling_face:"&gt;&lt;/LI-EMOJI&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 29 Oct 2019 14:26:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7067953#M12183</guid>
      <dc:creator>aur</dc:creator>
      <dc:date>2019-10-29T14:26:55Z</dc:date>
    </item>
    <item>
      <title>Re: 8 Node cluster with 5 managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7068006#M12184</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1994389"&gt;@aur&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Can you please let me know how you fixed this problem? It'll be helpful. Thanks&lt;/P&gt;</description>
      <pubDate>Wed, 30 Oct 2019 06:04:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7068006#M12184</guid>
      <dc:creator>techin</dc:creator>
      <dc:date>2019-10-30T06:04:25Z</dc:date>
    </item>
    <item>
      <title>Re: 8 Node cluster with 5 managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7068027#M12186</link>
      <description>&lt;P&gt;No problem,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you have 8 nodes, all of the even-numbered nodes or all of the odd-numbered nodes can fail without issue. The problem arises when two adjacent nodes fail together, because in Network RAID-10 the two copies of each data block are stored on adjacent nodes in the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Nodes 1,3,5,7 can fail and the volumes will still be online.&lt;/P&gt;&lt;P&gt;Nodes 2,4,6,8 can fail and the volumes will still be online.&lt;/P&gt;&lt;P&gt;But if 1,2 or 3,4 or 4,5 etc. fail, the volumes will go offline.&lt;/P&gt;&lt;P&gt;So when the documentation states that half of the cluster can go offline, it is true, but it really depends on which nodes are offline (obviously this is Network RAID-10 and not Network RAID-10+1 / RAID-10+2).&lt;/P&gt;&lt;P&gt;Hope this helps someone out there.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Oct 2019 08:11:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/8-node-cluster-with-5-managers/m-p/7068027#M12186</guid>
      <dc:creator>aur</dc:creator>
      <dc:date>2019-10-30T08:11:22Z</dc:date>
    </item>
  </channel>
</rss>

