<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Question understanding optimal configuration for Managers in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5710883#M5406</link>
    <description>&lt;P&gt;Hi Ralf,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you run just 1 manager on each site, you don't have this problem, because then your quorum is 2 (two site managers plus the FOM makes 3 managers in total). When a node without a manager fails, this doesn't affect the quorum. So one site can fail completely, and the node without the manager on the remaining site can also fail, and you still have a quorum of 2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="UserName lia-user-name"&gt;&lt;SPAN class="login-bold"&gt;&lt;SPAN class="UserName lia-user-name"&gt;Kind regards,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Joris Vliegen&lt;/P&gt;&lt;P&gt;______________________________________________&lt;/P&gt;&lt;P&gt;If my post was useful, click on my KUDOS, the "White Star".&lt;/P&gt;</description>
    <pubDate>Wed, 04 Jul 2012 13:42:22 GMT</pubDate>
    <dc:creator>Joris Vliegen</dc:creator>
    <dc:date>2012-07-04T13:42:22Z</dc:date>
    <item>
      <title>Question understanding optimal configuration for Managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5710819#M5405</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have a question concerning the optimal configuration for installing Managers on a system:&lt;/P&gt;&lt;P&gt;Suppose I have a Multi-Site SAN system with 4 nodes, split across two DCs. On each node I have a Manager running. Additionally, I have a Failover Manager running in a (virtual) third site. Volumes are created in a Network RAID 10+2 configuration.&lt;/P&gt;&lt;P&gt;So, in total I have 5 Managers running, and quorum is 3.&lt;/P&gt;&lt;P&gt;For a 10+2 configuration, it is said that we have site protection and fault tolerance in the remaining site, which means that one of the two nodes in the remaining site could additionally fail while I still have access to my data.&lt;/P&gt;&lt;P&gt;So, if one of the sites goes down, I lose 2 Managers. No problem, quorum is still fulfilled.&lt;/P&gt;&lt;P&gt;If another node in the remaining site now goes down, I lose another Manager. Only two Managers remain, so no quorum =&amp;gt; access to data is lost!&lt;/P&gt;&lt;P&gt;So how is "fault tolerance in the remaining site" given?&lt;/P&gt;&lt;P&gt;How can I set up for optimal availability in a Multi-Site SAN environment with 4 nodes?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Ralf&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jul 2012 12:46:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5710819#M5405</guid>
      <dc:creator>Ralf Gerresheim</dc:creator>
      <dc:date>2012-07-04T12:46:14Z</dc:date>
    </item>
    <item>
      <title>Re: Question understanding optimal configuration for Managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5710883#M5406</link>
      <description>&lt;P&gt;Hi Ralf,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you run just 1 manager on each site, you don't have this problem, because then your quorum is 2 (two site managers plus the FOM makes 3 managers in total). When a node without a manager fails, this doesn't affect the quorum. So one site can fail completely, and the node without the manager on the remaining site can also fail, and you still have a quorum of 2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="UserName lia-user-name"&gt;&lt;SPAN class="login-bold"&gt;&lt;SPAN class="UserName lia-user-name"&gt;Kind regards,&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Joris Vliegen&lt;/P&gt;&lt;P&gt;______________________________________________&lt;/P&gt;&lt;P&gt;If my post was useful, click on my KUDOS, the "White Star".&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jul 2012 13:42:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5710883#M5406</guid>
      <dc:creator>Joris Vliegen</dc:creator>
      <dc:date>2012-07-04T13:42:22Z</dc:date>
    </item>
    <item>
      <title>Re: Question understanding optimal configuration for Managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5712161#M5409</link>
      <description>&lt;P&gt;Hi Joris,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your suggestions.&lt;/P&gt;&lt;P&gt;I tested this configuration, but it doesn't work completely:&lt;/P&gt;&lt;P&gt;If I lose one DC: no problem.&lt;/P&gt;&lt;P&gt;If I lose the storage node in the remaining DC that doesn't have the manager installed, I still have access via the last node.&lt;/P&gt;&lt;P&gt;But if I lose the storage node that has the manager installed, CMC tells me that I will lose the quorum and access to the data.&lt;/P&gt;&lt;P&gt;So I made a test: after DC1 goes down, I start the manager on the second storage node in the remaining DC.&lt;/P&gt;&lt;P&gt;After this, I expected I could lose either of the remaining two nodes. But CMC now tells me that I will lose the quorum regardless of which node I lose. (BTW: I ran these tests in a virtual environment with 4 VSAs installed, and 'lose' means that I shut down the nodes either via CMC or vCenter.) If I stop that manager again, I can lose its node and still have access to the data via the node that originally had the manager running.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you explain that behavior?&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jul 2012 09:50:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5712161#M5409</guid>
      <dc:creator>Ralf Gerresheim</dc:creator>
      <dc:date>2012-07-05T09:50:48Z</dc:date>
    </item>
    <item>
      <title>Re: Question understanding optimal configuration for Managers</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5712579#M5414</link>
      <description>&lt;P&gt;You will never be able to maintain quorum across two sites when the site with the tie-breaker goes down. This will always require some sort of manual failover.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The only way I could think of getting around this is somehow using another independent HA solution to host the FOM, so that the FOM seamlessly fails over to the 2nd site when the first one goes down. Maybe there is an option with VMware's FT, or maybe something with the new Windows Server 2012 shared-nothing HA solution.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The problem with a pure two-site solution and automation is: how do you really know the primary site is actually down, and not that you just lost communication with it? Split-brain is a real issue, and it is only really dealt with correctly if you have a 3rd site for the FOM.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jul 2012 14:00:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/question-understanding-optimal-configuration-for-managers/m-p/5712579#M5414</guid>
      <dc:creator>oikjn</dc:creator>
      <dc:date>2012-07-05T14:00:41Z</dc:date>
    </item>
  </channel>
</rss>