<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Broke It - Help! in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/broke-it-help/m-p/4696069#M1623</link>
    <description>OK, a bit of a drastic subject line, and fortunately it's an eval unit, but today we managed to do something we didn't expect.&lt;BR /&gt;&lt;BR /&gt;The setup: two P4500 12-disk nodes in a single cluster, connected to the same switch.&lt;BR /&gt;&lt;BR /&gt;We'd been pulling various cables and drives to simulate failure modes, and with the FOM installed and running we were blown away by the whole "5 pings lost and it's back" behaviour when you pull a node completely.&lt;BR /&gt;&lt;BR /&gt;However, we then pulled "Node 2" (we just pulled both NICs for speed) and the group IP was no longer pingable.&lt;BR /&gt;&lt;BR /&gt;On investigation, it seems that the manager on "Node 1" had somehow been stopped, so once we deliberately took out Node 2, all we had left running was Node 1 with no manager, plus the FOM.&lt;BR /&gt;&lt;BR /&gt;We just could not connect to Node 1, as (I think) you connect to a management group via the group IP, which was dead. So I was in a chicken-and-egg situation: (I think) I knew I needed to start the manager on Node 1, but I couldn't log onto the group IP to do so, because with two managers down there was no quorum.&lt;BR /&gt;&lt;BR /&gt;The solution was to reconnect the NICs on the second node so that quorum came back, and then start the manager on the first node - but in a full-on disaster we may not have that luxury.&lt;BR /&gt;&lt;BR /&gt;What have I missed?</description>
    <pubDate>Wed, 06 Oct 2010 16:24:01 GMT</pubDate>
    <dc:creator>Paul Hutchings</dc:creator>
    <dc:date>2010-10-06T16:24:01Z</dc:date>
    <item>
      <title>Broke It - Help!</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/broke-it-help/m-p/4696069#M1623</link>
      <description>OK, a bit of a drastic subject line, and fortunately it's an eval unit, but today we managed to do something we didn't expect.&lt;BR /&gt;&lt;BR /&gt;The setup: two P4500 12-disk nodes in a single cluster, connected to the same switch.&lt;BR /&gt;&lt;BR /&gt;We'd been pulling various cables and drives to simulate failure modes, and with the FOM installed and running we were blown away by the whole "5 pings lost and it's back" behaviour when you pull a node completely.&lt;BR /&gt;&lt;BR /&gt;However, we then pulled "Node 2" (we just pulled both NICs for speed) and the group IP was no longer pingable.&lt;BR /&gt;&lt;BR /&gt;On investigation, it seems that the manager on "Node 1" had somehow been stopped, so once we deliberately took out Node 2, all we had left running was Node 1 with no manager, plus the FOM.&lt;BR /&gt;&lt;BR /&gt;We just could not connect to Node 1, as (I think) you connect to a management group via the group IP, which was dead. So I was in a chicken-and-egg situation: (I think) I knew I needed to start the manager on Node 1, but I couldn't log onto the group IP to do so, because with two managers down there was no quorum.&lt;BR /&gt;&lt;BR /&gt;The solution was to reconnect the NICs on the second node so that quorum came back, and then start the manager on the first node - but in a full-on disaster we may not have that luxury.&lt;BR /&gt;&lt;BR /&gt;What have I missed?</description>
      <pubDate>Wed, 06 Oct 2010 16:24:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/broke-it-help/m-p/4696069#M1623</guid>
      <dc:creator>Paul Hutchings</dc:creator>
      <dc:date>2010-10-06T16:24:01Z</dc:date>
    </item>
  </channel>
</rss>