<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Serious Enclosure (stack split brain) / Enclosure loss problem in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/serious-enclosure-stack-split-brain-enclosure-loss-problem/m-p/2302161#M33320</link>
    <description>&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Bela was having a network issue and needed some advice:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;****************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We detected a serious problem: all external network connections from the blades were lost when &lt;BR /&gt;ENC1/VC1 and ENC2/VC2 were simulated as broken (unplugged), causing the loss of all VC stacking links between the enclosures. &lt;BR /&gt;We get the same problem when the 1st enclosure loses all power. &lt;BR /&gt;&lt;BR /&gt;In both cases, all of the still-running blades lost their external uplinks. &lt;BR /&gt;&lt;BR /&gt;Please give us an urgent answer, as we currently do not know whether multi-enclosure VC domains have a bug, whether this is standard split-brain &lt;BR /&gt;prevention behaviour, or whether the multi-enclosure configuration was not designed for deployment in separate server rooms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*****************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Vincent replied after looking at their configuration:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*********************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you power off Enc1/VC1 AND Enc2/VC2, you’re powering off one end of each of the 2 stacking links in the drawing, so of course you’re losing both stacking links. If you want to protect against such a double failure, you’d need to add more stacking links.&lt;/P&gt;
&lt;P&gt;As to server communication to the outside, we’d need to know how the servers are configured, and whether their profiles have connections to both uplink sets. The output of a VC &amp;quot;show all&amp;quot; could help.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;**************************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Lajos was also involved and replied:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*******************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi Vincent,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There is good news. The VC firmware upgrade to 3.10 solved the uplink failure problem.&lt;/P&gt;
&lt;P&gt;Now the customer can power off any enclosure, and network traffic on the surviving blade servers is not impacted by the loss of the stacking links.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*******************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Always good news when things are finally working!&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
    <pubDate>Sat, 25 Sep 2010 19:28:56 GMT</pubDate>
    <dc:creator>chuckk281</dc:creator>
    <dc:date>2010-09-25T19:28:56Z</dc:date>
    <item>
      <title>Serious Enclosure (stack split brain) / Enclosure loss problem</title>
      <link>https://community.hpe.com/t5/bladesystem-general/serious-enclosure-stack-split-brain-enclosure-loss-problem/m-p/2302161#M33320</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Bela was having a network issue and needed some advice:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;****************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We detected a serious problem: all external network connections from the blades were lost when &lt;BR /&gt;ENC1/VC1 and ENC2/VC2 were simulated as broken (unplugged), causing the loss of all VC stacking links between the enclosures. &lt;BR /&gt;We get the same problem when the 1st enclosure loses all power. &lt;BR /&gt;&lt;BR /&gt;In both cases, all of the still-running blades lost their external uplinks. &lt;BR /&gt;&lt;BR /&gt;Please give us an urgent answer, as we currently do not know whether multi-enclosure VC domains have a bug, whether this is standard split-brain &lt;BR /&gt;prevention behaviour, or whether the multi-enclosure configuration was not designed for deployment in separate server rooms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*****************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#000000"&gt;Vincent replied after looking at their configuration:&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*********************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you power off Enc1/VC1 AND Enc2/VC2, you’re powering off one end of each of the 2 stacking links in the drawing, so of course you’re losing both stacking links. If you want to protect against such a double failure, you’d need to add more stacking links.&lt;/P&gt;
&lt;P&gt;As to server communication to the outside, we’d need to know how the servers are configured, and whether their profiles have connections to both uplink sets. The output of a VC &amp;quot;show all&amp;quot; could help.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;**************************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Lajos was also involved and replied:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*******************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi Vincent,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;There is good news. The VC firmware upgrade to 3.10 solved the uplink failure problem.&lt;/P&gt;
&lt;P&gt;Now the customer can power off any enclosure, and network traffic on the surviving blade servers is not impacted by the loss of the stacking links.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*******************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;FONT color="#000000"&gt;Always good news when things are finally working!&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 25 Sep 2010 19:28:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/serious-enclosure-stack-split-brain-enclosure-loss-problem/m-p/2302161#M33320</guid>
      <dc:creator>chuckk281</dc:creator>
      <dc:date>2010-09-25T19:28:56Z</dc:date>
    </item>
  </channel>
</rss>

