<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>VC FlexFabric simple failover testing in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369365#M33108</link>
    <description>&lt;P&gt;I'm currently working on a new VC-FF implementation and have hit an unexpected brick wall when attempting to perform basic failover testing to prove to the end customer that the solution is resilient to the failure of a VC module.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The configuration is simple: one enclosure, one pair of VC-FF modules in bays 1&amp;amp;2, both running firmware 3.17.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the purpose of demonstrating failover I've started with a clean new domain and created a very simple configuration from scratch: one SUS with a single uplink port from each module (so active/passive), and a single network defined and presented to LOM 1a and 2a on several blades.&amp;nbsp; Windows 2008 R2 is installed on the blades and the NIC 1a/2a pair is teamed for NFT with the NCU.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Network connectivity works just as expected with both VC-FF modules online.&amp;nbsp; I can ping between blades within the VC domain and to/from hosts outside it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I power off either VC module, network connectivity breaks in all directions - I can't even ping between blades within the enclosure.&amp;nbsp; I have left it running for 10 minutes or more, and connectivity is not restored until the offline VC module is powered back up.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Whilst the module is down, the NCU shows the NIC ports failing over just as one would expect.&amp;nbsp; If I log onto VCM on the surviving module, the SUS uplink port has also failed over as expected.&amp;nbsp; The VCM alerts show the domain in a 'minor' degraded state.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As we are using the FlexHBAs for storage, I can also confirm that the Fibre Channel paths fail over as expected, so this only affects Ethernet.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have raised a call with HP Support but would appreciate any other thoughts.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards, Neil&lt;/P&gt;</description>
    <pubDate>Wed, 06 Apr 2011 09:33:03 GMT</pubDate>
    <dc:creator>neilburton</dc:creator>
    <dc:date>2011-04-06T09:33:03Z</dc:date>
    <item>
      <title>VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369365#M33108</link>
      <description>&lt;P&gt;I'm currently working on a new VC-FF implementation and have hit an unexpected brick wall when attempting to perform basic failover testing to prove to the end customer that the solution is resilient to the failure of a VC module.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The configuration is simple: one enclosure, one pair of VC-FF modules in bays 1&amp;amp;2, both running firmware 3.17.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the purpose of demonstrating failover I've started with a clean new domain and created a very simple configuration from scratch: one SUS with a single uplink port from each module (so active/passive), and a single network defined and presented to LOM 1a and 2a on several blades.&amp;nbsp; Windows 2008 R2 is installed on the blades and the NIC 1a/2a pair is teamed for NFT with the NCU.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Network connectivity works just as expected with both VC-FF modules online.&amp;nbsp; I can ping between blades within the VC domain and to/from hosts outside it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I power off either VC module, network connectivity breaks in all directions - I can't even ping between blades within the enclosure.&amp;nbsp; I have left it running for 10 minutes or more, and connectivity is not restored until the offline VC module is powered back up.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Whilst the module is down, the NCU shows the NIC ports failing over just as one would expect.&amp;nbsp; If I log onto VCM on the surviving module, the SUS uplink port has also failed over as expected.&amp;nbsp; The VCM alerts show the domain in a 'minor' degraded state.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As we are using the FlexHBAs for storage, I can also confirm that the Fibre Channel paths fail over as expected, so this only affects Ethernet.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have raised a call with HP Support but would appreciate any other thoughts.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards, Neil&lt;/P&gt;</description>
      <pubDate>Wed, 06 Apr 2011 09:33:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369365#M33108</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-06T09:33:03Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369521#M33109</link>
      <description>&lt;P&gt;I have asked one of our experts for advice.&lt;/P&gt;</description>
      <pubDate>Wed, 06 Apr 2011 18:54:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369521#M33109</guid>
      <dc:creator>chuckk281</dc:creator>
      <dc:date>2011-04-06T18:54:06Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369597#M33110</link>
      <description>&lt;P&gt;Please provide a more detailed configuration of your VC network, as well as how the networks are assigned to the NICs, as this certainly is not correct.&amp;nbsp; Also, with both modules up, ping the router, then fail over from the active uplink to the standby uplink and ping again (or run ping -t to the router); make sure you can reach the router through either uplink.&amp;nbsp; Also, make sure Private Networks is NOT selected.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any additional info would be helpful.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Steve....&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2011 00:31:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369597#M33110</guid>
      <dc:creator>Stevem</dc:creator>
      <dc:date>2011-04-07T00:31:04Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369599#M33111</link>
      <description>Also, screen shots would be helpful.</description>
      <pubDate>Thu, 07 Apr 2011 00:31:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369599#M33111</guid>
      <dc:creator>Stevem</dc:creator>
      <dc:date>2011-04-07T00:31:25Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369863#M33112</link>
      <description>&lt;P&gt;Guys, thanks for the replies.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;First of all, I should say that the FlexFabric environment concerned is on a customer site and I'm not there at the moment; however, I can ask the guys down there to run tests.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a similar type of config (with respect to SUS / network assignment) running on my own Flex-10 kit and the same failover tests work fine for me.&amp;nbsp; For example, if I perform a module reset in OA:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) on my (Flex-10) kit I lose no more than one or two pings to blades whilst failover takes place, if I reset the module carrying the active SUS uplink.&amp;nbsp; If I reset the module carrying the standby SUS uplink I see no pings lost at all.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2) on the customer (FF) kit, if I reset either module, all pings are dropped for approximately 70 seconds until the reset module becomes available again.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The only difference between my deployment and the customer's (other than the fact that it's Flex-10 vs FF) is that I am pinging ESXi hosts and the customer is pinging Windows 2008 R2 hosts.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have asked them to build some ESXi blades meanwhile so we can determine if this is host-OS specific - if it is, then this points towards an issue with Windows / the HP NIC teaming driver rather than VC itself.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I asked the customer to perform a NIC failover by changing the NCU NFT preference yesterday, and although a handful of pings were dropped in the process (more than I would expect), traffic could be made to pass through both modules.&amp;nbsp; We could also fail the SUS uplinks between the modules.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Interestingly, if we power off both VC modules and then bring one of them online, everything works normally.&amp;nbsp; It's only when we are running with two modules - and we remove one of them - that the whole thing crashes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To confirm the NIC / network configuration: we have one shared uplink set with active/standby links comprising a 4x10Gbps LACP port group from each module.&amp;nbsp; We have defined one Ethernet network, associated it with the SUS, and created a simple server profile with 2x Ethernet ports and 2x FCoE ports.&amp;nbsp; The Ethernet ports (1a / 2a) are associated with the single Ethernet network.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As stated, connectivity works exactly as expected, both between blades and in/out of the VC domain (pinging to/from the gateway, for example), and we can fail over server NICs and SUS uplinks to prove that both modules are passing traffic.&amp;nbsp; The minute we drop a module, ALL communication fails.&amp;nbsp; The same test on my separate Flex-10 environment results in seamless failover.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2011 09:12:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369863#M33112</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-07T09:12:53Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369869#M33113</link>
      <description>Steve - Private Networks is definitely not selected. Also, a SUS uplink failover (invoked by disabling upstream switch ports) is seamless - traffic passes over both links and only one ping is lost during uplink failover.</description>
      <pubDate>Thu, 07 Apr 2011 09:10:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369869#M33113</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-07T09:10:51Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369871#M33114</link>
      <description>&lt;P&gt;I have also raised the matter with HP Support but the incident is still being escalated at the moment - I don't appear to have got through to anyone who really understands the problem.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2011 09:13:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369871#M33114</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-07T09:13:40Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369899#M33115</link>
      <description>&lt;P&gt;OK, an interesting update - this connectivity failure is only affecting Windows blades.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We've done exactly the same failover test on a few blades running ESXi and failover works as expected.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So it must be an issue affecting the Windows FlexNIC driver / Network Configuration Utility rather than a Virtual Connect problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Both the NIC driver and teaming components were installed from the recently released PSP 8.70, so they are up to date.&amp;nbsp; I am just checking the FlexNIC firmware levels.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2011 10:23:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369899#M33115</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-07T10:23:45Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369947#M33116</link>
      <description>&lt;P&gt;Guys, we've solved the problem - it was a driver/firmware issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I can't confirm the previous versions exactly, but I had been assured the blades had been updated with Firmware DVD 9.20 and PSP 8.70.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I identified the following standalone firmware and driver packages and requested that they be installed on the Windows blades - this has resolved the failover problem entirely.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Combined Windows (FCoE and NIC) Driver Kit - F:2.33.008/N:2.102.517.0 (15 Feb 2011)&lt;/LI&gt;&lt;LI&gt;LOM firmware image for offline update - 2.102.517.703 (23 Feb 2011)&lt;/LI&gt;&lt;LI&gt;BIOS - System ROM - 2010.12.20 (B) (25 Feb 2011)&lt;/LI&gt;&lt;LI&gt;Firmware - Lights-Out Management - 1.20 (5 Apr 2011)&lt;/LI&gt;&lt;LI&gt;OneCommand Manager Application Kit - 5.0.80.5 (13 Dec 2010)&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Thu, 07 Apr 2011 14:04:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369947#M33116</guid>
      <dc:creator>neilburton</dc:creator>
      <dc:date>2011-04-07T14:04:29Z</dc:date>
    </item>
    <item>
      <title>Re: VC FlexFabric simple failover testing</title>
      <link>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369981#M33117</link>
      <description>&lt;P&gt;Great to hear. Just a lot of work.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2011 16:15:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/vc-flexfabric-simple-failover-testing/m-p/2369981#M33117</guid>
      <dc:creator>chuckk281</dc:creator>
      <dc:date>2011-04-07T16:15:17Z</dc:date>
    </item>
  </channel>
</rss>

