<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: c7000 bl680c hang in ProLiant Servers (ML,DL,SL)</title>
    <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229621#M121752</link>
    <description>Through HP and VMware support we found that the Cisco switch (MDS 9124) failed due to Cisco field notice CSCsu80534. In our case, though, it did not reboot but stayed down. Prior to this, the ESX hosts had dropped their paths to the SAN via the second data switch for a reason as yet unknown, so when this switch went down the ESX hosts had no access to storage. This caused all VMware guests to fail until the ESX hosts were rebooted.</description>
    <pubDate>Mon, 26 Apr 2010 10:25:07 GMT</pubDate>
    <dc:creator>Mel Nugent</dc:creator>
    <dc:date>2010-04-26T10:25:07Z</dc:date>
    <item>
      <title>c7000 bl680c hang</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229619#M121750</link>
      <description>Two c7000 blade chassis, each with 4x BL680c blades running ESX 3.5.&lt;BR /&gt;All 6 power supplies were replaced in both chassis on Tuesday. All 4 ESX servers in one of the chassis hung on Wednesday. Coincidence?&lt;BR /&gt;I see this on the console on all 4 ESX servers in the chassis that hasn't had a problem: "bmc returned incorrect response, expected netfn 5 cmd 27, got netfn 5 cmd 35". The same message also appears in the logs on the other chassis that did reboot. Starting to investigate whether this is an HP or a VMware problem.&lt;BR /&gt;&lt;BR /&gt;Has anyone had any problems after replacing power supplies?&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Mar 2010 13:12:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229619#M121750</guid>
      <dc:creator>Mel Nugent</dc:creator>
      <dc:date>2010-03-11T13:12:17Z</dc:date>
    </item>
    <item>
      <title>Re: c7000 bl680c hang</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229620#M121751</link>
      <description>Sorry, I posted this into ML/DL by mistake; maybe the mods can move it to BladeSystem. Thanks.</description>
      <pubDate>Thu, 11 Mar 2010 13:16:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229620#M121751</guid>
      <dc:creator>Mel Nugent</dc:creator>
      <dc:date>2010-03-11T13:16:16Z</dc:date>
    </item>
    <item>
      <title>Re: c7000 bl680c hang</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229621#M121752</link>
      <description>Through HP and VMware support we found that the Cisco switch (MDS 9124) failed due to Cisco field notice CSCsu80534. In our case, though, it did not reboot but stayed down. Prior to this, the ESX hosts had dropped their paths to the SAN via the second data switch for a reason as yet unknown, so when this switch went down the ESX hosts had no access to storage. This caused all VMware guests to fail until the ESX hosts were rebooted.</description>
      <pubDate>Mon, 26 Apr 2010 10:25:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/c7000-bl680c-hang/m-p/5229621#M121752</guid>
      <dc:creator>Mel Nugent</dc:creator>
      <dc:date>2010-04-26T10:25:07Z</dc:date>
    </item>
  </channel>
</rss>

