<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: High Availability and Blades in ProLiant Servers (ML,DL,SL)</title>
    <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478115#M39115</link>
    <description>The blades are very reliable.  We have 9 enclosures with BL20ps and BL40ps.  We are about to put a BL30p in production and are testing a BL35p.  There is a learning curve in getting accustomed to them - they are different!&lt;BR /&gt;&lt;BR /&gt;Why would you suspect that a failure of the ILO bus bar would stop all the blades from booting?&lt;BR /&gt;&lt;BR /&gt;The PXE interface, if you are using this to boot, would generally be used.&lt;BR /&gt;&lt;BR /&gt;If your ILO link on the enhanced backplane failed, you would still have functional servers.&lt;BR /&gt;&lt;BR /&gt;You could always go with a standard enclosure that doesn't have a common ILO.&lt;BR /&gt;&lt;BR /&gt;Jason</description>
    <pubDate>Thu, 03 Feb 2005 22:06:01 GMT</pubDate>
    <dc:creator>Jason Menear_1</dc:creator>
    <dc:date>2005-02-03T22:06:01Z</dc:date>
    <item>
      <title>High Availability and Blades</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478114#M39114</link>
      <description>Hi&lt;BR /&gt;Anyone have any info on the availability of Blade Systems?&lt;BR /&gt;&lt;BR /&gt;The dual-power chassis seems nice, but I'm concerned about the common ILO bus bar that cuts across all Blades and might stop the whole lot from booting if it fails.&lt;BR /&gt;&lt;BR /&gt;Any other views?</description>
      <pubDate>Thu, 03 Feb 2005 16:55:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478114#M39114</guid>
      <dc:creator>Adrian Ogden</dc:creator>
      <dc:date>2005-02-03T16:55:01Z</dc:date>
    </item>
    <item>
      <title>Re: High Availability and Blades</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478115#M39115</link>
      <description>The blades are very reliable.  We have 9 enclosures with BL20ps and BL40ps.  We are about to put a BL30p in production and are testing a BL35p.  There is a learning curve in getting accustomed to them - they are different!&lt;BR /&gt;&lt;BR /&gt;Why would you suspect that a failure of the ILO bus bar would stop all the blades from booting?&lt;BR /&gt;&lt;BR /&gt;The PXE interface, if you are using this to boot, would generally be used.&lt;BR /&gt;&lt;BR /&gt;If your ILO link on the enhanced backplane failed, you would still have functional servers.&lt;BR /&gt;&lt;BR /&gt;You could always go with a standard enclosure that doesn't have a common ILO.&lt;BR /&gt;&lt;BR /&gt;Jason</description>
      <pubDate>Thu, 03 Feb 2005 22:06:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478115#M39115</guid>
      <dc:creator>Jason Menear_1</dc:creator>
      <dc:date>2005-02-03T22:06:01Z</dc:date>
    </item>
    <item>
      <title>Re: High Availability and Blades</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478116#M39116</link>
      <description>I was told by HP that without the ILO bus bar they won't boot. But I don't have a Blade system to try it and understand what the real implications are. Also, it appears to be a component that interconnects all the blades electrically, so a fault on one blade could interfere with the operation of the others unless good electrical isolation is used.&lt;BR /&gt;&lt;BR /&gt;In terms of general reliability, I don't see that Blades would be any more reliable than 1U servers, for example. It's the small high-speed fans that are the weakest link in these systems; even with redundancy, it's not necessarily the best solution for very high availability environments.&lt;BR /&gt;&lt;BR /&gt;Thanks for your feedback</description>
      <pubDate>Fri, 04 Feb 2005 04:22:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478116#M39116</guid>
      <dc:creator>Adrian Ogden</dc:creator>
      <dc:date>2005-02-04T04:22:09Z</dc:date>
    </item>
    <item>
      <title>Re: High Availability and Blades</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478117#M39117</link>
      <description>If the server management module fails, then a blade that is plugged in will not automatically power up. At that point the blade can be forced to start using the manual override feature (holding down the power button for 5 seconds). A running blade will continue to run. Also, the server management module is hot-swappable.</description>
      <pubDate>Thu, 10 Feb 2005 16:56:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478117#M39117</guid>
      <dc:creator>Dan Bil</dc:creator>
      <dc:date>2005-02-10T16:56:55Z</dc:date>
    </item>
    <item>
      <title>Re: High Availability and Blades</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478118#M39118</link>
      <description>That's useful to know.&lt;BR /&gt;&lt;BR /&gt;The electrical isolation remains my last issue.</description>
      <pubDate>Fri, 11 Feb 2005 04:28:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/high-availability-and-blades/m-p/3478118#M39118</guid>
      <dc:creator>Adrian Ogden</dc:creator>
      <dc:date>2005-02-11T04:28:29Z</dc:date>
    </item>
  </channel>
</rss>

