<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic rx8620 pci power module failure in Integrity Servers</title>
    <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497320#M5553</link>
    <description>If this fails, should the system run in a degraded state? Is this piece of hardware a single point of failure?&lt;BR /&gt;&lt;BR /&gt;If it should/can run in a degraded state, is there a configuration that needs to be put in place?&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 14 Sep 2009 21:45:59 GMT</pubDate>
    <dc:creator>Alex_R_1</dc:creator>
    <dc:date>2009-09-14T21:45:59Z</dc:date>
    <item>
      <title>rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497320#M5553</link>
      <description>If this fails, should the system run in a degraded state? Is this piece of hardware a single point of failure?&lt;BR /&gt;&lt;BR /&gt;If it should/can run in a degraded state, is there a configuration that needs to be put in place?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Sep 2009 21:45:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497320#M5553</guid>
      <dc:creator>Alex_R_1</dc:creator>
      <dc:date>2009-09-14T21:45:59Z</dc:date>
    </item>
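    <!--
      A minimal sketch of how one might check PCI power module status on an
      rx8620 from the Management Processor (MP). Menu text and command names
      vary by firmware revision, so verify against the local MP help; this is
      an illustration, not verbatim MP output.

        MP> sl          # show event logs; a failed PCI power module
                        # should appear here as an error event
        MP> cm          # enter the command menu
        MP:CM> ps       # power status: lists bulk power supplies and
                        # PCI power modules with their OK/failed state
    -->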
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497321#M5554</link>
      <description>Look at the quickspecs and read the "AC/DC Power" section, "PCI Power Supplies" paragraph. It appears to be a SPoF for the I/O bay. &lt;A href="http://h18000.www1.hp.com/products/quickspecs/11849_div/11849_div.HTML#Technical%20Specifications" target="_blank"&gt;http://h18000.www1.hp.com/products/quickspecs/11849_div/11849_div.HTML#Technical%20Specifications&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Sep 2009 23:33:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497321#M5554</guid>
      <dc:creator>TTr</dc:creator>
      <dc:date>2009-09-14T23:33:08Z</dc:date>
    </item>
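    <!--
      To see which I/O chassis (and therefore which PCI power domain) each
      nPartition depends on, the nPartition tools can be queried from HP-UX.
      A sketch assuming the stock parstatus(1) utility; the exact flags
      should be checked against the local man page.

        # parstatus -C        # cell status and partition assignment
        # parstatus -I        # I/O chassis inventory and usage
        # parstatus -p 0 -V   # verbose view of nPartition 0, incl. I/O
    -->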
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497322#M5555</link>
      <description>Thanks, that is exactly what I needed. Looks like I need to look into having Serviceguard not only fail over between nodes but also between nPars.</description>
      <pubDate>Tue, 15 Sep 2009 15:01:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497322#M5555</guid>
      <dc:creator>Alex_R_1</dc:creator>
      <dc:date>2009-09-15T15:01:59Z</dc:date>
    </item>
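    <!--
      A minimal sketch of a legacy Serviceguard failover package whose
      adoptive node sits in the other nPar, as discussed above. The node
      names (npar0, npar1), package name, and script path are hypothetical.

        PACKAGE_NAME        pkg_app1
        PACKAGE_TYPE        FAILOVER
        FAILOVER_POLICY     CONFIGURED_NODE
        FAILBACK_POLICY     MANUAL
        NODE_NAME           npar0    # primary node, in nPartition 0
        NODE_NAME           npar1    # adoptive node, in nPartition 1
        RUN_SCRIPT          /etc/cmcluster/pkg_app1/control.sh
        HALT_SCRIPT         /etc/cmcluster/pkg_app1/control.sh

      Validate and apply with the standard Serviceguard tools:

        # cmcheckconf -P pkg_app1.conf
        # cmapplyconf -P pkg_app1.conf
    -->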
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497323#M5556</link>
      <description>Serviceguard between nPars is a bad idea ... consider using independent systems.</description>
      <pubDate>Tue, 15 Sep 2009 17:36:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497323#M5556</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-15T17:36:29Z</dc:date>
    </item>
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497324#M5557</link>
      <description>Why do you say that? Do you have any reading material to support the statement? Have you had real-life experience with it?</description>
      <pubDate>Tue, 15 Sep 2009 19:54:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497324#M5557</guid>
      <dc:creator>Alex_R_1</dc:creator>
      <dc:date>2009-09-15T19:54:53Z</dc:date>
    </item>
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497325#M5558</link>
      <description>A cluster in a single cabinet is not highly available.&lt;BR /&gt;&lt;BR /&gt;Think about planned or unplanned events like hardware failures, maintenance, air conditioning and power loss, etc.</description>
      <pubDate>Wed, 16 Sep 2009 06:28:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497325#M5558</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-16T06:28:00Z</dc:date>
    </item>
    <item>
      <title>Re: rx8620 pci power module failure</title>
      <link>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497326#M5559</link>
      <description>I agree with your statement. The cluster will not be removed. But the system should have failed into a degraded state.&lt;BR /&gt;&lt;BR /&gt;Serviceguard did fail over properly to the other node.</description>
      <pubDate>Wed, 16 Sep 2009 15:15:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/integrity-servers/rx8620-pci-power-module-failure/m-p/4497326#M5559</guid>
      <dc:creator>Alex_R_1</dc:creator>
      <dc:date>2009-09-16T15:15:25Z</dc:date>
    </item>
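    <!--
      To confirm where the package landed after the failover described
      above, cmviewcl is the usual check. A sketch; the package name used
      earlier (pkg_app1) is hypothetical.

        # cmviewcl -v
        # Expected: pkg_app1 reports STATUS up on the adoptive node, while
        # the node that lost its PCI power module reports down.
    -->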
  </channel>
</rss>

