<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Rehat AS3 Update 6 Cluster suite in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714156#M21422</link>
    <description>I suggest you re-check the cluster configuration against the RH doc &lt;A href="http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/ch-software.html" target="_blank"&gt;http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/ch-software.html&lt;/A&gt;&lt;BR /&gt;(Chapter 3)&lt;BR /&gt;Rgds,&lt;BR /&gt;Vitaly</description>
    <pubDate>Sun, 22 Jan 2006 07:03:47 GMT</pubDate>
    <dc:creator>Vitaly Karasik_1</dc:creator>
    <dc:date>2006-01-22T07:03:47Z</dc:date>
    <item>
      <title>Rehat AS3 Update 6 Cluster suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714153#M21419</link>
      <description>I am running the Redhat cluster suite on 2 DL380 servers.  When running clustat, one member remains fairly stable but the second member switches from active to inactive every 5 seconds or so.  I have bonded my two NICs together and enabled 802.1q trunking, so my cluster traffic is on bond0.13.  I can always ping each cluster member.  This is the error I get in the messages file:&lt;BR /&gt;&lt;BR /&gt;Jan 20 15:49:56 ralph clusvcmgrd[4311]: &amp;lt;INFO&amp;gt; State change: huey-c UP &lt;BR /&gt;Jan 20 15:49:57 ralph clumembd[4144]: &amp;lt;NOTICE&amp;gt; Member huey-c DOWN &lt;BR /&gt;Jan 20 15:49:58 ralph clumembd[4144]: &amp;lt;INFO&amp;gt; Membership View #7350:0x00000001 &lt;BR /&gt;Jan 20 15:49:59 ralph cluquorumd[4119]: &amp;lt;WARNING&amp;gt; --&amp;gt; Commencing STONITH &amp;lt;-- &lt;BR /&gt;Jan 20 15:49:59 ralph cluquorumd[4119]: &amp;lt;WARNING&amp;gt; STONITH: Falsely claiming that huey-c has been fenced &lt;BR /&gt;Jan 20 15:49:59 ralph cluquorumd[4119]: &amp;lt;CRIT&amp;gt; STONITH: Data integrity may be compromised! &lt;BR /&gt;Jan 20 15:50:00 ralph clusvcmgrd[4311]: &amp;lt;INFO&amp;gt; Quorum Event: View #12657 0x00000001 &lt;BR /&gt;Jan 20 15:50:00 ralph clusvcmgrd[4311]: &amp;lt;INFO&amp;gt; State change: huey-c DOWN &lt;BR /&gt;Jan 20 15:50:08 ralph clumembd[4144]: &amp;lt;NOTICE&amp;gt; Member huey-c UP &lt;BR /&gt;Jan 20 15:50:12 ralph clumembd[4144]: &amp;lt;NOTICE&amp;gt; Member huey-c DOWN</description>
      <pubDate>Fri, 20 Jan 2006 10:50:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714153#M21419</guid>
      <dc:creator>Mike Hedderly</dc:creator>
      <dc:date>2006-01-20T10:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: Rehat AS3 Update 6 Cluster suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714154#M21420</link>
      <description>Some further information: I am running clumanager-1.2.28-1 and redhat-config-cluster-1.0.8-1.&lt;BR /&gt;&lt;BR /&gt;I do not have any power switches, and the external disks are on an MSA1000 via fibre channel.</description>
      <pubDate>Fri, 20 Jan 2006 11:04:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714154#M21420</guid>
      <dc:creator>Mike Hedderly</dc:creator>
      <dc:date>2006-01-20T11:04:56Z</dc:date>
    </item>
    <item>
      <title>Re: Rehat AS3 Update 6 Cluster suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714155#M21421</link>
      <description>Shalom Mike,&lt;BR /&gt;&lt;BR /&gt;I don't think you've fully configured the cluster.&lt;BR /&gt;&lt;BR /&gt;STONITH: Falsely claiming that&lt;BR /&gt;huey-c has been fenced &lt;BR /&gt;&lt;BR /&gt;Shoot&lt;BR /&gt;The &lt;BR /&gt;Other&lt;BR /&gt;Node&lt;BR /&gt;In &lt;BR /&gt;The&lt;BR /&gt;Head&lt;BR /&gt;&lt;BR /&gt;It's trying to shut down the other node because it thinks it's down or there is a risk of data corruption.&lt;BR /&gt;&lt;BR /&gt;Checklist:&lt;BR /&gt;MSA1000 firmware up to date.&lt;BR /&gt;sansurfer package on both servers to check the state of shared storage.&lt;BR /&gt;Shared storage configured so the sd# devices are the same on both nodes.&lt;BR /&gt;Firmware on the QLogic cards the same on all cards and all servers, and reasonably up to date.&lt;BR /&gt;Cluster configuration files.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Sat, 21 Jan 2006 16:29:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714155#M21421</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-01-21T16:29:35Z</dc:date>
    </item>
    <item>
      <title>Re: Rehat AS3 Update 6 Cluster suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714156#M21422</link>
      <description>I suggest you re-check the cluster configuration against the RH doc &lt;A href="http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/ch-software.html" target="_blank"&gt;http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/cluster-suite/ch-software.html&lt;/A&gt;&lt;BR /&gt;(Chapter 3)&lt;BR /&gt;Rgds,&lt;BR /&gt;Vitaly</description>
      <pubDate>Sun, 22 Jan 2006 07:03:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714156#M21422</guid>
      <dc:creator>Vitaly Karasik_1</dc:creator>
      <dc:date>2006-01-22T07:03:47Z</dc:date>
    </item>
    <item>
      <title>Re: Rehat AS3 Update 6 Cluster suite</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714157#M21423</link>
      <description>Thanks for the advice, but we found the problem.  The STONITH errors were a red herring: this cluster has no power switches, so it's not possible to STONITH a node that the cluster perceives has changed to a "down" state.&lt;BR /&gt;&lt;BR /&gt;The cause of "huey" dropping in and out of the cluster every few seconds turned out to be a clash with another Redhat cluster elsewhere on the same network using the same 225.0.0.11 multicast address.  We changed the multicast address to be unique, reloaded the config, restarted the cluster and the problem went away.  The cluster is stable now.&lt;BR /&gt;&lt;BR /&gt;It would have been nice for Redhat to have documented this somewhere.  We only discovered what was going on after pinging the multicast address and seeing more DUP responses than we were expecting, from IP addresses belonging to the other Redhat cluster.</description>
      <pubDate>Sun, 22 Jan 2006 07:27:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rehat-as3-update-6-cluster-suite/m-p/3714157#M21423</guid>
      <dc:creator>John McNulty_2</dc:creator>
      <dc:date>2006-01-22T07:27:35Z</dc:date>
    </item>
  </channel>
</rss>