<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: heartbeat status reporting down? in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413509#M56305</link>
    <description>Forum thread: heartbeat status reporting down? in Operating System - Linux</description>
    <pubDate>Tue, 05 May 2009 19:08:07 GMT</pubDate>
    <dc:creator>C Lamb</dc:creator>
    <dc:date>2009-05-05T19:08:07Z</dc:date>
    <item>
      <title>heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413506#M56302</link>
      <description>I haven't seen this in the forum, so I thought I'd ask. We have built several 2-node clusters and are seeing something strange: on several of our fully functional clusters, cmviewcl -v reports the status of both heartbeat interfaces as down. The clusters are fine. Why is this? Shouldn't they report as up? They do on some of the other clusters.</description>
      <pubDate>Tue, 05 May 2009 15:31:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413506#M56302</guid>
      <dc:creator>C Lamb</dc:creator>
      <dc:date>2009-05-05T15:31:30Z</dc:date>
    </item>
    <item>
      <title>Re: heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413507#M56303</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Network collisions/congestion can cause this condition.&lt;BR /&gt;&lt;BR /&gt;Could I see the actual cmviewcl -v output?&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 05 May 2009 19:03:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413507#M56303</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-05-05T19:03:39Z</dc:date>
    </item>
    <item>
      <title>Re: heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413508#M56304</link>
      <description>Here's the output, &lt;BR /&gt;[root@maildb01a maildb01]# cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;CLUSTER        STATUS       &lt;BR /&gt;maildb01       up           &lt;BR /&gt;  &lt;BR /&gt;  NODE           STATUS       STATE        &lt;BR /&gt;  maildb01a      up           running      &lt;BR /&gt;    &lt;BR /&gt;    Cluster_Lock_LUN:&lt;BR /&gt;    DEVICE                STATUS              &lt;BR /&gt;    /dev/sdj1             up                  &lt;BR /&gt;    &lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS           NAME         &lt;BR /&gt;    PRIMARY      up               eth0         &lt;BR /&gt;    PRIMARY      down             eth4         &lt;BR /&gt;    PRIMARY      down             eth2         &lt;BR /&gt;&lt;BR /&gt;    PACKAGE        STATUS        STATE         AUTO_RUN     NODE        &lt;BR /&gt;    maildb01       up            running       enabled      maildb01a   &lt;BR /&gt;      &lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        configured_node&lt;BR /&gt;      Failback        manual&lt;BR /&gt;      &lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   MAX_RESTARTS  RESTARTS NAME&lt;BR /&gt;      Subnet     up                              10.10.120.0&lt;BR /&gt;      &lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME &lt;BR /&gt;      Primary      up           enabled      maildb01a (current)&lt;BR /&gt;      Alternate    up           enabled      maildb01b &lt;BR /&gt;  &lt;BR /&gt;  NODE           STATUS       STATE        &lt;BR /&gt;  maildb01b      up           running      &lt;BR /&gt;    &lt;BR /&gt;    Cluster_Lock_LUN:&lt;BR /&gt;    DEVICE                STATUS              &lt;BR /&gt;    /dev/sdj1             up                  &lt;BR /&gt;    &lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS           NAME         &lt;BR /&gt;  
  PRIMARY      up               eth0         &lt;BR /&gt;    PRIMARY      down             eth4         &lt;BR /&gt;    PRIMARY      down             eth2         &lt;BR /&gt;[root@maildb01a maildb01]# &lt;BR /&gt;</description>
      <pubDate>Tue, 05 May 2009 19:05:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413508#M56304</guid>
      <dc:creator>C Lamb</dc:creator>
      <dc:date>2009-05-05T19:05:50Z</dc:date>
    </item>
    <item>
      <title>Re: heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413509#M56305</link>
      <description>and here is the other cluster, and I know the package is down&lt;BR /&gt;&lt;BR /&gt;cmviewcl -v&lt;BR /&gt;&lt;BR /&gt;CLUSTER         STATUS       &lt;BR /&gt;cache_cluster   up           &lt;BR /&gt;  &lt;BR /&gt;  NODE           STATUS       STATE        &lt;BR /&gt;  lnx-oradb01    up           running      &lt;BR /&gt;    &lt;BR /&gt;    Cluster_Lock_LUN:&lt;BR /&gt;    DEVICE                STATUS              &lt;BR /&gt;    /dev/sdb1             up                  &lt;BR /&gt;    &lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS           NAME         &lt;BR /&gt;    PRIMARY      up               eth0         &lt;BR /&gt;    PRIMARY      up               eth2         &lt;BR /&gt;    PRIMARY      up               eth3         &lt;BR /&gt;  &lt;BR /&gt;  NODE           STATUS       STATE        &lt;BR /&gt;  lnx-oradb02    up           running      &lt;BR /&gt;    &lt;BR /&gt;    Cluster_Lock_LUN:&lt;BR /&gt;    DEVICE                STATUS              &lt;BR /&gt;    /dev/sdq1             up                  &lt;BR /&gt;    &lt;BR /&gt;    Network_Parameters:&lt;BR /&gt;    INTERFACE    STATUS           NAME         &lt;BR /&gt;    PRIMARY      up               eth0         &lt;BR /&gt;    PRIMARY      up               eth2         &lt;BR /&gt;    PRIMARY      up               eth3         &lt;BR /&gt;    &lt;BR /&gt;UNOWNED_PACKAGES&lt;BR /&gt;&lt;BR /&gt;    PACKAGE        STATUS        STATE         AUTO_RUN     NODE        &lt;BR /&gt;    cache1p2p      down          halted        disabled     unowned     &lt;BR /&gt;      &lt;BR /&gt;      Policy_Parameters:&lt;BR /&gt;      POLICY_NAME     CONFIGURED_VALUE&lt;BR /&gt;      Failover        configured_node&lt;BR /&gt;      Failback        manual&lt;BR /&gt;      &lt;BR /&gt;      Script_Parameters:&lt;BR /&gt;      ITEM       STATUS   NODE_NAME      NAME&lt;BR /&gt;      Subnet     up       lnx-oradb01    10.10.132.0&lt;BR /&gt;      Subnet     up       
lnx-oradb02    10.10.132.0&lt;BR /&gt;      &lt;BR /&gt;      Node_Switching_Parameters:&lt;BR /&gt;      NODE_TYPE    STATUS       SWITCHING    NAME &lt;BR /&gt;      Primary      up           enabled      lnx-oradb01 &lt;BR /&gt;      Alternate    up           enabled      lnx-oradb02 &lt;BR /&gt;</description>
      <pubDate>Tue, 05 May 2009 19:08:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413509#M56305</guid>
      <dc:creator>C Lamb</dc:creator>
      <dc:date>2009-05-05T19:08:07Z</dc:date>
    </item>
    <item>
      <title>Re: heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413510#M56306</link>
      <description>If you are running RH 5.2 with LANs that use the e1000e driver, then this will be the cause of the problem. There is a bug in the e1000e driver which causes it to report an incorrect link status. The solution here would be to upgrade to RH 5.3, which has the driver bug fixed.</description>
      <pubDate>Wed, 06 May 2009 05:55:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413510#M56306</guid>
      <dc:creator>John Bigg</dc:creator>
      <dc:date>2009-05-06T05:55:28Z</dc:date>
    </item>
    <item>
      <title>Re: heartbeat status reporting down?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413511#M56307</link>
      <description>We are on RH 5.3, and both are using Intel PRO/1000 adapters. I do think you're right, though; it's just a reporting issue, as the cluster performs properly. Do you happen to know the exact drivers involved?</description>
      <pubDate>Wed, 06 May 2009 13:59:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/heartbeat-status-reporting-down/m-p/4413511#M56307</guid>
      <dc:creator>C Lamb</dc:creator>
      <dc:date>2009-05-06T13:59:59Z</dc:date>
    </item>
  </channel>
</rss>