<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MCSG LAN fail over issue. in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973652#M57091</link>
    <description>Steven,&lt;BR /&gt;That is my exact issue. Unplugging both cables does not trigger a failover. Based on my experience building clusters on HP-UX, I expected it would, but it did not. I am wondering if perhaps I am missing something, or there is another issue here.</description>
    <pubDate>Fri, 14 Apr 2006 08:10:06 GMT</pubDate>
    <dc:creator>Joe Short</dc:creator>
    <dc:date>2006-04-14T08:10:06Z</dc:date>
    <item>
      <title>MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973650#M57089</link>
      <description>I have an MCSG (v11.16) cluster configured on 2 DL585 servers running RHEL AS 4 32 bit.&lt;BR /&gt;I have configured bonding for 2 of the 3 NICs on each server; the third NIC is used for dedicated heartbeat on a dedicated network.&lt;BR /&gt;With the package up and running on the primary server, I tested bonding by unplugging a NIC. The bond worked, and no failover occurred. However, when I unplugged the second NIC in the bond (only 2 NICs are bonded), again no failover occurred. On HP-UX this would have triggered a failover of the package. When I completely disconnected all NICs on the primary server, the alternate server crashed and rebooted, but did not take the package.&lt;BR /&gt;Is this normal behavior on Linux, or did I miss something? If so, what might I have missed?</description>
      <pubDate>Thu, 13 Apr 2006 14:56:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973650#M57089</guid>
      <dc:creator>Joe Short</dc:creator>
      <dc:date>2006-04-13T14:56:49Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973651#M57090</link>
      <description>Shalom Joe,&lt;BR /&gt;&lt;BR /&gt;You slightly misunderstand bonding in Linux. Unplugging one of the two bond cables does not result in a failure. Bonding defaults to active-passive on Linux, and you can't force active-active except on Intel NICs.&lt;BR /&gt;&lt;BR /&gt;So: if you unplug both cables, you should trigger a failover.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 14 Apr 2006 07:11:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973651#M57090</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-04-14T07:11:33Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973652#M57091</link>
      <description>Steven,&lt;BR /&gt;That is my exact issue. Unplugging both cables does not trigger a failover. Based on my experience building clusters on HP-UX, I expected it would, but it did not. I am wondering if perhaps I am missing something, or there is another issue here.</description>
      <pubDate>Fri, 14 Apr 2006 08:10:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973652#M57091</guid>
      <dc:creator>Joe Short</dc:creator>
      <dc:date>2006-04-14T08:10:06Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973653#M57092</link>
      <description>Ah!&lt;BR /&gt;&lt;BR /&gt;Both cables unplugged and no failover.&lt;BR /&gt;&lt;BR /&gt;What do the Serviceguard logs say?&lt;BR /&gt;&lt;BR /&gt;I would tend to think the failover LAN configuration is wrong, or the NIC is bad, or the card is plugged into the wrong LAN.&lt;BR /&gt;&lt;BR /&gt;My question now is: can a NIC be both a failover LAN and a heartbeat LAN? My understanding of SG is that it's either/or, not both.&lt;BR /&gt;&lt;BR /&gt;Please clarify.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 14 Apr 2006 08:15:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973653#M57092</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2006-04-14T08:15:42Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973654#M57093</link>
      <description>If you define your NIC as HEARTBEAT_IP, both heartbeat and data can be carried on that NIC. If it is defined as STATIONARY_IP, no heartbeat is passed over it. What I have is 2 servers, each with 3 NICs. 2 NICs are in a bond; the third is on a separate network that simply connects the 2 clustered servers. It is there to pass heartbeat in the event the production network goes dark. If both bonded NICs on a server are unplugged, the cluster should respond by moving the package to the alternate server. In this case, that did not occur. My cluster config file is attached.</description>
      <pubDate>Fri, 14 Apr 2006 08:29:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973654#M57093</guid>
      <dc:creator>Joe Short</dc:creator>
      <dc:date>2006-04-14T08:29:06Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973655#M57094</link>
      <description>And the package log (I have a single package) and the system log do not indicate anything out of the ordinary. I am wondering if the NODE_TIMEOUT parameter is set too high.</description>
      <pubDate>Fri, 14 Apr 2006 08:30:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973655#M57094</guid>
      <dc:creator>Joe Short</dc:creator>
      <dc:date>2006-04-14T08:30:54Z</dc:date>
    </item>
    <item>
      <title>Re: MCSG LAN fail over issue.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973656#M57095</link>
      <description>I had the bonding driver set incorrectly.&lt;BR /&gt;In /etc/modprobe.conf there is an options entry for the bond. It should read as follows:&lt;BR /&gt;&lt;BR /&gt;options bond0 miimon=100 mode=1&lt;BR /&gt;&lt;BR /&gt;Mine was incorrect; it read:&lt;BR /&gt;&lt;BR /&gt;options bond0 miimon=100 mode=0&lt;BR /&gt;&lt;BR /&gt;Mode 1 is failover (active-backup) mode.&lt;BR /&gt;Mode 0 is load-balancing (round-robin) mode.</description>
      <pubDate>Fri, 14 Apr 2006 10:38:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mcsg-lan-fail-over-issue/m-p/4973656#M57095</guid>
      <dc:creator>Joe Short</dc:creator>
      <dc:date>2006-04-14T10:38:54Z</dc:date>
    </item>
  </channel>
</rss>