<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Two node SG cluster with one network in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348198#M686307</link>
    <description>Ok, I got it now.&lt;BR /&gt;&lt;BR /&gt;Somehow I thought this would be a logical setup.&lt;BR /&gt;I will try to get a dedicated heartbeat LAN connected.&lt;BR /&gt;&lt;BR /&gt;Thank you, all.</description>
    <pubDate>Fri, 30 Jan 2009 07:38:50 GMT</pubDate>
    <dc:creator>swaggart</dc:creator>
    <dc:date>2009-01-30T07:38:50Z</dc:date>
    <item>
      <title>Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348193#M686302</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have a two-node cluster running HP-UX 11.31 and Serviceguard 11.18.&lt;BR /&gt;They're connected with only one VLAN, which of course also carries the heartbeats.&lt;BR /&gt;&lt;BR /&gt;I'm trying to test failover by unplugging the LAN cable on the node running the package, expecting the package to go down and start on the second node.&lt;BR /&gt;The only thing that happens is that the second node reboots.&lt;BR /&gt;&lt;BR /&gt;Can anyone help me with this config?&lt;BR /&gt;&lt;BR /&gt;Regards</description>
      <pubDate>Fri, 30 Jan 2009 07:23:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348193#M686302</guid>
      <dc:creator>swaggart</dc:creator>
      <dc:date>2009-01-30T07:23:58Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348194#M686303</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;You have not properly configured SG.&lt;BR /&gt;&lt;BR /&gt;SG needs a separate network for heartbeat, or it cannot respond normally to network problems.&lt;BR /&gt;&lt;BR /&gt;A hub between two non-primary NICs is enough.&lt;BR /&gt;&lt;BR /&gt;The second node rebooting is called a TOC (Transfer of Control). This is a normal response to loss of heartbeat.&lt;BR /&gt;&lt;BR /&gt;The two nodes race for control of the lock device; the second node loses this race and gets rebooted to avoid data corruption.&lt;BR /&gt;&lt;BR /&gt;Take a look at the logs and you will see the response is normal. Your configuration is not robust and is unreliable by design.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 30 Jan 2009 07:33:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348194#M686303</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-01-30T07:33:12Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348195#M686304</link>
      <description>If you only have one network connection, you have an unsupported configuration.&lt;BR /&gt;What you are seeing indicates to me that you are using a cluster lock disk, and not a quorum server, and hence this is often a normal scenario given only one heartbeat network connection.&lt;BR /&gt;The server that gets the cluster lock will stay up, and the other node will be forced to TOC.&lt;BR /&gt;I also guess that the cluster lock disk is in a VG that the package uses, so the node running the package has the VG activated and therefore has faster access to the lock.&lt;BR /&gt;&lt;BR /&gt;Consider using more than one network for a standby or additional heartbeat, or use a quorum server rather than a cluster lock disk.</description>
      <pubDate>Fri, 30 Jan 2009 07:33:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348195#M686304</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2009-01-30T07:33:45Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348196#M686305</link>
      <description>Did you configure LAN failover?&lt;BR /&gt;&lt;BR /&gt;I guess this is happening because there is no LAN failover configuration.</description>
      <pubDate>Fri, 30 Jan 2009 07:35:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348196#M686305</guid>
      <dc:creator>Jeeshan</dc:creator>
      <dc:date>2009-01-30T07:35:01Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348197#M686306</link>
      <description>Yes, I can help you with this config...&lt;BR /&gt;&lt;BR /&gt;by telling you it's not a supported configuration:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/B3936-90122/ch02s02.html" target="_blank"&gt;http://docs.hp.com/en/B3936-90122/ch02s02.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;How on earth do you expect this sort of configuration to work? With only one network connection and a 2-node cluster, if the network connection is broken, the other node doesn't "know" the state of the first node - I'm assuming you're using a cluster lock disk - so in this case you'll get a race for the cluster lock as the only way to determine cluster membership. Unfortunately the node that *you* know is good loses the race (of course the cluster nodes have no way of knowing who is good, or at least no way of knowing who is *better*).&lt;BR /&gt;&lt;BR /&gt;This sort of config can be made to work a little better if you use a quorum server on a third node somewhere instead of a cluster lock disk - that way, only a node with a surviving network connection can win a race for the cluster lock.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Fri, 30 Jan 2009 07:35:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348197#M686306</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2009-01-30T07:35:11Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348198#M686307</link>
      <description>Ok, I got it now.&lt;BR /&gt;&lt;BR /&gt;Somehow I thought this would be a logical setup.&lt;BR /&gt;I will try to get a dedicated heartbeat LAN connected.&lt;BR /&gt;&lt;BR /&gt;Thank you, all.</description>
      <pubDate>Fri, 30 Jan 2009 07:38:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348198#M686307</guid>
      <dc:creator>swaggart</dc:creator>
      <dc:date>2009-01-30T07:38:50Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348199#M686308</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;You have already had stern warnings that your Serviceguard configuration is not designed according to best practices.&lt;BR /&gt;&lt;BR /&gt;In fact, you are running an unsupported setup.&lt;BR /&gt;&lt;BR /&gt;By the way, the HP ACSL lab has created a tool which can be used by HP Support and Consulting to optimize the Node Timeout and Heartbeat Interval values used in Serviceguard clusters.&lt;BR /&gt;&lt;BR /&gt;The HELM (Heartbeat Exchange Latency Monitor) tool runs on HP-UX 11iv1 (11.11), 11iv2 (11.23), and 11iv3 (11.31), and measures latency for the cluster nodes (which might be caused by network delays or heavy system loads) over a user-defined period of time. When the HELM run is complete, the tool outputs the measured latencies and, based on these measurements, suggests optimized values for the NODE_TIMEOUT and HEARTBEAT_INTERVAL cluster configuration parameters for both standard Serviceguard clusters and clusters utilizing the Serviceguard Extension for Faster Failover product.&lt;BR /&gt;&lt;BR /&gt;When I teach Serviceguard (coincidentally, I am teaching the HP H6487 course next week here in Australia), I always mention HELM too. A pity not many people are aware of it.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;VK2COT</description>
      <pubDate>Fri, 30 Jan 2009 07:41:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348199#M686308</guid>
      <dc:creator>VK2COT</dc:creator>
      <dc:date>2009-01-30T07:41:43Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348200#M686309</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;The syslog can be referred to in order to confirm that the node on which the package was running happens to be the cluster coordinator during the cluster re-formation.&lt;BR /&gt;&lt;BR /&gt;When the heartbeat cable is pulled out of the primary node, a cluster re-formation occurs, in which the active node on which the cluster coordinator had been sitting earlier becomes the coordinator again, and so it TOCs the other node, as it can no longer receive the heartbeat from it.&lt;BR /&gt;&lt;BR /&gt;This is, I think, what should happen normally.&lt;BR /&gt;&lt;BR /&gt;You can refer to the syslog of both nodes for this event.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Sujit</description>
      <pubDate>Fri, 30 Jan 2009 07:42:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348200#M686309</guid>
      <dc:creator>sujit kumar singh</dc:creator>
      <dc:date>2009-01-30T07:42:29Z</dc:date>
    </item>
    <item>
      <title>Re: Two node SG cluster with one network</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348201#M686310</link>
      <description>The reason the package did not shut down cleanly on one node is this: how would the other node know when the package was shut down in order to start it?&lt;BR /&gt;&lt;BR /&gt;So this is how it works. If a node cannot stay in the cluster, it races for the lock disk (in a two-node cluster). The node that gets the lock disk forms a cluster and continues. The node that does not get the lock disk cannot form a cluster. It cannot shut the package down cleanly because it has no way to tell the other node that the package is down before the other node starts it, so the only way the failed node can make sure it is not writing to the disk is to panic.&lt;BR /&gt;&lt;BR /&gt;The node that formed the cluster knows it is the only one to survive, but it cannot know when the other node would finish a stop script; the assumption that the other node panicked if it was still alive is what allows the surviving node to simply start the package.&lt;BR /&gt;&lt;BR /&gt;I hope this makes sense and helps</description>
      <pubDate>Sat, 31 Jan 2009 00:35:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/two-node-sg-cluster-with-one-network/m-p/4348201#M686310</guid>
      <dc:creator>Emil Velez</dc:creator>
      <dc:date>2009-01-31T00:35:00Z</dc:date>
    </item>
  </channel>
</rss>