<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: How does a two-node cluster work? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653407#M712082</link>
    <description>Or this document may help.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/B7491-90001/00/00/96-con.html" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/B7491-90001/00/00/96-con.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;Kenny.</description>
    <pubDate>Sat, 26 Jan 2002 03:05:51 GMT</pubDate>
    <dc:creator>Kenny Chau</dc:creator>
    <dc:date>2002-01-26T03:05:51Z</dc:date>
    <item>
      <title>How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653404#M712079</link>
      <description>I was wondering: in a two-node cluster environment, if one node hangs, how can the other node force a failover and take over the cluster lock disk? I believe that when one node hangs, it is still holding the cluster lock disk, so how can the other node take it over?&lt;BR /&gt;Thanks,&lt;BR /&gt;</description>
      <pubDate>Sat, 26 Jan 2002 02:32:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653404#M712079</guid>
      <dc:creator>zhaogui</dc:creator>
      <dc:date>2002-01-26T02:32:52Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653405#M712080</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Hope this document can answer your questions.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/B3936-90024/B3936-90024.html" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/B3936-90024/B3936-90024.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;Kenny.</description>
      <pubDate>Sat, 26 Jan 2002 02:55:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653405#M712080</guid>
      <dc:creator>Kenny Chau</dc:creator>
      <dc:date>2002-01-26T02:55:18Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653406#M712081</link>
      <description>Hi:&lt;BR /&gt;&lt;BR /&gt;If the normal heartbeat communication between nodes in a cluster ceases, the 'cmcld' daemon on *both* hosts attempts to obtain control of the cluster lock disk.  In this "race", the first node to reach the lock disk marks it as its own.  When the other node notices this update, it performs a TOC (Transfer of Control = reboot), leaving the first node to reach the lock as the package owner.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Sat, 26 Jan 2002 03:04:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653406#M712081</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2002-01-26T03:04:30Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653407#M712082</link>
      <description>Or this document may help.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/B7491-90001/00/00/96-con.html" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/B7491-90001/00/00/96-con.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;Kenny.</description>
      <pubDate>Sat, 26 Jan 2002 03:05:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653407#M712082</guid>
      <dc:creator>Kenny Chau</dc:creator>
      <dc:date>2002-01-26T03:05:51Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653408#M712083</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Btw, you can have two cluster lock disks by using the SECOND_CLUSTER_LOCK_DISK and SECOND_CLUSTER_LOCK_VG parameters.&lt;BR /&gt;&lt;BR /&gt;If you are designing a fault-tolerant cluster, then one failure you should be concerned with is split-brain syndrome.&lt;BR /&gt;&lt;BR /&gt;Without an arbitrator, you would get a split brain if the following occurred simultaneously:&lt;BR /&gt;1) The heartbeat fails&lt;BR /&gt;2) The link from server A (primary node) to cluster lock disk B fails, and the link from server B (secondary node) to cluster lock disk A fails. &lt;BR /&gt;&lt;BR /&gt;Note that split-brain syndrome can cause data inconsistency. According to HP documents, planning different physical routes for both network and data connections, or adequately protecting those physical routes, greatly reduces the possibility of split-brain syndrome. Also remember that the cluster lock disks should be separately powered, if possible. &lt;BR /&gt;&lt;BR /&gt;If you want a fault-tolerant architecture that avoids split-brain syndrome, you will need at least one arbitrator node. Arbitrators provide functionality like that of the cluster lock disk, and act as tie-breakers for cluster quorum in case all of the nodes in one data center go down at the same time.&lt;BR /&gt;&lt;BR /&gt;Hope this helps. Regards.&lt;BR /&gt;&lt;BR /&gt;Steven Sim Kok Leong</description>
      <pubDate>Sat, 26 Jan 2002 03:14:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653408#M712083</guid>
      <dc:creator>Steven Sim Kok Leong</dc:creator>
      <dc:date>2002-01-26T03:14:33Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653409#M712084</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Try the MC/SG FAQ,&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/hpux/onlinedocs/ha/haFAQindex2.html" target="_blank"&gt;http://docs.hp.com/hpux/onlinedocs/ha/haFAQindex2.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Regds&lt;BR /&gt;</description>
      <pubDate>Sat, 26 Jan 2002 05:36:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653409#M712084</guid>
      <dc:creator>Sanjay_6</dc:creator>
      <dc:date>2002-01-26T05:36:45Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653410#M712085</link>
      <description>But if one server has already hung, how can it reboot/TOC by itself? In this case, will cmcld be able to TOC this node? If not, how can this node release all the shared disks it has been occupying all the while?</description>
      <pubDate>Sat, 26 Jan 2002 09:39:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653410#M712085</guid>
      <dc:creator>zhaogui</dc:creator>
      <dc:date>2002-01-26T09:39:59Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653411#M712086</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;MC/ServiceGuard is a clustering solution, but no clustering solution in the world can take care of all failure cases and conditions. If the OS of your primary node somehow gets corrupted and is in an unstable state where the failure conditions are not yet met, your secondary node will not take over. An example is a disk failure or data corruption.&lt;BR /&gt;&lt;BR /&gt;As such, it is important to have a complete fault-tolerant architecture that includes hardware RAID arrays (eg. a SAN solution).&lt;BR /&gt;&lt;BR /&gt;Hope this helps. Regards.&lt;BR /&gt;&lt;BR /&gt;Steven Sim Kok Leong</description>
      <pubDate>Sat, 26 Jan 2002 11:40:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653411#M712086</guid>
      <dc:creator>Steven Sim Kok Leong</dc:creator>
      <dc:date>2002-01-26T11:40:30Z</dc:date>
    </item>
    <item>
      <title>Re: How does a two-node cluster work?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653412#M712087</link>
      <description>ServiceGuard can, in most situations, cause hung nodes to TOC themselves. This is achieved using a timer within the kernel, referred to in the SG documentation as the kernel safety timer. The timer is constantly counting down to zero within the kernel, but every time cmcld gets some CPU cycles, it resets the timer. That way, if the system hangs such that cmcld never gets CPU cycles to reset the timer, the kernel counts the timer down to zero and causes the node to TOC.&lt;BR /&gt;&lt;BR /&gt;You can simulate this by killing the cmcld daemon - since the daemon can no longer update the safety timer, the node TOCs shortly afterwards.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Sat, 26 Jan 2002 22:13:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/how-does-two-node-cluster-works/m-p/2653412#M712087</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2002-01-26T22:13:12Z</dc:date>
    </item>
  </channel>
</rss>

