<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Three nodes cluster in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867122#M709095</link>
    <description>Hi Manuel,&lt;BR /&gt;  You can have a lock VG for a 3-node cluster, and it is advised to&lt;BR /&gt;have one. If you don't have any disk common to all 3 nodes, MC/SG 11.14 introduces the option of a quorum server. You can configure this&lt;BR /&gt;quorum server on a separate machine and configure the 3 nodes to use it as the tie-breaker.&lt;BR /&gt;&lt;BR /&gt;But I would also ask you to check the following.&lt;BR /&gt;&lt;BR /&gt;Your heartbeat network should be physically separate from your data network, &lt;BR /&gt;i.e. have at least a separate hub and connect the heartbeat LAN interfaces to it. I believe you have 2 LAN interfaces on your nodes.&lt;BR /&gt;&lt;BR /&gt;You can also configure an additional heartbeat link on your data LAN interface to act as a secondary heartbeat link. That way, even if your primary heartbeat link goes down for any reason, your cluster will not go down. &lt;BR /&gt;&lt;BR /&gt;And yes, you can increase the heartbeat interval; somewhere between 5 and 8 seconds is reasonable. &lt;BR /&gt;Also check the node timeout.&lt;BR /&gt;&lt;BR /&gt;Remember that if you change these parameters, you have to rebuild the cluster configuration, and you may need an outage.&lt;BR /&gt;&lt;BR /&gt;Hope this helps you to solve your problem.&lt;BR /&gt;&lt;BR /&gt;Srini.&lt;BR /&gt;</description>
    <pubDate>Tue, 24 Dec 2002 00:00:04 GMT</pubDate>
    <dc:creator>avsrini</dc:creator>
    <dc:date>2002-12-24T00:00:04Z</dc:date>
    <item>
      <title>Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867118#M709091</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I've installed ServiceGuard 11.14 and ran cmquerycl to create the ASCII file. I have three nodes, and in a class my instructor told me that it is optional to enable the cluster lock disk and the cluster lock VG, so I didn't. Afterwards, I configured a &lt;BR /&gt;second network to operate as a heartbeat and left the polling interval at the default; all three nodes are on the same switch.&lt;BR /&gt;After a few days one node TOC'd, and then a second one did.&lt;BR /&gt;The messages show that one node was unable to see the others, so it tried to reform the cluster with another node and then TOC'd; the other node saw the same thing and also TOC'd.&lt;BR /&gt;I'm planning to increase the polling interval.&lt;BR /&gt;&lt;BR /&gt;My question is: should I enable the lock disk and the cluster lock VG? Will that help keep the nodes from crashing?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance</description>
      <pubDate>Wed, 18 Dec 2002 18:03:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867118#M709091</guid>
      <dc:creator>Manuel Carbajal</dc:creator>
      <dc:date>2002-12-18T18:03:30Z</dc:date>
    </item>
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867119#M709092</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We've run a three-node cluster before, and the lock disk can be a problem, especially if not all three nodes can see a common disk.  MC/SG 11.14 adds support for a quorum server, which takes the place of the lock disk.  If you don't have a common disk that all three nodes can see, you can use the quorum server.  Otherwise, I'd use a lock disk.&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Dec 2002 18:24:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867119#M709092</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-12-18T18:24:32Z</dc:date>
    </item>
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867120#M709093</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Yes, it is optional to exclude the lock disk in a 3-node cluster, but what your instructor should have told you (and I hope he did) is that including one is recommended.&lt;BR /&gt;&lt;BR /&gt;The reason is that when one of your nodes leaves the cluster, you can end up with a 50/50 tie between the two remaining nodes.  This is where the lock disk comes into play.  Your first node TOC'd, leaving two nodes.  Something then triggered a cluster reformation, resulting in a two-way tie between those nodes.  Since there was no lock disk to act as a tie-breaker, there was no choice but to TOC both nodes.&lt;BR /&gt;&lt;BR /&gt;Hope this explains things.&lt;BR /&gt;&lt;BR /&gt;Chris</description>
      <pubDate>Wed, 18 Dec 2002 18:52:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867120#M709093</guid>
      <dc:creator>Christopher McCray_1</dc:creator>
      <dc:date>2002-12-18T18:52:06Z</dc:date>
    </item>
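    <!--
    Chris's tie-breaker explanation maps onto the cluster ASCII file produced by cmquerycl: the lock is declared once as a VG at the cluster level and once as a PV under each node. A minimal sketch, assuming hypothetical names (the volume group /dev/vglock, device file, node name, and config path are illustrations, not from the thread):

    ```shell
    # Cluster-lock portion of a ServiceGuard ASCII config (names hypothetical):
    #
    #   CLUSTER_NAME              cluster1
    #   FIRST_CLUSTER_LOCK_VG     /dev/vglock
    #
    #   NODE_NAME                 node1
    #     FIRST_CLUSTER_LOCK_PV   /dev/dsk/c4t0d0
    #   (repeat FIRST_CLUSTER_LOCK_PV under each NODE_NAME)

    # Validate and apply the edited file:
    cmcheckconf -v -C /etc/cmcluster/cmclconf.ascii
    cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
    ```
    -->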
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867121#M709094</link>
      <description>A lock PV is recommended for a 3-node cluster. We run a 3-node configuration and have a lock PV in place. &lt;BR /&gt;HP says it is optional only for configurations with more than 3 nodes; with 3 nodes it is still recommended to have a lock PV.&lt;BR /&gt;Also make sure you have a separate heartbeat network.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;0leg&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Dec 2002 19:06:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867121#M709094</guid>
      <dc:creator>Oleg Zieaev_1</dc:creator>
      <dc:date>2002-12-18T19:06:04Z</dc:date>
    </item>
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867122#M709095</link>
      <description>Hi Manuel,&lt;BR /&gt;  You can have a lock VG for a 3-node cluster, and it is advised to&lt;BR /&gt;have one. If you don't have any disk common to all 3 nodes, MC/SG 11.14 introduces the option of a quorum server. You can configure this&lt;BR /&gt;quorum server on a separate machine and configure the 3 nodes to use it as the tie-breaker.&lt;BR /&gt;&lt;BR /&gt;But I would also ask you to check the following.&lt;BR /&gt;&lt;BR /&gt;Your heartbeat network should be physically separate from your data network, &lt;BR /&gt;i.e. have at least a separate hub and connect the heartbeat LAN interfaces to it. I believe you have 2 LAN interfaces on your nodes.&lt;BR /&gt;&lt;BR /&gt;You can also configure an additional heartbeat link on your data LAN interface to act as a secondary heartbeat link. That way, even if your primary heartbeat link goes down for any reason, your cluster will not go down. &lt;BR /&gt;&lt;BR /&gt;And yes, you can increase the heartbeat interval; somewhere between 5 and 8 seconds is reasonable. &lt;BR /&gt;Also check the node timeout.&lt;BR /&gt;&lt;BR /&gt;Remember that if you change these parameters, you have to rebuild the cluster configuration, and you may need an outage.&lt;BR /&gt;&lt;BR /&gt;Hope this helps you to solve your problem.&lt;BR /&gt;&lt;BR /&gt;Srini.&lt;BR /&gt;</description>
      <pubDate>Tue, 24 Dec 2002 00:00:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867122#M709095</guid>
      <dc:creator>avsrini</dc:creator>
      <dc:date>2002-12-24T00:00:04Z</dc:date>
    </item>
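    <!--
    Srini's suggestions (quorum server, heartbeat interval, node timeout) correspond to a few parameters in the same ASCII file. A hedged sketch: the quorum-server host name and file path are hypothetical, and ServiceGuard expresses these intervals in microseconds, so 5 seconds is written as 5000000:

    ```shell
    # Quorum server in place of a lock disk (QS_HOST value is hypothetical):
    #
    #   QS_HOST               qshost.example.com
    #
    # Timing parameters, in microseconds; NODE_TIMEOUT should comfortably
    # exceed HEARTBEAT_INTERVAL:
    #
    #   HEARTBEAT_INTERVAL    5000000
    #   NODE_TIMEOUT          10000000

    # Changing these requires rebuilding the cluster config, so plan an outage:
    cmhaltcl -f
    cmcheckconf -v -C /etc/cmcluster/cmclconf.ascii
    cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
    cmruncl
    ```
    -->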
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867123#M709096</link>
      <description>A cluster lock disk is optional in a 3-node cluster, and using one MAY have prevented at least the last node from crashing. I think the point you are missing here is WHY all three nodes lost comms with each other.&lt;BR /&gt;Do I read your initial post correctly, in that ALL your network links go through the same switch? If so, that is not a good idea, and I recommend you add another switch for redundancy.&lt;BR /&gt;I also recommend you change the default timing parameters, as they are NOT optimal.</description>
      <pubDate>Tue, 24 Dec 2002 08:05:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867123#M709096</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2002-12-24T08:05:07Z</dc:date>
    </item>
    <item>
      <title>Re: Three nodes cluster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867124#M709097</link>
      <description>Hello, Melvyn,&lt;BR /&gt;&lt;BR /&gt;Long time, no see....&lt;BR /&gt;&lt;BR /&gt;I thought you had left us for some reason.&lt;BR /&gt;&lt;BR /&gt;It's good to hear from you again.&lt;BR /&gt;&lt;BR /&gt;Chris</description>
      <pubDate>Tue, 24 Dec 2002 11:33:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/three-nodes-cluster/m-p/2867124#M709097</guid>
      <dc:creator>Christopher McCray_1</dc:creator>
      <dc:date>2002-12-24T11:33:14Z</dc:date>
    </item>
  </channel>
</rss>

