<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Cluster Behavior in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461993#M699982</link>
    <description>If you have an old D- or A-Class standing around, install it as an MC/SG quorum server; the product is free, as far as I remember.&lt;BR /&gt;Placing it outside of the heartbeat LAN is a safe way of eliminating LAN-related issues.&lt;BR /&gt;One node that sees the quorum server gets &amp;gt;50% -&amp;gt; it comes up; the other one (obviously disconnected in some way) panics.&lt;BR /&gt;Two nodes that see each other -&amp;gt; they come up.&lt;BR /&gt;Two nodes that see each other and the quorum server -&amp;gt; happily ever after.&lt;BR /&gt;Also, if the nodes are close to each other, try to have both a separate heartbeat and public LAN, and in addition to that a serial heartbeat. It costs less than $100 but helps a lot.&lt;BR /&gt;&lt;BR /&gt;According to my book here, there's no quorum disk support on HP-UX, sorry :)</description>
    <pubDate>Thu, 13 Jan 2005 07:11:43 GMT</pubDate>
    <dc:creator>Florian Heigl (new acc)</dc:creator>
    <dc:date>2005-01-13T07:11:43Z</dc:date>
    <item>
      <title>Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461986#M699975</link>
      <description>Hello,&lt;BR /&gt;I have configured a two-node cluster with two packages as well. Right now it is working fine, but I have a question concerning a test I did.&lt;BR /&gt;I shut down both nodes and started just one of them. I expected the starting node would bring the cluster up, along with the two packages, but the cluster did not form.&lt;BR /&gt;&lt;BR /&gt;When I tried to manually run the cluster with&lt;BR /&gt;#cmruncl -v&lt;BR /&gt;&lt;BR /&gt;I could not do it until the other node was running again.&lt;BR /&gt;&lt;BR /&gt;Is there a way to bring the cluster up even if the other node is down?&lt;BR /&gt;&lt;BR /&gt;Is the concept of the test itself right?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks for your help, any advice will be welcome</description>
      <pubDate>Wed, 12 Jan 2005 18:34:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461986#M699975</guid>
      <dc:creator>Nancy Calderón_1</dc:creator>
      <dc:date>2005-01-12T18:34:43Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461987#M699976</link>
      <description>&lt;BR /&gt;Hi again,&lt;BR /&gt;&lt;BR /&gt;I forgot to ask: is there a configuration file where I can set a value so the cluster starts with 50% of the nodes?&lt;BR /&gt;&lt;BR /&gt;Thanks again for your help.&lt;BR /&gt;&lt;BR /&gt;NC</description>
      <pubDate>Wed, 12 Jan 2005 18:44:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461987#M699976</guid>
      <dc:creator>Nancy Calderón_1</dc:creator>
      <dc:date>2005-01-12T18:44:40Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461988#M699977</link>
      <description>hi,&lt;BR /&gt;&lt;BR /&gt;you can use cmruncl -n nodename to start the cluster on only one node.&lt;BR /&gt;&lt;BR /&gt;if the cluster is already running on one node, you have to use the cmrunnode &amp;amp; cmrunpkg commands to start the other node and the package, respectively.&lt;BR /&gt;&lt;BR /&gt;regds,&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Jan 2005 01:25:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461988#M699977</guid>
      <dc:creator>bhavin asokan</dc:creator>
      <dc:date>2005-01-13T01:25:34Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461989#M699978</link>
      <description>ServiceGuard expects a quorum of more than 50% of cluster nodes to be active at startup.&lt;BR /&gt;In a two-node cluster this means that all nodes need to be up and running for normal cluster startup.&lt;BR /&gt;&lt;BR /&gt;If one of your servers is down, ServiceGuard cannot form the cluster. You have to start the cluster as a one-node cluster using&lt;BR /&gt;&lt;BR /&gt;cmruncl -n &lt;NODENAME&gt;&lt;BR /&gt;&lt;BR /&gt;This way SG will start up. At a later time you may bring up the second node and join it to the cluster.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Rainer</description>
      <pubDate>Thu, 13 Jan 2005 01:37:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461989#M699978</guid>
      <dc:creator>Rainer von Bongartz</dc:creator>
      <dc:date>2005-01-13T01:37:26Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461990#M699979</link>
      <description>Don't you think that, just like the way we do it in VMS, having a QUORUM disk would help in this scenario?&lt;BR /&gt;&lt;BR /&gt;rgds&lt;BR /&gt;Mobeen</description>
      <pubDate>Thu, 13 Jan 2005 02:47:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461990#M699979</guid>
      <dc:creator>Mobeen_1</dc:creator>
      <dc:date>2005-01-13T02:47:09Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461991#M699980</link>
      <description>The thing about this situation is that in a two-node cluster, when just one of the nodes comes up, it cannot tell whether the other node is really down, or whether merely the network connections are at fault and the other server is in fact still up with switched packages and actively using the storage. If this node started up packages automatically without having communicated with another active server, it could re-mount already-mounted filesystems, start databases and corrupt the data. How is a node in this situation to know whether only it just TOC'd, or both of them did?&lt;BR /&gt;&lt;BR /&gt;So, when in doubt, ServiceGuard always stays down and thereby keeps your data safe and uncorrupted. This forces you, the system administrator, to investigate the situation and to decide whether to force up the remaining node, using the commands described in previous postings.&lt;BR /&gt;&lt;BR /&gt;We do have the concept of quorum servers that arbitrate.&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Jan 2005 05:05:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461991#M699980</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2005-01-13T05:05:17Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461992#M699981</link>
      <description>This is standard Serviceguard behaviour. At the time of starting the cluster, all nodes need to be present and able to join in the cluster. If one node is not available, all other nodes will attempt to form a cluster, waiting the default timeout of 10 minutes, at which time they will cease to attempt to start/join the cluster.&lt;BR /&gt;&lt;BR /&gt;To start a cluster where one or more nodes are unavailable, use the cmruncl -n nodename command to get the cluster to start on the first node; then, if there are additional nodes, do cmrunnode on each of them to get them to join the already running cluster.&lt;BR /&gt;There is NO WAY to bypass this designed method of starting the cluster.</description>
      <pubDate>Thu, 13 Jan 2005 05:07:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461992#M699981</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2005-01-13T05:07:44Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461993#M699982</link>
      <description>If you have an old D- or A-Class standing around, install it as an MC/SG quorum server; the product is free, as far as I remember.&lt;BR /&gt;Placing it outside of the heartbeat LAN is a safe way of eliminating LAN-related issues.&lt;BR /&gt;One node that sees the quorum server gets &amp;gt;50% -&amp;gt; it comes up; the other one (obviously disconnected in some way) panics.&lt;BR /&gt;Two nodes that see each other -&amp;gt; they come up.&lt;BR /&gt;Two nodes that see each other and the quorum server -&amp;gt; happily ever after.&lt;BR /&gt;Also, if the nodes are close to each other, try to have both a separate heartbeat and public LAN, and in addition to that a serial heartbeat. It costs less than $100 but helps a lot.&lt;BR /&gt;&lt;BR /&gt;According to my book here, there's no quorum disk support on HP-UX, sorry :)</description>
      <pubDate>Thu, 13 Jan 2005 07:11:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461993#M699982</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-13T07:11:43Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461994#M699983</link>
      <description>Having a quorum server or whatever has nothing to do with the observed actions of the cluster, and would make no difference to these actions. As stated, this is the method of operation that was designed into Serviceguard to try to ensure, as much as possible, the safety of customer data.&lt;BR /&gt;Basically, when in doubt, don't do anything.</description>
      <pubDate>Thu, 13 Jan 2005 07:25:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461994#M699983</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2005-01-13T07:25:34Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461995#M699984</link>
      <description>This is by design.&lt;BR /&gt;&lt;BR /&gt;cmruncl -n nodename&lt;BR /&gt;&lt;BR /&gt;More info in:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/B3936-90079/index.html" target="_blank"&gt;http://docs.hp.com/en/B3936-90079/index.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>
      <pubDate>Thu, 13 Jan 2005 09:05:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461995#M699984</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2005-01-13T09:05:20Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461996#M699985</link>
      <description>I have to correct myself one more time ;)&lt;BR /&gt;&lt;BR /&gt;there's a lock disk mechanism available.&lt;BR /&gt;&lt;BR /&gt;Good luck,&lt;BR /&gt;florian</description>
      <pubDate>Thu, 13 Jan 2005 10:44:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461996#M699985</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-01-13T10:44:25Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461997#M699986</link>
      <description>Melvyn,&lt;BR /&gt;&lt;BR /&gt;being a VMS man myself who is "strongly advised ( !! )" by the employer to "gather Unix knowledge as well", I am trying to follow this forum too.&lt;BR /&gt;Please accept my different perspective, and relative ignorance.&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;At the time of starting the cluster, all nodes need to be present and able to join in the cluster&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Is that really true? How do you go about it if, after some time, you need to add an extra node for extra capacity, or to replace an older system with a newer, more powerful model?&lt;BR /&gt;&lt;BR /&gt;Do I read your quote correctly in that, in that case, you need to bring down the cluster for re-configuration, or am I missing something?&lt;BR /&gt;&lt;BR /&gt;Would that not be a serious breach of 24 * 365 operation?&lt;BR /&gt;&lt;BR /&gt;From what I understand so far about Tru64 clusters, they seem to be able to add nodes on the fly; I was assuming the same for HP-UX.&lt;BR /&gt;&lt;BR /&gt;... just trying to learn...&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Thu, 13 Jan 2005 13:34:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461997#M699986</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-01-13T13:34:59Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Behavior</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461998#M699987</link>
      <description>Hi everyone,&lt;BR /&gt;&lt;BR /&gt;Thanks for helping me to clearly understand the way SG works.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Jan 2005 14:44:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-behavior/m-p/3461998#M699987</guid>
      <dc:creator>Nancy Calderón_1</dc:creator>
      <dc:date>2005-01-13T14:44:03Z</dc:date>
    </item>
  </channel>
</rss>

