Operating System - HP-UX

4 node Cluster without cluster lock

 
CRbosupportohp
Occasional Contributor

4 node Cluster without cluster lock

Hi,

I have found several articles about this configuration: a 4-node cluster without a cluster lock, with redundant heartbeat connections between the nodes.

My questions are:

If 2 nodes go down (TOC, shutdown, or init 0), do the other 2 nodes (50%) remain up, or does the whole cluster TOC?

During a complete startup, do at least two nodes have to be up and running to start the cluster?

Thanks

3 REPLIES
donna hofmeister
Trusted Contributor

Re: 4 node Cluster without cluster lock

the attached should help answer your questions

Stephen Doud
Honored Contributor

Re: 4 node Cluster without cluster lock

Cluster arbitration (cluster lock, lock LUN or quorum server) is REQUIRED to resolve a split-brain situation.  If an arbitration device is not available, the remaining functioning nodes will TOC (memory dump and reboot).

 

Quorum rules and cluster membership
Serviceguard's response to an abrupt departure of a node is determined by
quorum rules.  An abrupt departure can be the result of a heartbeat (HB) network failure,
or reboot/panic/power-failure.

Greater-than-50% of quorum (>50%)
If the number of nodes remaining in HB contact with one another comprises
more than 50% of the original membership, these nodes automatically reform
a cluster.  Packages that were orphaned as a result of the reformation are
adopted by the next-in-line active node if package "global" (AUTO_RUN) and
"node switching" are enabled for that package and node.

Equal-to-50% of quorum (=50%)
If an HB protocol failure occurs between half of the cluster membership and
the other half, cluster arbitration is required to prevent split-brain
clusters.  The nodes seek out the arbitration device to gain authorization to
form a new cluster.  The first half to contact the arbitration device and
receive authorization reforms a reduced cluster.

If the other half of the cluster membership is still active, but late in
addressing the arbitration device, it is forced to TOC (reboot) to preserve
data integrity (its packages could possibly be adopted by the reforming
sub-cluster).

Less-than-50% of quorum (<50%)
A node or nodes that find themselves in a minority of the original cluster
membership are forced to TOC to preserve data integrity (see previous
paragraph). 
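
To make the arithmetic concrete for the 4-node question, here is a small
illustrative sketch.  This is not Serviceguard code and the function name is
made up for the example -- it just expresses the >50% / =50% / <50% rules
above in Python and applies them to the no-arbitration scenario asked about:

import sys

def reformation_outcome(original_nodes, surviving_nodes, has_arbitration):
    """What the surviving sub-cluster does after an abrupt node departure."""
    share = surviving_nodes / original_nodes
    if share > 0.5:
        return "reform the cluster automatically (no arbitration needed)"
    if share == 0.5:
        if has_arbitration:
            return "race for the arbitration device; the winner reforms, the loser TOCs"
        return "TOC -- a 50% sub-cluster cannot reform without arbitration"
    return "TOC -- minority sub-cluster, preserves data integrity"

# Original poster's case: 4 nodes, 2 depart abruptly, no cluster lock / quorum server
print(reformation_outcome(4, 2, has_arbitration=False))
# -> TOC -- a 50% sub-cluster cannot reform without arbitration

# For comparison: only 1 node departs, 3 of 4 (75%) remain in HB contact
print(reformation_outcome(4, 3, has_arbitration=False))
# -> reform the cluster automatically (no arbitration needed)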


Steven E. Protter
Exalted Contributor

Re: 4 node Cluster without cluster lock

Shalom,

 

With a properly set up quorum server, at least one node should not TOC.  In the event of a total loss of heartbeat, the quorum server decides which nodes TOC and which nodes stay up.
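
For reference, the quorum server is named in the cluster ASCII configuration
file.  A minimal sketch, assuming a quorum-server setup -- the host name, IP and
interval values below are placeholders only, and exact defaults can vary by
Serviceguard release:

  CLUSTER_NAME            cluster1
  QS_HOST                 qs-server.example.com
  QS_POLLING_INTERVAL     300000000
  QS_TIMEOUT_EXTENSION    2000000

  NODE_NAME               node1
    NETWORK_INTERFACE     lan1
      HEARTBEAT_IP        10.0.0.1
  # ...repeated for node2, node3 and node4

The file is normally generated with cmquerycl (the -q option names the quorum
server host), then checked and distributed with cmcheckconf and cmapplyconf.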

 

It is certainly possible for unforeseen events to cause all nodes to reboot, but it is unlikely.

 

SEP

Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com