
Cluster 4 nodes (2 down)

 
SOLVED
elnene
New Member

Cluster 4 nodes (2 down)

If I have a cluster with 4 nodes and 2 of them crash, could my cluster still run on the 2 remaining nodes, with 2 packages on them?

Thanks for your help.

:)
9 REPLIES
Ludovic Derlyn
Esteemed Contributor

Re: Cluster 4 nodes (2 down)

hi,

normally yes, if you have correctly defined the primary node and the adoptive nodes:

Primary node:  host A
Adoptive node: host B
Adoptive node: host C
Adoptive node: host D
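
For illustration, in a legacy Serviceguard package ASCII configuration file the failover order follows the order of the NODE_NAME entries. A minimal sketch, with hypothetical package and host names:

    PACKAGE_NAME     pkg1
    FAILOVER_POLICY  CONFIGURED_NODE   # fail over in the NODE_NAME order below
    NODE_NAME        hostA             # primary node
    NODE_NAME        hostB             # first adoptive node
    NODE_NAME        hostC
    NODE_NAME        hostD
    AUTO_RUN         YES               # allow automatic startup and failover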

Regards

L-DERLYN
Steven E. Protter
Exalted Contributor

Re: Cluster 4 nodes (2 down)

Shalom elnene,

Yes.

It depends on your cluster design.

If you set it up so that packages can run on any node, then you can even function with three nodes down.
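
For instance, a legacy package ASCII file can allow a package to run on any cluster node with a wildcard. A minimal sketch:

    NODE_NAME  *    # package may run on any node in the cluster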

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
RAC_1
Honored Contributor

Re: Cluster 4 nodes (2 down)

Yes, if package switching was enabled. Check the logs.
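
A quick way to verify this (a sketch; the package name pkg1 is hypothetical):

    cmviewcl -v                 # node/package status, incl. whether switching is enabled
    cmmodpkg -e pkg1            # re-enable global switching for the package
    cmmodpkg -e -n hostB pkg1   # allow the package to run on a specific node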
There is no substitute to HARDWORK
Sameer_Nirmal
Honored Contributor
Solution

Re: Cluster 4 nodes (2 down)

In order to keep the cluster itself running, you need to have a cluster lock configured as an arbitrator in case of a 2-node (50%) failure. The cluster lock configuration depends on the setup and should be done in such a way that any two nodes can acquire/see it and form their own cluster. The cluster lock hardware should not be a SPOF (single point of failure). The cluster lock acts as a tie-breaking vote to reach the >50% majority that is required for forming and running the cluster.

If you don't have a cluster lock configured, then the cluster will collapse or halt.

A cluster lock is allowed for clusters of up to four nodes. If you have more than 4 nodes in the cluster, then a quorum server should be installed on the same subnet as the existing cluster.
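
As a sketch, a lock disk is declared in the cluster ASCII configuration file; the VG and device file names here are hypothetical:

    CLUSTER_NAME           cluster1
    FIRST_CLUSTER_LOCK_VG  /dev/vglock        # shared VG that holds the lock disk

    NODE_NAME              hostA
    FIRST_CLUSTER_LOCK_PV  /dev/dsk/c4t0d0    # lock PV as seen from hostA
    NODE_NAME              hostB
    FIRST_CLUSTER_LOCK_PV  /dev/dsk/c4t0d0    # same physical disk, per-node path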

As far as the clustered packages are concerned, they will start on the adoptive nodes if they are enabled to run on those nodes.

Refer to this doc as well:
http://docs.hp.com/en/B3936-90070/ch01s03.html
Chauhan Amit
Respected Contributor

Re: Cluster 4 nodes (2 down)

Hello,

The answer is yes. In a 4-node cluster setup you need to have a Quorum Server, which will decide cluster membership and the survival of nodes; it will be able to handle a 2-node crash.
The packages will be transferred to the surviving nodes.
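
As a sketch, a quorum server is pointed to from the cluster ASCII configuration file; the hostname is hypothetical and the intervals are in microseconds:

    QS_HOST               qshost1      # system running the quorum server daemon
    QS_POLLING_INTERVAL   300000000    # how often nodes poll the quorum server (5 min)
    QS_TIMEOUT_EXTENSION  2000000      # optional extra time before giving up on it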

-Amit
If you are not a part of solution , then you are a part of problem
elnene
New Member

Re: Cluster 4 nodes (2 down)

And if the quorum node crashes, does the cluster keep running?
Chauhan Amit
Respected Contributor

Re: Cluster 4 nodes (2 down)

The quorum server is not part of the cluster. It has to be installed on a separate system, but it should be reachable from all the nodes.
If the quorum system crashes, it has no effect on the running cluster; but if some nodes fail, as in the case you mentioned, and the quorum server is also down, then all the nodes will TOC (Transfer Of Control, i.e. each node is forced to crash).
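
If you are building (or rebuilding) the cluster configuration to use a quorum server, something like the following should generate the ASCII file; the host names and output path are hypothetical:

    cmquerycl -v -C /etc/cmcluster/cluster.ascii -q qshost1 \
              -n hostA -n hostB -n hostC -n hostD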

-Amit
If you are not a part of solution , then you are a part of problem
Steven E. Protter
Exalted Contributor

Re: Cluster 4 nodes (2 down)

If the quorum server fails, the cluster will continue to run. If a problem occurs while the quorum server is down, though, the cluster may not fail over properly.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Serviceguard for Linux
Honored Contributor

Re: Cluster 4 nodes (2 down)

To be more specific:

Serviceguard uses the "current" membership count as the key in recovering from any failure. If the original cluster node count was 16 and 2 had failed, it will use 14 as the "current" count.

The other key point is that Serviceguard can reconfigure the cluster and recover from any failure that does not bring the "surviving" node count to 50% or less of the last "current" membership. So in the example above, as long as 8 of the 14 nodes survive, Serviceguard will reconfigure without needing to use a LockDisk or Quorum Service.

Once the "surviving" count is exactly 50% (7 out of 14 in this example), Serviceguard needs a quorum device. The Quorum Service can always be used; I'm pretty sure the LockDisk has a maximum node count.

For your 4-node cluster, a lock disk or quorum service can be used. If you were to lose 1 node at a time (going from 4 nodes to 3, to 2, to 1), the lock disk would only be necessary when going from 2 nodes to 1.

In your example of going from 4 nodes to 2, the lock disk (or quorum service) would be used.

One major benefit of the quorum service is that it can support up to 100 nodes across a number of clusters.