HPE StoreVirtual Storage / LeftHand

P4300 node failed in 3 node cluster - no quorum

SOLVED
Mike7
Occasional Contributor

P4300 node failed in 3 node cluster - no quorum

Hello - We have a cluster of 3 legacy P4300s running SAN/iQ 9.5.  One node failed, and now I can't do anything with the cluster because there is no quorum.  I tried to add a virtual manager, but I get the message "You can only add a virtual manager when doing so will not cause a loss of quorum".  Any attempt to remove the failed node results in "Could not send the command because there is no quorum...".  Any suggestions on what else to try?  How can I establish quorum if it won't let me add a virtual manager?  Is there a way to force the removal of the failed node, or to force the addition of a virtual manager?  Any help is appreciated.

3 REPLIES
peyrache
Trusted Contributor

Re: P4300 node failed in 3 node cluster - no quorum

Hello,

Try adding a FOM (Failover Manager).

JY

Bart_Heungens
Honored Contributor
Solution

Re: P4300 node failed in 3 node cluster - no quorum

You only need a FOM or Virtual Manager (for quorum) with 2 or 4 nodes, never for 3 nodes.


With 3 nodes you run a manager on each of the 3 nodes. If 1 node fails, you still have 2 managers running on the other 2 nodes, so you keep quorum.

So check the 2 remaining nodes and start the manager if it is not running...
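If CMC can't reach the group, you can also check and start managers from the CLIQ command line. A rough sketch, assuming the standard CLIQ parameter syntax (the IP addresses and credentials below are placeholders; verify the command names against the SAN/iQ 9.5 CLI reference for your version):

```shell
# List the management group state, including which managers are running
# (placeholder IP and credentials - substitute a surviving node's address)
cliq getGroupInfo login=10.0.1.11 userName=admin passWord=secret

# If the manager is stopped on a surviving node, start it on that node
cliq startManager login=10.0.1.12 userName=admin passWord=secret
```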

--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
My blog: http://blog.bitcon.be
Mike7
Occasional Contributor

Re: P4300 node failed in 3 node cluster - no quorum

Hello - Thank you for the responses; this is fixed now.  I thought the group should still have quorum with two nodes running, but it didn't.  Because of that, I was unable to connect to the remaining nodes through CMC to issue commands or see any status, so I had no idea what was going on.  What I ended up doing was going in through the CLI and running the recoverQuorum command, which reset the quorum to one so that I could at least connect.  It turned out the manager had also stopped on one of the other two nodes, so when the third node failed we lost quorum.  Now we're back to a quorum of two in this management group plus a virtual manager.  Good enough for now - these units are being retired, so I only need to keep them alive long enough to get the data moved off of them.  Thanks again for your help!
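For anyone hitting this later, the recoverQuorum step I described looked roughly like this from the CLIQ shell (the IP and credentials are placeholders, and the parameter syntax is the standard CLIQ form - check the SAN/iQ CLI reference before running it, since it forcibly resets the quorum and should only be pointed at a surviving node):

```shell
# Forcibly re-establish quorum on a surviving node so CMC can connect again
# (placeholder IP and credentials)
cliq recoverQuorum login=10.0.1.11 userName=admin passWord=secret
```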