P4300 node failed in 3 node cluster - no quorum
01-27-2017 07:15 AM
Hello - We have a cluster of three legacy P4300s running SAN/iQ 9.5. One failed, and now I can't do anything with the cluster because there is no quorum. I tried to add a virtual manager, but I get the message "You can only add a virtual manager when doing so will not cause a loss of quorum". Any attempt to remove the failed node results in "Could not send the command because there is no quorum...". Any suggestions on what else to try? How can I establish quorum if it won't let me add a virtual manager? Is there a way to force the removal of the failed node? Or to force the addition of a virtual manager? Any help is appreciated.
Solved!
01-30-2017 05:12 AM
Re: P4300 node failed in 3 node cluster - no quorum
Hello,
Try adding a FOM (Failover Manager).
JY
01-30-2017 08:53 AM
Solution: You only need a FOM or Virtual Manager (for quorum) with 2 or 4 nodes, never with 3 nodes.
With 3 nodes you run 3 managers, one on each node. If 1 node fails, you still have 2 managers running on the other 2 nodes, so you keep quorum.
So check the 2 remaining nodes and start the manager if it is not running...
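The majority rule behind this advice can be sketched in a few lines of Python (a hypothetical helper for illustration, not SAN/iQ code): a management group has quorum while a strict majority of its configured managers are running, which is why an odd manager count tolerates one failure but an even count needs a FOM or Virtual Manager as a tiebreaker.

```python
def has_quorum(configured_managers: int, running_managers: int) -> bool:
    """Quorum requires a strict majority of the configured managers."""
    return running_managers > configured_managers / 2

# 3-node group, 3 managers: one node fails, 2 of 3 remain -> quorum holds
print(has_quorum(3, 2))  # True
# 2-node group: one node fails, 1 of 2 remains -> no majority (hence the FOM/VM)
print(has_quorum(2, 1))  # False
```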
If my post was useful, click on my KUDOS! "White Star"!
01-30-2017 11:54 AM
Re: P4300 node failed in 3 node cluster - no quorum
Hello - Thank you for the responses; this is fixed now. I thought the cluster should still have quorum with two nodes running, but it didn't. Because of that, I was unable to connect to the remaining nodes through the CMC to issue commands or see any status, so I had no idea what was going on. What I ended up doing was going in through the CLI and using the recoverQuorum command. That reset the quorum to one so that I could at least connect. It turned out the manager had also stopped on one of the other two nodes, so when the third node failed we lost quorum. So now we're back to a quorum of two in this management group plus a virtual manager. Good enough for now - they're being retired, so I only need to keep them alive long enough to get the data moved off of them. Thanks again for your help!
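The failure sequence described above can be modeled as a short timeline sketch (Python for illustration; the manager counts mirror the posts, and recoverQuorum is simplified here to "reset the required quorum to one" - the actual CLI invocation and semantics depend on your SAN/iQ version):

```python
def majority(configured: int) -> int:
    """Smallest strict majority of the configured managers."""
    return configured // 2 + 1

configured = 3                  # 3-node group, one manager per node
required = majority(configured)  # 2 of 3 managers needed for quorum
running = 3

running -= 1  # the manager had silently stopped on one healthy node
running -= 1  # then the third node failed outright
print(running >= required)  # False: quorum lost, CMC cannot connect

required = 1  # CLI recoverQuorum (simplified): required quorum reset to one
print(running >= required)  # True: the group is reachable again
```

The takeaway from the thread: a 3-node group only survives a node failure if all three managers were actually running beforehand, so it is worth verifying manager status on every node before an outage, not after.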