Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > vsa managers and failover
08-08-2011 01:04 PM
vsa managers and failover
Hello,
I have a three-node VSA cluster with one VSA per physical box. When I take one of the physical boxes offline, I lose connection to the SAN. My cluster shows that I have 3 of 4 managers running: 3 managers and 1 virtual manager.
Is there something else I need to do to make this redundant?
Thanks,
Dan.
08-08-2011 01:30 PM
Re: vsa managers and failover
Three VSAs (managers) are redundant. You don't need a virtual manager. A 4-manager configuration is not recommended; you should have either 3 or 5 managers in a normal setup. That said, you still should not lose connectivity to the cluster. What errors are you seeing?
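To see why 3 or 5 managers is recommended and 4 is not, you can work out the quorum math. This is a minimal sketch, assuming the usual strict-majority quorum model (quorum = floor(n/2) + 1); it is not CMC output, just arithmetic:

```python
# Quorum for n managers is a strict majority of all managers.
# An even count adds a manager without adding fault tolerance.
def quorum(n):
    return n // 2 + 1

def failures_tolerated(n):
    return n - quorum(n)

for n in (3, 4, 5):
    print(f"{n} managers: quorum={quorum(n)}, "
          f"tolerates {failures_tolerated(n)} failure(s)")
```

Note that 4 managers still tolerates only 1 failure, the same as 3, so the fourth manager buys nothing and just adds a vote you can lose.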
08-08-2011 02:59 PM
Re: vsa managers and failover
Looks like what happened is:
The quorum communication status for manager 'DC1-VSA2' in management group 'DC1-VSA-1' is 'Down'. Manager 'DC1-VSA2' cannot communicate with managers 'DC1-VSA1, Virtual Manager'.
We took down the DC1-VSA1 host, and the DC1-VSA2 host could not communicate with DC1-VSA1 or the virtual manager. So I have deleted the virtual manager and will try again.
08-09-2011 11:46 AM
Re: vsa managers and failover
To me it sounds like you had already started the virtual manager, and it was running on 'DC1-VSA1'. When 'DC1-VSA1' went down, so did the virtual manager. You have to be careful with the virtual manager: it is only useful when you have an even number of nodes, specifically 2 nodes, and one of them goes down. In that case you can start the virtual manager on the surviving node to regain quorum until your physical node comes back up. In a two-node environment you can add the virtual manager to the management group, but do NOT start it.
08-09-2011 12:35 PM
Re: vsa managers and failover
You definitely want to remove the virtual manager from the management group. If it and a node are both down, you have lost quorum. With three nodes, each should run a manager, and as long as you don't lose two at once you will keep quorum and all will be fine. Of course, if you have any volumes set as RAID 0, they will go down if you lose a node.
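The quorum arithmetic behind this thread's failure can be sketched in a few lines. This is a hedged illustration under the assumption that a started virtual manager counts as a full voting manager (the host and manager names follow the thread, but the function is hypothetical, not a real CLI):

```python
# Quorum needs a strict majority of ALL managers in the
# management group, including a started virtual manager.
def has_quorum(total_managers, managers_up):
    return managers_up >= total_managers // 2 + 1

# With the virtual manager started: 3 real managers + 1 virtual = 4.
# Taking DC1-VSA1 down removes its manager AND the virtual manager
# running on it, leaving 2 of 4 votes -> quorum lost, SAN offline.
print(has_quorum(4, 2))   # False

# After deleting the virtual manager: 3 managers total.
# Losing one node leaves 2 of 3 votes -> quorum kept, cluster up.
print(has_quorum(3, 2))   # True
```

This matches what Dan saw: deleting the virtual manager turns the 4-vote group back into a plain 3-manager group that survives a single node failure.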