01-18-2010 05:45 AM
VSA evaluation, handling defective manager in 2-node-configuration
I'm evaluating the LeftHand VSA for our virtual infrastructure using the laptop demo. At the moment I'm trying to simulate a few possible failure scenarios, and I have reached a point where I cannot re-establish a functional cluster.
The situation: I have two storage nodes in a management group with a single cluster. One node runs a manager and the virtual manager; the other runs a manager. The one with the manager now fails — not temporarily; we're simulating a destroyed machine. The first node keeps everything available, so no problem so far. But when I add a new node as a replacement for the defective one, I can't bring the management group back to the state it was in before the failure. It still tries to connect to the offline manager on the defective machine. I can't remove the offline machine, I can't delete the virtual manager, and I also can't start the manager on the new node.
How would I handle such a situation in a production scenario?
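For context on why the group gets stuck: SAN/iQ quorum requires a strict majority of the configured managers to be running. A small sketch of that arithmetic (a hypothetical helper written for illustration, not part of the product) shows why losing one manager out of two is fatal until a tie-breaker vote is added:

```python
def has_quorum(configured_managers: int, running_managers: int) -> bool:
    """A management group keeps quorum only while a strict majority
    of its configured managers is still running."""
    return running_managers > configured_managers // 2

# Two-node group, one regular manager per node:
print(has_quorum(2, 2))  # True  - both managers up
print(has_quorum(2, 1))  # False - one node destroyed, quorum lost
# Starting the virtual manager adds a third "vote":
print(has_quorum(3, 2))  # True  - 2 of 3 managers running
```

This is why an even number of regular managers is fragile: with two configured managers, one survivor is not a majority.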
01-18-2010 06:51 PM
Re: VSA evaluation, handling defective manager in 2-node-configuration
If that's the case, you would need to contact support. They can get a session into the SAN/iQ software (via SSH over a remote session) and remove the "orphaned" node from the management group.
It's not a particularly complex process, but it does require you to involve the product support team.
01-18-2010 11:56 PM
Re: VSA evaluation, handling defective manager in 2-node-configuration
I also tested the recommended scenario for two nodes with a Failover Manager. In that case I was able to add a new node to the group and cluster, then remove the old ghost node from the cluster and from the management group (there is a warning that there could be problems removing a node from the group while it is unavailable). I was also able to start a manager on the new node, so the group is now back to its initial state.
01-20-2010 11:46 PM
Re: VSA evaluation, handling defective manager in 2-node-configuration
The virtual manager should be added to the management group as a "spare", but in normal operation it should not be running. If one node fails, you can regain quorum by starting the virtual manager on the remaining node.
It is better to use a FOM (Failover Manager) if possible. If one node fails you still retain quorum, without servers crashing.
You might also want to use the Exchange Node function when removing the "ghost" node, instead of remove/add.
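The practical difference between the two layouts can be sketched as follows (an assumed model for illustration only, not product code): with a FOM the group already has three running managers, so one node failure still leaves a majority, while the virtual-manager layout needs a manual step to regain quorum.

```python
# Hedged sketch: compare the two recommended 2-node layouts
# when one storage node is destroyed.

def majority(configured: int) -> int:
    """Smallest number of running managers that forms a majority."""
    return configured // 2 + 1

# Layout A: 2 regular managers + a virtual manager (normally stopped).
running_a, configured_a = 1, 2               # only the survivor's manager runs
print(running_a >= majority(configured_a))   # False - manual step needed
running_a += 1                               # admin starts the virtual manager
configured_a += 1                            # it now counts as a third manager
print(running_a >= majority(configured_a))   # True  - quorum regained

# Layout B: 2 regular managers + a FOM (always running).
running_b, configured_b = 2, 3               # FOM survives the node loss
print(running_b >= majority(configured_b))   # True  - no intervention needed
```

In both layouts quorum is ultimately restored; the FOM simply removes the manual intervention (and the outage window) from the recovery.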
01-20-2010 11:50 PM