VSA evaluation, handling defective manager in 2-node-configuration

Dirk Trilsbeek
Valued Contributor

VSA evaluation, handling defective manager in 2-node-configuration

Hello everybody,

I'm evaluating the LeftHand VSA for our virtual infrastructure using the laptop demo. At the moment I'm trying to simulate a few possible failure scenarios, and I have reached a point where I cannot re-establish a functional cluster.

Here is the situation: I have two storage nodes in a management group and a single cluster. One node runs a manager and the virtual manager, the other runs just a manager. The node running just the manager now fails, not temporarily; we're simulating a destroyed machine. The first node keeps everything available, so no problem so far. But when I add a new node as a replacement for the defective one, I can't bring the management group back to the state it was in before the failure. It still wants to connect to the offline manager on the defective machine. I can't remove the offline machine, I can't delete the virtual manager, and I also can't start a manager on the new node.

How would I handle such a situation in a production scenario?
4 REPLIES
teledata
Respected Contributor

Re: VSA evaluation, handling defective manager in 2-node-configuration

I'm assuming you are referring to a production scenario where you lost a node to a total failure (corrupted controller, motherboard, etc.) and then got a replacement unit sent to you?

If that's the case, you would need to contact support. They can session into the SAN/iQ software (via SSH over a remote session), and they have to remove the "orphaned" node from the management group.

It's not a particularly complex process, but it does require you to involve the product support team.
http://www.tdonline.com
Dirk Trilsbeek
Valued Contributor

Re: VSA evaluation, handling defective manager in 2-node-configuration

Seems that you're right. I tried adding a failover manager after stopping one of the nodes, but that isn't allowed when a virtual manager is already running.

I also tested the recommended scenario for two nodes with a failover manager. In that case I was able to add a new node to the group and cluster, then remove the old ghost node from the cluster and from the management group (there is a warning that removing a node from the group while it is unavailable can cause problems). I was also able to start a manager on the new node, so the group is now back in its initial state.
Olvi_1
Frequent Advisor

Re: VSA evaluation, handling defective manager in 2-node-configuration

Hi!

The virtual manager should be added to the management group as a "spare", but in a normal situation it should not be running. If one node fails, you can regain quorum by starting the virtual manager on the remaining node.

It is better to use a Failover Manager (FOM) if possible. If one node fails you still retain quorum, without servers crashing.
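The reason a virtual manager or FOM helps here is plain majority-vote arithmetic: the management group only has quorum while a strict majority of its configured managers is reachable. A toy sketch (not SAN/iQ code, just the counting rule) shows why two managers alone cannot survive a node loss, while a third vote can:

```python
# Toy illustration of majority quorum, the rule behind SAN/iQ managers.
# This is NOT SAN/iQ code; it only demonstrates the vote counting.

def has_quorum(managers_total: int, managers_alive: int) -> bool:
    """Quorum requires a strict majority of all configured managers."""
    return managers_alive > managers_total // 2

# Two managers, one node lost: 1 of 2 is not a majority -> no quorum.
print(has_quorum(2, 1))  # False

# Add a third vote (virtual manager or FOM on a third machine):
# losing one storage node still leaves a 2-of-3 majority.
print(has_quorum(3, 2))  # True
```

This is why the FOM (running permanently on a third machine) avoids the outage entirely, whereas the virtual manager only restores quorum after you manually start it on the surviving node.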

Also, you might want to use the Exchange Node function when removing the "ghost node" instead of remove/add.
Dirk Trilsbeek
Valued Contributor

Re: VSA evaluation, handling defective manager in 2-node-configuration

Yes, I already read about the "Replace Node" feature yesterday and am going to test that too.