StoreVirtual Storage

HP SV managers behaviour?

 
Dennisvw
Occasional Contributor


Hi,

We have a StoreVirtual 4530 management group with 6 nodes in one cluster and a FOM.
The cluster is evenly balanced between two datacenters.

I'm curious what will happen in case of a datacenter failure, so I started reading some articles about managers, virtual managers, and FOMs, but they don't give me all the answers.

So let's say DC1 fails, which has 2 managers running.
DC2 is still online with 3 managers running.
Question: are all volumes still running?

What if DC2 fails, which has 3 managers?
DC1 is still running with 2 managers.
Questions:
1. Are all volumes still running?
2. Will one of the nodes become a manager automatically, or do I need to make one node a manager manually?

It seems like a simple question, but I can't find the answer anywhere.

Kind regards,

Dennis

 

 

Torsten.
Acclaimed Contributor

Re: HP SV managers behaviour?

There are 2 related chapters in the guide:

8 Working with management groups

9 Working with managers and quorum

 

The basic problem for each cluster is preventing a split brain, so in your situation you would run, for example, 2 managers in DC1, 2 in DC2, and finally 1 in a third location.

In a split-brain situation, or when a DC dies, both sides (or the surviving DC) will try to reach that last manager in the third location (or you start a virtual manager). 3 of the 5 managers form the quorum, and that side stays online.
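The rule behind this is a strict majority: more than half of the configured managers must still be able to communicate. A minimal sketch of that arithmetic (not HPE code, just an illustration of the majority rule):

```python
# Majority quorum: volumes stay online only while more than half
# of the configured managers in the management group survive.

def has_quorum(total_managers: int, surviving_managers: int) -> bool:
    """Strict majority: survivors must exceed half of the configured managers."""
    return surviving_managers > total_managers / 2

# 5 managers total (2 in DC1, 2 in DC2, 1 in a third location):
print(has_quorum(5, 3))  # one DC with 2 managers fails -> True, quorum holds
print(has_quorum(5, 2))  # 3 managers lost -> False, quorum lost
```

This is also why an even number of managers is discouraged: with 4 managers, a clean 2/2 split leaves neither side with a majority.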

 

Read the chapters in the guide for detailed information.


Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Dennisvw
Occasional Contributor

Re: HP SV managers behaviour?

Thanks for the reply.

I understand the quorum requirements; let me make the scenario a bit more dramatic.
DC1 has 3 nodes (2 of them managers), DC2 has 3 nodes (2 of them managers), and the FOM is located in DC2 on separate hardware.

DC2 goes down (all cooling systems fail, etc.).
DC1 now has 3 nodes but only 2 managers.
Will the 3rd node become a manager automatically, to prevent the storage system from locking up?

Kind regards,

Dennis

 

Torsten.
Acclaimed Contributor

Re: HP SV managers behaviour?

The guide says

" Install the Failover Manager on network hardware other than the storage systems
in the SAN. This ensures that the Failover Manager is available for failover and quorum operations
if a storage system in the SAN becomes unavailable."

This assumes the FOM is still available when one site goes down.

In your scenario the DC2 managers and the FOM are down at the same time, so the 2 surviving managers (out of 5) cannot form a majority and nobody can get quorum -> everything is down.
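To make the count explicit, here is a small sketch of Dennis's layout (node names and the per-site mapping are made up for illustration; this is not output from any HPE tool):

```python
# Hypothetical layout: 2 managers per DC, FOM placed in DC2.
managers = {
    "DC1-node1": "DC1", "DC1-node2": "DC1",  # 2 managers in DC1
    "DC2-node1": "DC2", "DC2-node2": "DC2",  # 2 managers in DC2
    "FOM":       "DC2",                      # FOM also in DC2
}

def survivors_after(failed_dc: str) -> int:
    """Count managers that remain when one datacenter fails."""
    return sum(1 for site in managers.values() if site != failed_dc)

total = len(managers)
for dc in ("DC1", "DC2"):
    alive = survivors_after(dc)
    print(f"{dc} fails: {alive} of {total} managers left, "
          f"quorum: {alive > total / 2}")
```

With the FOM inside DC2, a DC2 failure takes 3 of the 5 managers with it, so the 2 survivors in DC1 can never reach a majority on their own.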

Hence the FOM should be in a third location.

In your case manual intervention is needed.

This is based on my understanding of the situation.

 


Hope this helps!
Regards
Torsten.
