Array Setup and Networking

Nimble Group behavior

 
CadenLange
Regular Advisor


Hi

I'm trying to find out a couple of things when configuring Nimble array groups - perhaps someone can answer for me.

I have two sites and each has 4 x AF40 arrays. Each site will be configured with all 4 arrays in a group, with host presentation using the FC protocol and the management network used for replication to the group at the second site. I have 4 x mgmt interfaces per array (2 on the active controller and 2 on the passive controller), so this gives me a total of 16 x management NICs per array group. When replicating data to the second site, are all mgmt NICs used to carry replication traffic, giving maximum bandwidth?

Also, what is the behavior should the group leader (or any other array) go offline completely? Does the entire group fail, or do the remaining 3 arrays continue to present volumes?

Does the mgmt IP address only exist on the group leader array, or can it float across to other arrays in the group under certain failure scenarios?

many thanks

Caden

Thomas_Lam_HK
HPE Pro

Re: Nimble Group behavior

Are you configuring a multi-array pool (volumes striped across arrays) or single-array pools (4 separate pools in a group)?

Please note that Nimble arrays operate in Active/Standby mode: although you have a total of 4 management ports per array (2 per controller), only one NIC on the active controller can serve async replication traffic.

In "Repication Partner" configuration;, if using "Use management or controller IPs for replication traffic", controller serving data access on particular pool will use either managment IP (group leader) or diagnostic IP to handle replication traffic.

Our NIC failover mechanism on the management network is also implemented as Active/Standby, which means that even though you have 2 NICs assigned to the management network, the management IP or diagnostic IP binds to just one interface; the other remains in standby mode, ready to take over if the active NIC port fails (hardware fault or unplugged cable).
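To illustrate that Active/Standby binding, here is a minimal conceptual sketch (not NimbleOS code); the port names, IP address, and link-state values are hypothetical and exist only to show a single IP riding one interface and moving to the standby port on failure.

```python
# Conceptual sketch of Active/Standby failover for one IP on two mgmt ports.
# NOT NimbleOS code: port names, IP, and link states are hypothetical.

def select_carrier(ip: str, nics: dict[str, bool]) -> str:
    """Pick the port that carries the IP: the first NIC whose link is up.

    `nics` maps port name -> link state (False = hw fault / unplugged cable).
    Only one port carries the IP at any time; the other stays in standby.
    """
    for name, link_up in nics.items():
        if link_up:
            return name
    raise RuntimeError(f"no healthy management port available for {ip}")

# Normal case: the IP binds to the first port, the second is standby.
print(select_carrier("10.0.0.10", {"eth1": True, "eth2": True}))   # eth1

# Failover: the active port loses link, the standby takes over the same IP.
print(select_carrier("10.0.0.10", {"eth1": False, "eth2": True}))  # eth2
```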

 

That said, if you are running a multi-array pool with volumes striped across arrays, replication throughput will be aggregated and carried by the NIC port bound to either the management IP or the diagnostic IP on each array's active controller. **If it is a 4-node striped pool, a total of 4 active NIC ports will aggregate the replication throughput.**
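As a rough back-of-the-envelope view of that aggregation, the sketch below assumes a hypothetical 10 Gb/s link speed per active management port (an assumption for the example, not an AF40 specification) and multiplies it by the number of arrays in the striped pool, since each array contributes only its one active port.

```python
# Rough illustration of the replication bandwidth ceiling in a striped pool.
# The 10 Gb/s per-port figure is an assumed link speed for this example only.

PER_ACTIVE_NIC_GBPS = 10  # assumed speed of one active mgmt port

def replication_ceiling_gbps(arrays_in_pool: int) -> int:
    """Upper bound on replication throughput: one active NIC per array."""
    return arrays_in_pool * PER_ACTIVE_NIC_GBPS

print(replication_ceiling_gbps(4))  # 4-array striped pool -> 40
print(replication_ceiling_gbps(1))  # single-array pool    -> 10
```

Realized throughput will also depend on the inter-site link and change rate, but the point is that each array in the striped pool contributes only its single active port, not all four of its management NICs.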

 

If you are instead using single-array pools within the group, replication traffic for a pool is carried only on the NIC port bound to either the management IP or the diagnostic IP on that particular array.

 

If you are using single-array pools, then once host connectivity is established, an offline group leader will not impact existing data access on the surviving pools.

 

Group Leader failover is implemented only in a Peer Persistence configuration, via Automatic Switch Over (ASO), which is restricted to a maximum of 2 arrays per group for now. In your case, with 4 arrays in a group, if the Group Leader cannot be recovered for some reason, you may need to reach out to Support, who can manually move the GL role to the Backup Leader to resume normal management operations.

 

Thomas Lam - Global Storage Field CTO

 

 

 


