07-22-2020 03:58 PM
Nimble Group behavior
Hi,
I'm trying to find out a couple of things about configuring Nimble array groups; perhaps someone can answer them for me.
I have two sites, and each has 4 x AF40 arrays. Each site will be configured with all 4 arrays in a group, with host presentation over FC and the management network used for replication to the group at the second site. I have 4 x management interfaces per array (2 on the active controller and 2 on the standby controller), which gives me a total of 16 management NICs per array group. When replicating data to the second site, are all management NICs used to carry replication traffic, giving maximum bandwidth?
Also, what is the behavior if the group leader (or any other array) goes offline completely? Does the entire group fail, or do the remaining 3 arrays somehow continue to present volumes?
Does the management IP address exist only on the group leader array, or can it float across to other arrays in the group under certain failure scenarios?
Many thanks,
Caden
07-24-2020 07:55 AM - edited 07-24-2020 07:56 AM
Re: Nimble Group behavior
Are you configuring a multi-array pool (volumes striped across arrays) or single-array pools (4 separate pools in the group)?
Please note that a Nimble array operates in Active/Standby mode: although you have a total of 4 management ports per array (2 per controller), only one NIC on the active controller can carry async replication traffic.
In the "Replication Partner" configuration, if "Use management or controller IPs for replication traffic" is selected, the controller serving data access for a particular pool will use either the management IP (on the group leader) or the diagnostic IP to handle replication traffic.
Our NIC failover mechanism on the management network is also implemented as Active/Standby. That means that even with 2 NICs assigned to the management network, the management IP or diagnostic IP binds to a single interface; the remaining NIC sits in standby mode, ready to take over if the active NIC port fails (hardware fault or cable unplugged).
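To make the Active/Standby behavior concrete, here is a minimal toy model of it in Python. This is purely illustrative: the `FailoverPair` class, port names, and IP are hypothetical and not any Nimble API — the point is only that one IP binds to one port at a time, and the second port exists solely for takeover.

```python
class FailoverPair:
    """Toy model of two management NICs in Active/Standby: the management
    (or diagnostic) IP binds to exactly one port; the other is idle."""

    def __init__(self, active_port: str, standby_port: str, ip: str):
        self.active_port = active_port
        self.standby_port = standby_port
        self.ip = ip  # bound to the active port only

    def port_carrying_ip(self) -> str:
        # Only the active port ever carries the IP / replication traffic.
        return self.active_port

    def fail_active(self) -> None:
        # On hardware fault or cable unplug, the standby port takes over
        # the same IP; there is never a second concurrent active port.
        self.active_port, self.standby_port = self.standby_port, self.active_port


pair = FailoverPair("eth1", "eth2", "10.0.0.10")
print(pair.port_carrying_ip())  # eth1 carries the IP; eth2 is standby
pair.fail_active()
print(pair.port_carrying_ip())  # eth2 now carries the same IP
```

Either way, the bandwidth of the second NIC is never added to the first; it only provides availability.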
That said, if you are running a multi-array pool with volumes striped across arrays, replication throughput is aggregated across the arrays: each array's active controller contributes the one NIC port bound with its management IP or diagnostic IP. **With a 4-node striped pool, a total of 4 active NIC ports aggregate the replication throughput.**
If instead you are using single-array pools within the group, replication traffic for a given pool is carried only on the NIC port bound with the management IP or diagnostic IP on that particular array.
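A quick back-of-the-envelope sketch of the two cases above: per array, only one active-controller NIC carries replication, so aggregate replication bandwidth scales with the number of arrays in a striped pool, not with the total NIC count (16 in the original question). The 1 GbE port speed here is an assumption for illustration only.

```python
PORT_SPEED_GBPS = 1.0   # assumed management NIC speed (illustrative)
ARRAYS_IN_GROUP = 4     # per the question: 4 x AF40 per site
ACTIVE_NICS_PER_ARRAY = 1  # only one NIC per active controller replicates

# Multi-array (striped) pool: one active NIC per array aggregates.
striped_pool_bw = ARRAYS_IN_GROUP * ACTIVE_NICS_PER_ARRAY * PORT_SPEED_GBPS

# Single-array pool: only that one array's active NIC carries the traffic.
single_pool_bw = ACTIVE_NICS_PER_ARRAY * PORT_SPEED_GBPS

print(striped_pool_bw)  # 4.0 Gbps aggregate for the 4-array striped pool
print(single_pool_bw)   # 1.0 Gbps for a single-array pool
```

The standby-controller and standby NIC ports never add bandwidth in either case.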
If you are using single-array pools, then once host connectivity is established, an offline group leader will not impact existing data access on the surviving pools.
Group Leader failover is implemented only in a Peer Persistence configuration via Automatic Switch Over (ASO), which is currently restricted to a maximum of 2 arrays per group. In your case, with 4 arrays in a group, if the Group Leader cannot be recovered for some reason, you may need to contact Support, who can manually seize the GL role onto the Backup Leader to resume normal management operations.
Thomas Lam - Global Storage Field CTO
I work for HPE