01-08-2022 12:14 AM - last edited on 01-12-2022 05:58 AM by support_s
recommended way of cabling for DHCI fault tolerance and performance
I am trying to figure out the best way to cable a dHCI setup.
According to the diagram on page 6 of the "HPE Nimble Storage dHCI and VMware vSphere 6.7u Deployment Guide" (HPE ProLiant DL3x0), the "VM network management + vMotion" ports are assigned to one card, and iSCSI1 and iSCSI2 are connected to another card. The cables are cross-connected to each of the switches.
However, in the "HPE Nimble Storage dHCI Solution Network Considerations Guide", beginning on page 9 (HPE ProLiant DL compute nodes), MGMT and iSCSI1 appear to be assigned to one card, and MGMT and iSCSI2 appear to be assigned to another card. The cables are not cross-connected; each connects directly to its respective switch.
What is the supported and correct way of cabling?
The current setup is two sites with two servers at each site. Each site has two M-series switches and an NS HF40 array.
01-10-2022 03:05 AM
Re: recommended way of cabling for DHCI fault tolerance and performance
I don't know how to solve this problem, but I had a similar one and using Firmao helped me.
01-10-2022 10:52 AM
Re: recommended way of cabling for DHCI fault tolerance and performance
It depends on your NimbleOS version.
During HPE Storage dHCI deployment on an array running release 6.0.0.0 or later, the deployment tool uses ports 1 and 3 for Management. It uses ports 2 and 4 for iSCSI 1 and iSCSI 2. This is for increased NIC resiliency.
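To make that port split concrete, here is a small Python sketch of the mapping described above. Treat it as an illustration only: which port carries iSCSI 1 versus iSCSI 2, and how the ports are spread across the two NIC cards, are assumptions of mine, not something taken from the deployment guide.

```python
# Illustrative sketch only -- not an official HPE layout. It captures the
# port-to-role mapping described above for NimbleOS 6.0.0.0+ deployments
# (ports 1 and 3 for Management, ports 2 and 4 for iSCSI 1 and iSCSI 2).
# Which port carries iSCSI 1 vs iSCSI 2, and which ports sit on which NIC
# card, are assumptions here; verify against the deployment guide.
PORT_ROLES = {
    1: "Management",
    2: "iSCSI 1",      # assumption: port 2 -> iSCSI 1
    3: "Management",
    4: "iSCSI 2",      # assumption: port 4 -> iSCSI 2
}

# Assumption: ports 1-2 sit on NIC card A and ports 3-4 on NIC card B.
PORT_TO_CARD = {1: "card A", 2: "card A", 3: "card B", 4: "card B"}

# With this split, Management and the pair of iSCSI paths each land on
# both NIC cards, which is the "increased NIC resiliency" point above.
mgmt_cards = {PORT_TO_CARD[p] for p, r in PORT_ROLES.items() if r == "Management"}
iscsi_cards = {PORT_TO_CARD[p] for p, r in PORT_ROLES.items() if r.startswith("iSCSI")}
print("Management ports span:", sorted(mgmt_cards))          # ['card A', 'card B']
print("iSCSI 1 + iSCSI 2 ports span:", sorted(iscsi_cards))  # ['card A', 'card B']
```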
01-10-2022 01:12 PM
Re: recommended way of cabling for DHCI fault tolerance and performance
Thank you for that info!
Do you know, when configuring MLAGs, whether there are any other MLAGs I should create toward the hypervisor hosts or the Nimble Storage array, other than the interconnect between the switches?
The deployment tool doesn't configure/automate anything on the network switches, right? All MLAG and VLAN port assignments need to be set up beforehand, correct?
01-10-2022 03:36 PM
Re: recommended way of cabling for DHCI fault tolerance and performance
Please refer to this document: https://infosight.hpe.com/InfoSight/media/cms/active/public/HPE_Nimble_Storage_dHCI_and_VMware_vSphere_Deployment_Guide_-_Greenfield_Alletra_Deployment.pdf
As for MLAGs toward the hosts or array: no, you do not need to create any, and you should not.
The deployment tool CAN configure/automate the network switches if you are using Aruba 8325 or 8360 switches, but it doesn't do all of it.
01-10-2022 05:32 PM
Re: recommended way of cabling for DHCI fault tolerance and performance
Thank you!
I have two sites. Each site has its own dHCI setup with two hypervisors, two M-series switches, and a Nimble Storage array. When connecting the two sites together, how should the cabling be connected between all four switches?
01-10-2022 07:10 PM
Re: recommended way of cabling for DHCI fault tolerance and performance
According to the HPE Nimble Storage dHCI Solution Network Considerations Guide, page 12, Design 5 - Configuration with peer persistence, each switch at site A must have ISLs to its corresponding switch at site B. What exactly is the configuration of these ISLs between the two sites? MLAG? I thought you could only have two switches in an MLAG configuration. Can these ISLs carry other regular data traffic?
01-10-2022 11:42 PM
Re: recommended way of cabling for DHCI fault tolerance and performance
I have four single-mode fibre connections available between the two sites. How should they be connected, and with what protocol? Thank you!
03-29-2023 10:43 AM - last edited on 04-11-2023 03:16 AM by Sunitha_Mod
Re: recommended way of cabling for DHCI fault tolerance and performance
Hi
I have the same situation (two sites, each with 1 x Alletra 5030 + 2 x switches + 3 x servers), and I cannot find the document specified here.
Is it available at a different link?
Thank you
Costin
04-11-2023 08:58 AM
Re: recommended way of cabling for DHCI fault tolerance and performance
The current deployment guide for New Installations can be found here:
04-11-2023 10:34 AM
Re: recommended way of cabling for DHCI fault tolerance and performance
Regarding Peer Persistence between two dHCI sites: right, there must be ISLs between the two sites, as outlined in the HPE Nimble Storage dHCI Solution Network Considerations Guide. Those ISLs carry data and management traffic between the two dHCI clusters and must be in the same layer 2 subnet.
As stated in the Inter-switch links section on page 5 of that document, HPE recommends "you include at least two ISLs per switch pair at the highest possible speed to create redundancy and minimize latency". Your four single-mode connections meet this recommendation.
Best practice is to aggregate these ISL interfaces into a multi-chassis link aggregation group (MLAG) port-channel running the Link Aggregation Control Protocol (LACP). Note that this MLAG port-channel between the two sites is not the same as the MLAG-IPL between the two switches within each dHCI cluster: the MLAG-IPL makes the switch pair at each cluster look like one logical switch, whereas the MLAG port-channel bundles the ISLs, combining multiple physical interfaces into a single logical connection.
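To make the "at least two ISLs per switch pair" point concrete, here is a rough Python sketch of one way the four single-mode fibre links mentioned earlier in the thread could be laid out, with each pair of links bundled into one LACP port-channel per switch pair. The switch names and the exact assignment of links to switch pairs are assumptions for illustration, not something taken from the guide.

```python
from collections import Counter

# Hypothetical layout, not from the guide: distribute the four single-mode
# fibre links so that each site-A switch gets at least two LACP-bundled
# ISLs to its corresponding site-B switch. Switch names and the link
# assignment are assumptions made for this example.
ISL_LINKS = [
    ("siteA-sw1", "siteB-sw1"),
    ("siteA-sw1", "siteB-sw1"),
    ("siteA-sw2", "siteB-sw2"),
    ("siteA-sw2", "siteB-sw2"),
]

def check_isl_redundancy(links, minimum=2):
    """Report how many ISLs each inter-site switch pair has and whether it
    meets the 'at least two ISLs per switch pair' recommendation."""
    for pair, count in Counter(links).items():
        status = "OK" if count >= minimum else "TOO FEW"
        print(f"{pair[0]} <-> {pair[1]}: {count} ISL(s) [{status}]")

check_isl_redundancy(ISL_LINKS)
```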