HPE Nimble Storage Solution Specialists

recommended way of cabling for DHCI fault tolerance and performance

 
jchen522
Advisor

recommended way of cabling for DHCI fault tolerance and performance

I am trying to figure out the best way to cable a dHCI setup.

According to the diagram in the "HPE Nimble Storage dHCI and VMware vSphere 6.7u Deployment Guide", page 6 (HPE ProLiant DL3x0), the "VM network management + vMotion" ports are assigned to one card, and iSCSI1 and iSCSI2 are connected to another card.  The cables are cross-connected to each of the switches.

However, in "HPE Nimble Storage DHCI Solution Network Considerations Guide", beginning on page 9, the HPE ProLiant DL compute nodes, the MGMT and iSCSI1 appears to be assigned to one card, and MGMT and iSCSI2 appears to be assigned to another card.  The cables are not cross connected and connect directly to each its perspective switch.  

What is the supported and correct way of cabling?

The current setup is two sites with two servers at each site. Each site has two M-series switches and a Nimble Storage HF40 array.

7 REPLIES
MagdalenaS1
Occasional Visitor

Re: recommended way of cabling for DHCI fault tolerance and performance

I don't know how to solve this problem, but I had a similar one and using Firmao helped me

mamatadesaiNim
HPE Blogger

Re: recommended way of cabling for DHCI fault tolerance and performance

It depends on your NimbleOS version.  

During an HPE Storage dHCI deployment on an array running release 6.0.0.0 or later, the deployment tool uses ports 1 and 3 for management and ports 2 and 4 for iSCSI 1 and iSCSI 2.  This is for increased NIC resiliency.
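
To picture how that layout spreads roles across the two NICs and the two switches, here is a minimal sketch in Python (illustration only; the assumption that ports 1-2 sit on NIC 1 cabled to switch A and ports 3-4 on NIC 2 cabled to switch B is one reading of this thread and the considerations guide, not an official mapping):

```python
# Illustrative only: one possible reading of the NimbleOS 6.0.0.0+ dHCI port layout.
# Assumption: ports 1-2 are on NIC 1 cabled to switch A, ports 3-4 are on NIC 2
# cabled to switch B (no cross-connect), as described in the considerations guide.

port_plan = {
    1: {"nic": 1, "role": "Management", "switch": "SW-A"},
    2: {"nic": 1, "role": "iSCSI 1",    "switch": "SW-A"},
    3: {"nic": 2, "role": "Management", "switch": "SW-B"},
    4: {"nic": 2, "role": "iSCSI 2",    "switch": "SW-B"},
}

def roles_surviving(failed_component: str) -> list:
    """Return the roles that still have at least one live port after a failure."""
    return sorted({
        p["role"]
        for p in port_plan.values()
        if failed_component not in (f"NIC {p['nic']}", p["switch"])
    })

if __name__ == "__main__":
    # Losing a single NIC or a single switch still leaves management
    # plus one iSCSI path up, which is the resiliency point above.
    print("NIC 1 fails:", roles_surviving("NIC 1"))
    print("SW-B fails: ", roles_surviving("SW-B"))
```
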

HPE Nimble Storage
jchen522
Advisor

Re: recommended way of cabling for DHCI fault tolerance and performance

Thank you for that info!

When configuring MLAGs, other than the interconnect between the switches, are there any other MLAGs that I should create toward the hypervisor hosts or the Nimble Storage array?

The deployment tool doesn't configure/automate anything on the network switches, right?  All MLAG and VLAN port assignments need to be set beforehand, correct?

mamatadesaiNim
HPE Blogger

Re: recommended way of cabling for DHCI fault tolerance and performance

Please refer to this doc: https://infosight.hpe.com/InfoSight/media/cms/active/public/HPE_Nimble_Storage_dHCI_and_VMware_vSphere_Deployment_Guide_-_Greenfield_Alletra_Deployment.pdf
As for MLAGs toward the hosts or the array: no, you do not need to, and should not, create any.
The deployment tool CAN configure/automate the network switches if you are using Aruba 8325 or 8360 switches, but it doesn't do "all" of it.
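
For what it's worth, here is a rough, unofficial way of noting the split between what you prepare on the switches yourself and what the tool may drive, captured as Python data (the item names are my own shorthand, not tool or switch keywords; the linked guide is the authoritative reference):

```python
# Rough, unofficial pre-deployment sketch based on this thread. Item names are
# shorthand only; consult the deployment guide linked above for the real steps.

prepared_manually = [
    "Inter-switch link / MLAG peering between the two switches",
    "Management and iSCSI VLANs assigned to host and array ports",
    "No LAG/MLAG toward the ESXi hosts or the Nimble array",
]

automated_on_aruba_cx = [
    # Per the reply above, switch automation applies to Aruba 8325/8360 only,
    # and even then it does not cover everything.
    "Portions of the switch configuration driven by the deployment tool",
]

for item in prepared_manually:
    print("[manual]   ", item)
for item in automated_on_aruba_cx:
    print("[automated]", item)
```
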

HPE Nimble Storage
jchen522
Advisor

Re: recommended way of cabling for DHCI fault tolerance and performance

Thank you!

I have two sites.  Each site has its own dHCI setup with two hypervisors, two M-series switches, and a Nimble Storage array.  When connecting the two sites together, how should the cabling be run between all four switches?

jchen522
Advisor

Re: recommended way of cabling for DHCI fault tolerance and performance

According to the HPE Nimble Storage dHCI Solution Network Considerations Guide, page 12, Design 5 - Configuration with peer persistence, each switch at site A must have ISLs to its corresponding switch at site B.  What configuration exactly are these ISLs between the two sites? MLAG? I thought you can only have two switches in an MLAG configuration. Can these ISLs carry other regular data traffic?

jchen522
Advisor

Re: recommended way of cabling for DHCI fault tolerance and performance

I have four single-mode fibre connections available between the two sites.  How should they be connected, and with what protocol?  Thank you!