Candhade
HPE Pro

Network Infrastructure Design for Nimble Arrays


A dedicated, redundant switched iSCSI network is best. Otherwise, use VLANs to keep iSCSI traffic separate from other traffic.

Network Configuration Choices - Private network or Separate subnet/VLAN.

Network Switch Options

Enable Flow Control - Ability of receiver to slow down a sender to avoid packet loss

Disable Unicast storm control - Switch feature to control storms; must be disabled on SAN ports

Spanning Tree - Switch capability to detect loops in multiple switch configurations. Lengthens time for ports to become usable; should be shortened in SAN (or avoided).

Do not use STP (Spanning Tree Protocol) on switch ports that connect to iSCSI initiators or to the Nimble storage array network interfaces. Instead, you can use Rapid Spanning Tree (RSTP).

Use Jumbo Frames - Allow larger frame sizes (~9000 bytes vs. the standard 1500 bytes). They can help improve performance, especially with software initiators.
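As a rough illustration, on a Linux host using a software iSCSI initiator, enabling jumbo frames typically means raising the MTU on the NIC that carries iSCSI traffic (a minimal sketch; the interface name eth1 is an assumption, and the switch ports and array interfaces must be set to a matching MTU as well):

# Set a 9000-byte MTU on the assumed iSCSI interface eth1 (adjust the name to your environment)
sudo ip link set dev eth1 mtu 9000

# Verify the new MTU
ip link show dev eth1

Note that this change is not persistent by itself; make it permanent in your distribution's network configuration (netplan, NetworkManager, ifcfg files, etc.).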

ISCSI Switch Attributes

Good Quality Layer 2 or Layer 3 managed switch

Stacked switches are preferred, but be aware of stacking issues, such as what happens when the master switch in the stack fails.

Size ISLs (inter-switch links) carefully; the concern is under- or over-specifying the total bandwidth required.

Support for Jumbo Frames (with Flow Control) is Desirable

Non-Blocking Backplane - Bandwidth of the backplane >= (# of ports) * (bi-directional port speed)
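As a worked example of that formula, a 48-port 10 GbE switch would need a backplane of at least 48 x 20 Gbps = 960 Gbps to be truly non-blocking (20 Gbps per port counts 10 Gbps in each direction).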

We recommend that you not set the jumbo frame size on your switch to anything more than 9014 (or 9000) bytes.

We recommend stacking cables if available; if not, use sufficient aggregated links/trunks (ISLs) to support the load.

A rule of thumb is one trunk link for each active port within the group.
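For example, applying that rule of thumb, if four active 1 GbE array data ports sit behind each switch in the group, the trunk between the switches should be built from at least four 1 GbE links.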

Recommended ways to connect switches within the SAN network

Switch stack cables provide very high bandwidth between "stackable" switches, at speeds of 38+ Gbps. All stack cables are proprietary and vendor dependent: they can only be used between switches from the same vendor, and in some cases only within the same switch family, so you need compatible switches for stacking to work. Not all stack cables carry data traffic; some only carry management traffic, and those do not count as a viable ISL option.

Link aggregation (the 802.3ad standard, also known as port trunking) allows switches from multiple vendors to be connected together. It provides good bandwidth, but not true aggregation because of its MAC-based hashing algorithms. One of the big downsides is that each trunk port consumes an available data port, so as you add more switches the number of ports used for trunking increases and the number of data ports decreases.

10 Gbps Ethernet is a very good interconnect, providing high bandwidth while being a standards-based protocol. All switches must support 10 GbE, and in most cases switches use separate "uplink" ports, so no data ports are consumed by the ISL.

Switch Attribute Best Practices

Do not use STP on switch ports that connect to iSCSI initiators or the Nimble storage array network interfaces.

Configure Flow Control on each switch port that handles iSCSI connections. If your application server is using a software iSCSI initiator and NIC combination to handle iSCSI traffic, you must also enable Flow Control on the NICs to obtain the performance benefit.
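As a rough sketch of the host side, on a Linux server you can inspect and enable flow control (Ethernet pause frames) on a NIC with ethtool (the interface name eth1 is an assumption, and some NIC drivers negotiate pause frames automatically):

# Show the current pause frame (flow control) settings on the assumed iSCSI NIC eth1
ethtool -a eth1

# Enable RX and TX flow control; autonegotiation may override this on some drivers
sudo ethtool -A eth1 rx on tx on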

Disable unicast storm control on each switch that handles iSCSI traffic. However, the use of broadcast and multicast storm control is encouraged.

Enable Jumbo frames to improve storage throughput and reduce latency. You must have jumbo frames enabled from end to end (host NICs, switch ports, and array interfaces) for them to work correctly.

Use the Ping command to test network connectivity and to help determine if jumbo frames are enabled across the network. Example:   vmkping -d -s 8972 x.x.x.x
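The same end-to-end check can be run from other hosts; a minimal sketch (x.x.x.x is a placeholder for an array data IP, and 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers add up to a 9000-byte packet, matching a 9000-byte MTU):

# Linux: -M do prohibits fragmentation, -s sets the payload size
ping -M do -s 8972 x.x.x.x

# Windows: -f sets Don't Fragment, -l sets the payload size
ping -f -l 8972 x.x.x.x

If these large pings fail while smaller sizes succeed, jumbo frames are not enabled somewhere along the path. On ESXi, remember that the MTU must also be raised on the vSwitch and the VMkernel port, for example (vSwitch1 and vmk1 are assumed names; adjust them to your environment):

esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000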

Note: We need 1 Management IP and 2 Controller Diagnostics IPs for each Array.

The management IP floats between the two controllers, meaning that when a controller failover happens, the management IP used to connect to the array stays the same.

A diagnostics IP is configured on each individual controller. The diagnostics IPs of both controllers plus the management IP (all three IPs) are required for array heartbeats and for sending DNAs to Nimble Support.


I work for HPE


2 REPLIES
Ramya_Heera
Frequent Advisor

Re: Network Infrastructure Design for Nimble Arrays

Hello @Candhade,

This piece of information on Network Infrastructure Design for Nimble Arrays is great.

I'm sure it will help a lot of people to learn and understand more.

Thank you again for sharing the information 

parnassus
Honored Contributor

Re: Network Infrastructure Design for Nimble Arrays

Well, writing about STP (Spanning Tree Protocol) and SAN should be done while carefully specifying that the SAN deployment model uses iSCSI or FCoE, to avoid misunderstandings (a usual Fibre Channel SAN deployment has no concept of Spanning Tree, Jumbo frames, or Flow Control mechanisms).

It would be great to link resources specifically written to teach how to deploy an HPE Nimble Storage array using HPE Comware NOS based or HPE Aruba ArubaOS-CX NOS based Data Center Networking Switch series - just to stay on the iSCSI/FCoE side - along with HPE ProLiant DL/BL servers.

I'll start by recommending this one: ArubaOS-CX Networking and HPE Nimble with VMware vSphere deployment/interoperability Validated Reference Design Guide (12/2019)

Its purpose is, literally: "The guide provides information on how to build a scalable network infrastructure that connects hosts to Nimble storage to address the business requirements, workloads, and applications required by our customers. The guide describes an architecture that combines HPE Synergy 480 Gen9 Compute Modules, DL360 Gen9 Servers, HPE Nimble Storage array, and Aruba data center switches to reliably deploy and run the virtualized infrastructure. The intended audience for this document is IT administrators and solution architects planning on deploying IP Multicast features.".

Not to mention deployments with HPE StoreFabric M-Series Ethernet Switches (example here).

[Moderator edit: The above links are no longer valid]


I'm not an HPE Employee