
Storageworks P4500 and switching design advice

mhuxtable3
New Member


We are in the latter stages of finalising an iSCSI design based on the P4500, which will serve as the central storage for a cluster of six Hyper-V hosts. The BQ888A model (two nodes, 14.4 TB total capacity) is the favoured option for this project.


I have some questions about the iSCSI switching infrastructure, though. The servers in these racks are fed by an 8200zl series switch (with redundant fabric/management modules etc.). However, adding iSCSI traffic to this switch presents a number of issues, not least that it would be overfilled by all the iSCSI links. There is also some concern over how this particular switch is administered, which makes us uneasy about running critical SAN traffic over it; sadly, that issue is beyond my control.


So we will need to purchase two new switches dedicated to the iSCSI network (and probably also to the cluster and live-migration networks from each Hyper-V host in the cluster).


After reading through http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA2-5615ENW.pdf, I am planning to purchase two 2910al-48G switches for redundancy. I will also purchase the 10GbE modules for these switches and for each node in the P4500 to increase throughput. Finally, I was planning to purchase the 2910al's 10GbE CX4 interconnect kit to link the two switches together.
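
For what it's worth, this is roughly how I imagined the inter-switch link being configured if it turns out to be an ordinary port trunk rather than a stack. The module slot, port numbers and VLAN ID are only placeholders, and the # lines are my own notes rather than actual CLI:

    # On each 2910al, assuming the CX4 module sits in slot A and provides ports A1-A2
    trunk A1-A2 trk1 lacp
    # Carry the dedicated iSCSI VLAN (100 is just an example) across the trunk,
    # with the storage- and host-facing ports as untagged members
    vlan 100
       name "iSCSI"
       tagged trk1
       untagged 1-24
       exit

If the kit genuinely stacks the two switches then presumably none of that is needed, which is part of what I am asking below.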


Each node in the P4500 cluster will have one 10GbE SFP+ link leaving it, with one node connected to each switch. I was also tempted to add a 1Gb copper connection from each node to the other switch (which the document confirms is supported in an active/passive configuration). This is to ensure both nodes still have a route into the iSCSI network should a switch or fibre link fail, giving us at least some degree of redundancy in the event of a failure, even at the cost of bandwidth.


Each Hyper-V host will have additional NICs fitted, with multiple copper links into each switch (initially 2 x 1 Gbps into each switch, giving 4 Gbps of iSCSI connectivity per host overall). Again, these are cross-wired to ensure a switch failure doesn't take a whole host down.
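
To illustrate the cross-wiring, something like this is what I had in mind on each host; the interface names and the 10.0.10.0/24 subnet are only examples:

    rem Two NICs patched to switch 1 and two to switch 2, all on the single iSCSI subnet,
    rem so MPIO can use every path regardless of which switch it terminates on
    netsh interface ip set address name="iSCSI-SW1-A" static 10.0.10.11 255.255.255.0
    netsh interface ip set address name="iSCSI-SW1-B" static 10.0.10.12 255.255.255.0
    netsh interface ip set address name="iSCSI-SW2-A" static 10.0.10.13 255.255.255.0
    netsh interface ip set address name="iSCSI-SW2-B" static 10.0.10.14 255.255.255.0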


I understand I need to install the HP DSM onto the Hyper-V hosts for MPIO to work correctly (I have sketched how I expect to set that up just after the questions below). However, I am unclear on a few things:


  • Is the 2910al's CX4 interconnect kit going to form a stack between the switches, or is it just a very high-speed port trunk between them (the equivalent of running 10 x 1 Gbps copper lines and aggregating them with LACP, say)?

  • Leaving each node in the P4500, can I connect the 1 Gbps copper line to the OTHER switch from the one its 10GbE line is connected to? Will this still achieve active/passive despite the two links terminating on different switches?

  • Will the trunk between the switches be sufficient if it does not actually implement a stack? Since the host ports use MPIO rather than teaming for iSCSI, my understanding is that spreading the connections across the switches in this fashion is not a problem, but I am not 100% sure.
  • Finally, suppose that in the future I want to add another two 10GbE SFP+ connections to each node in the P4500, so that all four of the 10GbE ports are in use. We do not need that bandwidth now, but might later. Can I connect each 10GbE fibre link to a different switch in that configuration? Would this then be active/active?
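
On the DSM/MPIO point, this is roughly how I expect to set up each host (we are on Server 2008 R2; the commands are my assumption of the usual steps rather than anything taken from the HP document):

    rem Enable Windows' built-in MPIO feature
    dism /online /enable-feature /featurename:MultipathIo
    rem Then run HP's P4000 DSM for MPIO installer from the SAN/iQ media on every host
    rem Afterwards, check that each P4500 volume is visible down multiple paths
    mpclaim -s -d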

Could someone confirm/deny whether this will work as I intend it to, with all the redundancy I am after?


Many thanks! :)