
MikeJeezy
Advisor

Connecting 2nd Cabinet in Data Center

Hi, we currently run two 2910al-48G switches in a full cabinet at our data center.  They are in a redundant configuration, trunked using the 10G module.  We are adding a 2nd cabinet that will contain our data warehouse (Vertica) servers, and we need to connect the two cabinets.  I'm not exactly sure how to proceed.

 

Do you recommend purchasing 2 additional 2910al switches with fiber cross-connects?  Or would we need an additional "top-of-rack" switch in both cabinets to handle the communications?  I estimate we will need 3 Gbps of bandwidth between cabinets.  Any help is appreciated.  Thanks

13 REPLIES
paulgear
Esteemed Contributor

Re: Connecting 2nd Cabinet in Data Center

Hi MikeJeezy,

If you want to have full redundancy within and between both cabinets but don't need more than 3 Gbps bandwidth between them, probably the easiest thing to do is buy 2 new switches, put 2 x 10 Gbps cards in each switch, and connect them in a loop. Set up spanning tree priorities correctly and each switch will be 10 Gbps connected to the others, with one link blocked.
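The spanning-tree part of that ring is mostly about setting bridge priorities so the root (and backup root) land where you want them. A minimal sketch in ProVision/ProCurve CLI - the priority values and which switch gets which are illustrative assumptions, not from your actual config:

```
! On the switch you want as STP root:
spanning-tree
spanning-tree priority 0

! On the intended backup root (lower number = higher priority):
spanning-tree
spanning-tree priority 1

! The remaining switches can stay at the default priority (8);
! the ring's "worst" link will then be the one STP blocks.
```

ProCurve priorities are 0-15 (multiplied internally by 4096), so 0 beats 1 beats the default 8.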

If you end up needing more bandwidth than 10 Gbps between the two cabinets, the next step would be to join them with an LACP trunk of 10 Gbps links, but this sacrifices full redundancy because (as far as I am aware) the 2910s don't support distributed trunking.  (I'm not sure whether the 2910 can take more than 2 x 10 Gbps, but if so, that could compensate for the loss of full redundancy.)
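If you did go the LACP route, bundling two 10G module ports into one trunk on a 2910al looks roughly like this - the port IDs (A1/A2) and VLAN number are assumptions based on a typical module slot, so check `show interfaces brief` on your own switch first:

```
! Bundle both 10G module ports into one LACP trunk toward the other cabinet
trunk A1-A2 trk1 lacp

! Carry the required VLAN(s) tagged across the trunk
vlan 10
   tagged trk1
```

LACP will treat the bundle as one logical link, so STP won't block either member, but both ends must terminate on the same switch - which is exactly why this costs you switch-level redundancy on the 2910s.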

If I were doing this from scratch, I'd use 5500s or 5800s and connect them with an IRF loop, which is similar to distributed trunking.

Regards,
Paul
Matcol
Frequent Advisor

Re: Connecting 2nd Cabinet in Data Center

A "ring" topology sounds good.

Get a quote for 5500s and 5800s and see if they are competitive with the older switches you have been buying. If you can afford it, start buying the new switches. They have more 10Gb ports, plus you can join them all together using IRF, as Paul says, which is very useful.

MikeJeezy
Advisor

Re: Connecting 2nd Cabinet in Data Center

I see, so you guys are suggesting I look at one of the following.  If I only had the budget for 2 of these, could I put one 2910 and one 5500 in each cabinet and still keep redundancy within each cabinet?  I could still connect the cabinets via 10G fiber, but with a single point of failure - correct?  Thanks.

 

5500-48G EI Switch with 2 Interface Slots (JD375A)
5800-48G Switch (JC105A)

 

MikeJeezy
Advisor

Re: Connecting 2nd Cabinet in Data Center

By the way, the 2910s have four optional 10-Gigabit ports (CX4 and/or SFP+) as far as I can tell.  Are you saying I "could" use these for 10G connectivity between cabinets if I wanted, but the 5500/5800 would be better in the long run?  Thank you.

 

I am using one of the two available slots on the back for 10G uplink between switches using the Interconnect kit: http://h30094.www3.hp.com/product/sku/3983975

 

Quickspecs are here: http://h18000.www1.hp.com/products/quickspecs/13280_na/13280_na.HTML

 

Thank you for your help.  I'm just trying to figure out what to order and I will hire someone to configure it. 

paulgear
Esteemed Contributor

Re: Connecting 2nd Cabinet in Data Center

Hi MikeJeezy,

The interconnect module you linked to is a 1-port 10 GbE CX4. The 5500 series has a 2-port 10 GbE CX4 module (I can dig up an exact part number if you need it). So you can have a maximum of 2 x 10 GbE on the 2910 using that module, whereas you can have 4 x 10 GbE on the 5500 using the 2-port modules. (I don't know whether there's an equivalent 2-port module for the 2910s.)

The 2910 doesn't do IRF, so I don't see a lot of value in putting one 2910 and one 5500 in each cabinet. If you don't have the scope to change all switches to 5500s, then I would leave the 2 x 2910s as-is, put 5500s with 2 x 2-port 10 GbE modules running IRF in the new cabinet, and ask for money in your next funding round to get the 2910s replaced with 5500s. Then I'd connect the 2910s to each other and to the 5500s with one link each (non-IRF links, so one would always be blocked by STP), and link the 5500s with the remaining 2 x 10 GbE. That would give you 10 Gbps between all switches and 20 Gbps between the 5500s, as well as the ability to do distributed trunking between the 5500s.
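For reference, joining the two new 5500s into one IRF fabric looks roughly like this in Comware CLI. The member numbers, priority, and 10 GbE port names here are assumptions, and exact IRF commands vary by software release, so verify against the 5500's IRF configuration guide before using any of it:

```
# On the second switch only: give it a unique member ID, then reboot it
irf member 1 renumber 2

# On each member: bind its 10G ports to an IRF port
# (physical ports usually have to be shut down first)
irf-port 1/1
 port group interface Ten-GigabitEthernet1/1/1
irf-port 1/2
 port group interface Ten-GigabitEthernet1/1/2

# Make member 1 the preferred master (higher priority wins)
irf member 1 priority 32

# Activate the IRF port bindings
irf-port-configuration active
```

Once the fabric forms, the two 5500s behave as a single logical switch, which is what makes link aggregation across both stack members (distributed trunking) possible.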

Please note this is just a suggestion based on what you've told us about your environment and should not be taken as professional advice. :-) There might be cabling limitations or various other caveats that mean this is not appropriate for your network. Your HP sales rep should be able to advise better.
Regards,
Paul
MikeJeezy
Advisor

Re: Connecting 2nd Cabinet in Data Center

Hi paulgear, thank you for the response.  I like your suggestion.  The 2nd cabinet we have secured is directly across the aisle, so we would have to run the cable under the floor, but I think a 10m cable would suffice.  Sorry, I'm not familiar with IRF and STP, so I can't comment on that (I'm a little over my head, yes :-)

 

1. Is it not possible to use the fiber ports on the front to connect the 2910s to the 5500s?

2. Are you available for [remote] paid consulting?

 

Michael

paulgear
Esteemed Contributor

Re: Connecting 2nd Cabinet in Data Center

Hi Michael,

 

  1. The 2910-48G has 4 x 1 Gbps SFPs.  So you could use those to connect to the 5500s.  But keep in mind they are 1 Gbps only, so the best bandwidth you could expect is a 4 Gbps LACP trunk.
  2. Yes, but you probably don't want that. ;-)  Much better to get someone who can come on site and help you.
Regards,
Paul
paulgear
Esteemed Contributor

Re: Connecting 2nd Cabinet in Data Center

I forgot to mention: If you had 8 x 1 Gbps SFPs then you could make a cross-connect and have 4 Gbps out of each 2910 going to the 5500 IRF stack. But if you can get 10 m cables for the CX4 cards (i'm not an expert on 10 Gbps cabling) then that would be a lot faster and cleaner.
Regards,
Paul
John Gelten
Regular Advisor

Re: Connecting 2nd Cabinet in Data Center

As for the 10m cable length: one of my customers runs a network of mainly HP 5400zl switches, interconnected by quite a lot of CX4 cables of 15m length (the distance between some switches is exactly 14.90m, including cutting some corners). They don't have any issues (well, at least not with the cabling ;- )

So CX4 would probably be your least expensive option, IF the data center allows you to run this type of cabling and the length stays below 15m (most data centers follow pre-installed ducts, which adds a lot of extra length to your cabling). For CX4, 15m really is the maximum cable length.

 

If you want to mix the 5500 and your existing switches, keep in mind the CLIs of the two types are very different, making configuration a bit more difficult compared to having all switches of the same type. The 2910 is from the ProCurve family, the 5500 from the H3C family. Both are good choices in my opinion, but mixing them might not be your best option, especially if you are doing it yourself and configuring switches is not your daily routine.

 

Since (afaik) you can put two 2x10G modules in the back of each 2910, you can make a ring of four 2910 switches, connecting every switch to two other switches by a trunk of two 10G links (distributed over two modules for maximum redundancy). Run spanning tree to resolve the loop, and you have a setup with quite some redundancy and 20 Gbps of bandwidth - and probably not too big an investment in new switches.

 

Having four 5500 switches would give you two stacks of two switches. The switches within one stack (in one rack) are interconnected by a special stacking cable, giving high-throughput 'backplane' connectivity between them. This is great for performance, and for management, because they form one logical switch with 2x48 interfaces. But if a switch crashes, most of the time all switches in the stack crash at the same time, because logically they have formed one switch. In my experience most crashes are software crashes, and those generally take down the whole stack; PSU-related issues are the main ones that crash only one stack member. But then again: how often does a switch (in a data center) crash at all...

 

It depends on your environment whether you benefit more from the stacked approach with the 5500 (higher bandwidth, less complex to manage because you have fewer logical switches) or the individual approach with the 2910 (bandwidth limited to 2x10 Gbps, four individual switches to manage, and if one switch crashes the others take over - you need spanning tree for that).

 

If you mix your current switches with two 5500 switches, you get just a bit of the positive and most of the negative from the above... so that should really be an interim solution, if you ask me.