
procurve 2910al-48G interconnect with fail-over / uplinks & trunks

 
hp_neuling
Visitor

procurve 2910al-48G interconnect with fail-over / uplinks & trunks

Hello Community! :-)

 

I have a question about how I can connect all the switches together.
At the moment there are 8 switches distributed across 4 buildings.

Scenario:


4 buildings (A, B, C & D) with 2 switches per building, interconnected with interconnect kits:
(A1 <-> A2)
(B1 <-> B2)
(C1 <-> C2)
(D1 <-> D2)


Now I need to connect ...
A1 <-> B1
B1 <-> C1
C1 <-> D1

and ...

A2 <-> B2
B2 <-> C2
C2 <-> D2


How can I configure the uplink ports?
LACP active? / Trunking? / Trunking with LACP?

Thanks a lot

 

paulgear
Esteemed Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

If I understand you correctly, you want to have loops between all of your buildings, so that any single switch can go down and you still have connectivity between all remaining switches.

 

There are several ways you can do this:

  1. Distributed LACP trunks. According to my copy of the switch features matrix, this is not available on the E29xx series. (You need E3500, E5400, or similar.)
  2. Use normal Rapid Spanning Tree (RSTP), set the switch priorities appropriately, and accept the fact that your 2nd link will always be blocked unless your primary link fails (see the sketch after this list).
  3. Use Multiple Spanning Tree (MSTP) and set up an appropriate MSTP configuration. Then you will have some VLANs using one link, and some using the other. If you don't have a multi-VLAN setup, this will be no different from option 2.
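
A minimal sketch of option 2 on the ProCurve CLI, with placeholder priority values you would adapt to your topology (priority is in steps of 4096, so 0 is most preferred):

On the switch that should win the root election (e.g. A1):

  spanning-tree
  spanning-tree force-version rstp-operation
  spanning-tree priority 0

On a designated backup root (e.g. A2):

  spanning-tree
  spanning-tree force-version rstp-operation
  spanning-tree priority 1

All remaining switches can stay at the default priority (8), so the root election is deterministic.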

Hope that makes sense.

Regards,
Paul
Arimo
Respected Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

Distributed Trunking isn't an applicable solution; it's currently only available on the K-platform switches (3500, 5400, etc.), and only if you want to connect one server to 2 switches.

 

If you want to have multiple links running between 2 switches, you have basically only 2 options:

 

1. Configure the 2 links as a trunk, either LACP or a static HP trunk. Both links will pass traffic, and in case one of them goes down, the other one will continue normally (see the sketch after option 2).

2. Configure STP throughout the network. Some links will be blocked until the primary link goes down.
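
For option 1, a minimal sketch on the 2910al CLI, assuming ports 47-48 are the uplink pair (adapt the port numbers and trunk group to your setup); the same trunk type must be configured on both ends of the link:

  trunk 47-48 trk1 lacp

or, for a static HP trunk:

  trunk 47-48 trk1 trunk

VLAN membership is then assigned to the trunk interface rather than the member ports, e.g.:

  vlan 10 tagged trk1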

 

Instructions for both are in the switch Management and Configuration Guide.


HTH,

Arimo
HPE Networking Engineer
paulgear
Esteemed Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

@Arimo: I know distributed trunking is not relevant due to the switch model, but why does it matter whether the device connecting to the distributed trunking switches is a server or another (single) switch? The setup for a server to a distributed trunk is no different from a standard LACP trunk, so theoretically you should be able to have a single switch set up with an LACP trunk to a pair of distributed trunking switches, correct?

(@hp_neuling: please note that this is a diversion from your question - it's just to satisfy my curiosity.)
Regards,
Paul
Arimo
Respected Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

802.1AX (previously 802.3ad) requires that the links in the trunk co-terminate on a single device. DT is a proprietary solution, not real LACP. K.15.05 also brings switch-to-switch distributed trunking, but it requires that all switches run the same software version.


HTH,

Arimo
HPE Networking Engineer
Matt221177
Occasional Visitor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

I believe a colleague of mine made a mistake purchasing the 2910al series switches (please correct me if I am wrong).

 

I am actually a Cisco guru and am very comfortable configuring link aggregation split between two switches in a stacked configuration (or simply running EtherChannel between two non-stacked switches). This works great for load-balancing and failover. If I have 4 NICs, I simply run two to each switch and am able to utilize all 4 in tandem.

 

If I understand this post correctly, I should have 3 NICs connected to one switch for performance and 1 NIC connected to another switch in standby mode?

 

What is an optimal configuration for DL380 G7 servers with multiple NICs connecting to 2910al models? I was going to use 2 NICs for management, 2 NICs for vMotion, 2 NICs for iSCSI, and 4 for VMs. Each team would have its NICs split evenly between the two switches. See the attachment for an example of the configuration we were hoping to achieve... By all means feel free to criticize the he__ out of this.

 

Please help!

 

Thank You.

 

Antonio Milanese
Trusted Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

@Matt

>I believe a colleague of mine made a mistake purchasing the 2910a

Well, that depends on the available budget; the 2910al is a good compromise in terms of price/features for a 1GbE/10GbE switch with a decent amount of packet buffers and backplane speed.
Yes, it lacks "advanced" resiliency/HA features, but it's not price-comparable with the Cisco platforms that are VSS (6500), vPC (Nexus), or even StackWise (3750-X) capable.
With a bigger budget I would have bought another model with IRF support, or the now-available E3800, which has comparable VSS / StackWise capabilities.

About the actual problem: Arimo is correct, and the best approach is to use a combination of MSTP and LACP to build a ring between the switches (each link a 2-port LACP trunk) and use 2 MSTP instances to load-balance traffic based on VLANs (vMotion, iSCSI, VM traffic); see the sketch after the notes below. Regarding your case: ESX does not support LACP (at least without the Nexus 1000V), only static trunks, so load-balancing multiple pNICs per vSwitch is a bit tricky (IP hash routing) and not that effective, because:

a) if you have a single iSCSI portal without login redirect, multiple initiator vmkernel ports with MPIO are a lot better
b) only vSphere 5 has the capability to use multi-NIC vMotion
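
A minimal sketch of the MSTP side on the ProCurve CLI, assuming VLANs 10/20 carry iSCSI/vMotion and VLAN 30 carries VM traffic (placeholder IDs); the config-name, revision, and instance-to-VLAN mapping must be identical on every switch in the region:

  spanning-tree config-name CAMPUS
  spanning-tree config-revision 1
  spanning-tree instance 1 vlan 10 20
  spanning-tree instance 2 vlan 30

Then make a different switch the root for each instance so the two instances forward over different links. On the switch that should be root for instance 1:

  spanning-tree instance 1 priority 0

And on the switch that should be root for instance 2:

  spanning-tree instance 2 priority 0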


Best Regards,

Antonio

Matt221177
Occasional Visitor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

Thank You Antonio.

 

Here are a few more questions I picked up along the way.

 

1.  He is using an interconnect kit between his two 2910al switches. Each switch has a single gig link running to an ASA5510 that will be used for inter-VLAN routing. Spanning tree will block one of the switch ports. How do I configure the interconnect kit between the switches to act like a Cisco trunk, passing all VLAN traffic?

 

2.  Due to the fact that these switches cannot run VRRP, do you have a better suggestion for routing inter-VLAN traffic? I would like to use the routing capabilities within the switches, but in his case I would have to choose one switch as the default gateway, placing a single point of failure on the network switches. In my current plan, it will take the failure of the ASA to take down inter-VLAN communications. If a switch goes down, the ESXi hosts will fail over to the other and still be able to route via the ASA.

 

Thank You!

 

P.S.  I attached our storage network diagram as it currently stands.

Antonio Milanese
Trusted Contributor

Re: procurve 2910al-48G interconnect with fail-over / uplinks & trunks

Hello,

 

Yes, no VRRP, no party... and yes, the ASA is a better SPOF =)
You need a full-fledged router (or the ASA5510) terminating your VM VLANs (but the 2910al is DHCP relay / IP helper capable)...
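
If you do end up relaying DHCP through the switch, a minimal sketch, assuming VLAN 30 for the VMs and 10.0.1.5 as a hypothetical DHCP server address; note that the relay only works with IP routing enabled on the switch, which may not fit a design where the ASA does all the routing:

  ip routing
  vlan 30
     ip address 10.0.30.2 255.255.255.0
     ip helper-address 10.0.1.5
     exit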

Just to add a quick note to my previous message:
LACP was mentioned and intended for the point-to-point links between switches; LACP on the ESX side is "useless" with regard to iSCSI due to the teaming policy (IP hash) used by vSphere, and even TLB (btw, a dvSwitch Enterprise feature) only determines which uplink to use; it will not actively aggregate links nor evenly balance the traffic.
LACP is less effective on the VNX side as well, due to the same internal hash logic limitations, so the best idea here is the simpler one...

multiple target/multiple ports (vlans)

iSCSI A ip0/subnetA -> SPa(pt2) -> sw1(VLANA) -> esx1
iSCSI A ip0/subnetB -> SPa(pt3) -> sw2(VLANB) -> esx2
iSCSI B ip1/subnetA -> SPb(pt2) -> sw1(VLANA) -> esx2
iSCSI B ip1/subnetB -> SPb(pt3) -> sw2(VLANB) -> esx1

leveraging the MPIO framework: vSphere will see each IP address as a different path to the SAME LUN and let you manage failover as well as a round-robin policy. Then, using MSTP, you can "build" an MSTI where the active iSCSI/vMotion VLANs use the interconnect link as the primary forwarding link, and another MSTI where the VMs' "regular" traffic VLANs use one of the links to the ASA.

For trunking, in the Cisco sense, just add the interconnect ports on both sides as tagged members of every VLAN you want to carry (the "access" VLAN is the untagged one = (native) VLAN 1 by default). For example:
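
A minimal sketch, assuming the interconnect shows up as port A1 (adapt to however your module ports are numbered) and VLANs 10, 20 and 30 are placeholder IDs for the VLANs to carry, configured on both switches:

  vlan 10 tagged A1
  vlan 20 tagged A1
  vlan 30 tagged A1

VLAN 1 stays as the untagged (native) member of the port unless you change it.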

Regards,

Antonio