
Re: LACP Design Questions

 
CFPiazza
Occasional Contributor

LACP Design Questions

Right now my boss has one core ProCurve 2810-48G, we'll name this switch Bonnie, with a single link to each of ten other switches, pretty much a hub and spoke. I had the idea of adding a second switch, we'll name this one Clyde, with four LACP links between Bonnie and Clyde and a link from Clyde to each of the ten switches, providing redundancy.

We have spanning tree enabled on all switches with priority 0 on Bonnie and 1 on Clyde.

I have the four cables between Bonnie and Clyde on ports 1-4, with lacp active on Bonnie and passive on Clyde.
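
For reference, what I entered was roughly this (typing from memory, so the exact syntax on the 2810 may differ slightly):

On Bonnie:
   interface 1-4 lacp active

On Clyde:
   interface 1-4 lacp passive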

In my head this all makes sense, but my boss doesn't understand the need for it, and he pulled Clyde because he thinks all it's doing is adding needless stress on Bonnie.

Here are my questions:

What are the flaws with my design vs his?

What is LACP really doing between those four links? Everywhere I read, they're supposed to load balance, combining four 1 Gb connections into one aggregate, but my boss doesn't believe me. And if I'm right, which show command do I use to prove it?

 

5 REPLIES
parnassus
Honored Contributor

Re: LACP Design Questions

Nothing bad with that design...but, if the configuration is limited to what you described, it looks pretty useless (though not for the reasons your Boss has)...I mean: you interlinked Bonnie and Clyde with a Port Trunking Group (using LACP), that's OK...but, at the same time, since each host is concurrently connected to both Bonnie and Clyde, each host exchanges traffic through just one (active) link at a time, because (and thanks to) (R)STP shuts down (blocks) the other link of the pair to keep those links from forming a nasty loop. Consider a host, call it Buck, with two NIC ports: it creates a loop when it is linked to Bonnie via Bonnie port n, Bonnie is linked to Clyde through the Port Trunking Group, and Clyde is linked back to Buck through Clyde port m. STP jumps in as fast as it can - if enabled - and solves/prevents the loop by blocking, at the access level, port n on Bonnie or port m on Clyde...de facto cutting Buck's pair of links down to a single active link [*].

The design you created could be a way to have a sort of link redundancy toward the "core", where the "core" is the pair of Switches Bonnie and Clyde interlinked together...but that makes sense only if the "core" is configured to provide some sort of service (switching only? routing? VRRP?) to the hosts (access layer) connected to it, and that happens correctly if you use technologies such as Distributed Trunking (dt-lacp) with an ISC link (and a keepalive link too) between Bonnie and Clyde - things you didn't do with that simple Port Trunking - OR if you can create a single virtual switch by front-/back-plane stacking Bonnie with Clyde, if they support such a feature (they slept together for sure! ;-) )...with those technologies you create and use link redundancy with more efficiency (front-/back-plane stacking doesn't even require Distributed Trunking dt-lacp: a normal LACP Port Trunking terminating on both Bonnie and Clyde is sufficient, since Bonnie and Clyde cease to be seen as two separate logical entities and are seen as a single big switch).
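
Just to give an idea of the difference (a rough sketch from memory - syntax varies by Switch series and software version, and, I fear, it isn't something the 2810 supports anyway), a Distributed Trunking configuration on a supporting ProVision pair looks more or less like this on both Bonnie and Clyde:

   trunk 47-48 trk1                <- the ISC link between the two "core" switches
   switch-interconnect trk1
   trunk 10 trk10 dt-lacp          <- a distributed trunk toward one access switch

plus a separate peer-keepalive path between the two, which your simple Port Trunking alone doesn't give you.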

Since you have practically all hosts connected to both Bonnie and Clyde...can you figure out which hosts are active toward Bonnie and which others are active toward Clyde...to justify the Port Trunking traffic between Bonnie and Clyde (through the LACP LAG)?

I mean: if all hosts have the active link to Bonnie and the blocked one to Clyde...Clyde doesn't see any traffic and so the Port Trunking between Bonnie and Clyde is practically unused AFAIK.

Regarding the traffic load balancing (4x1Gbps ports that form a Link Aggregation Group, AKA Port Trunking): it all depends on the traffic type and the selected load sharing algorithm; if you have many hosts on Bonnie that speak to many different hosts on Clyde, then the load sharing algorithm distributes the traffic quite well across the four aggregated physical links (so the "aggregated" bandwidth between Bonnie and Clyde grows up to 4Gbps Full Duplex) but, remember, each physical link of the Port Trunking Group still carries 1Gbps Full Duplex...each single traffic session uses just one single link (it cannot be "distributed"/"divided" across more links concurrently), so if you have Host A on Bonnie which speaks to Host B on Clyde, basically all of that traffic goes through a single link of the four available.
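
About the "which show command" part of your question: on the ProCurve CLI you should be able to check it with something like (exact command names and output vary a bit with the software version):

   show trunks
   show lacp
   show interfaces port-utilization

show lacp should list ports 1-4 as members of the same trunk group, and show interfaces port-utilization lets you see whether the traffic really spreads across the four physical links or sticks mostly to one of them.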

[*] Suppose (R)STP blocks all, and only, the access ports on Clyde (or on Bonnie): you end up with a lopsided Network Topology: only Bonnie (or Clyde) has active access ports carrying traffic of your connected hosts, so the other side, Clyde (or Bonnie), sees no access traffic at all (at least for the ports involved in the blockage)...that's one scenario...not considering also that (R)STP interacts with the Trunking logical port between Bonnie and Clyde too, so it can block that as well - if the blocking doesn't happen at the level of each single involved access port - protecting the whole network from a broadcast storm caused by loop(s). Hope I'm not wrong here.
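
To see which links (R)STP is actually blocking, you can compare the output of:

   show spanning-tree

on Bonnie, Clyde and a couple of the access switches: the access switches should report their uplink toward Clyde as Blocking (Discarding, in RSTP terms) while the uplink toward Bonnie stays Forwarding.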


I'm not an HPE Employee
Vince-Whirlwind
Honored Contributor

Re: LACP Design Questions

Your design is fine.

Your 4-member LACP link does *not* load balance. All 4 links might pass traffic, but do not call it "load balancing".

Did you enable spanning-tree on your 10 other switches?
--> They should be left on the default spanning-tree priority (see the example below).
These switches will put their uplink to Clyde into blocking mode.
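
Something along these lines (from memory; on these platforms the priority value is a multiplier, so 0 and 1 end up as bridge priorities 0 and 4096):

On Bonnie:
   spanning-tree
   spanning-tree priority 0

On Clyde:
   spanning-tree
   spanning-tree priority 1

On the ten access switches, just:
   spanning-tree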

You need to think about your traffic flows - where are the packets going?
With 1 Gb links, you have some fairly constrained bottlenecks:
--> Each Access switch uplink is 1Gb bandwidth for a potential of 48x 1Gb traffic flows.
--> Your Bonnie<-->Clyde link is between 1Gb and 4Gb bandwidth for a potential of 10x switches, ?x ports(?) x 100Mb/1Gb(?) traffic flows.

parnassus
Honored Contributor

Re: LACP Design Questions

Oops...my bad...I erroneously used the word "hosts" instead of "Access Switches" (I totally forgot you had 10 Switches directly connected, each one with dual uplinks to your Bonnie and Clyde pair)...basically the picture I drew should still be quite valid (so please find and replace the word "hosts" with "access switches" and you're done)...even if, as @Vince-Whirlwind reminded us, the (R)STP of those ten Switches kicks into the picture too.


I'm not an HPE Employee
CFPiazza
Occasional Contributor

Re: LACP Design Questions

"if you can create a single virtual switch by front-/back-plane stacking Bonnie with Clyde if they support such feature (they slept together for sure! ;-) )...with those technologies you create and use link redudancy with more efficiency (front-/back-plane stacking doesn't require Distributed Trunking dt-lacp, it is sufficient a normal LACP Port Trunking against both Bonnie and Clyde since Bonnie and Clyde end to be seen as two separate logical entities, they are seen as a single big switch)."

 

How would I go about doing this?

 

parnassus
Honored Contributor

Re: LACP Design Questions

With the HP ProCurve 2810 Switch series, I fear, you can't: AFAIK that series, considering the target usage it has/had, doesn't support any front-/back-plane stacking technology for deploying virtual switching - so neither frontplane stacking nor backplane stacking (you would need, as an example, far more recent Switch series like the Aruba 2920 - backplane stacking - or the Aruba 2930F - frontplane stacking, VSF (Virtual Switching Framework) - running the very latest software versions...and that is just to speak about some fixed-port Switch types).
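
Just to give you a taste of what frontplane stacking looks like, a VSF setup on a pair of Aruba 2930F is roughly this (a sketch only - the port numbers are examples, and you should follow the VSF chapter of the configuration guide for your exact software version):

On the first switch (it becomes VSF member 1):
   vsf member 1 link 1 49-50
   vsf enable domain 1

On the second switch (it reboots and joins as member 2):
   vsf member 1 link 1 49-50
   vsf enable domain 1

Once the stack forms, the pair behaves as one logical switch, so a plain LACP Port Trunking from each access switch - one physical link to each member - is all that's needed.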

Regarding the "ISC with dt-lacp" approach (which is, IMHO, the old way of miming the actual real virtual switching functionality, indeed it requires specifically that involved Switches support dt-lacp and not only the more common and standard lacp) I fear the same can be said (probably that is due more to a Switch Software's limitation than to a combined Software and Hardware limitation).

Pay attention that the HP ProCurve 2810 Switch series supports a "Management Stacking" feature: that feature isn't related to front-/back-plane stacking for Virtual Switching.


I'm not an HPE Employee