
IRF uplinks

 
James-W
Occasional Advisor

IRF uplinks

I have a query about setting up a MAD/IRF design. We have a solution of 9 x 5500 EI switches at the access layer (IRF) connecting back to 2 x 5500 HI core switches (IRF). The connection between the access layer and the core is going to be a 2 x 1Gb bridge aggregation. My question is: do you need MAD configured at the access layer? If so, how many connections do you require? The documentation shows one link back from each access-layer switch using LACP.

 

Thanks,

James

7 REPLIES
paulgear
Esteemed Contributor

Re: IRF uplinks

Hi James-W,

In general you should set up a separate MAD BFD configuration for each IRF stack, so yes, I would definitely set up MAD on the access switches. The normal configuration for the 5500 series is that IRF uses the 10 GbE ports (if I remember correctly, this is the only supported IRF connection method), and MAD BFD uses 1 GbE ports.

You should configure a complete loop for MAD BFD just like you do for IRF - i.e. two ports per 5500 switch connected in a loop via access ports in a dedicated VLAN used only for MAD. (It's probably safe to use the same VLAN for multiple IRF stacks, as long as you use a separate IP address for each switch, but I haven't tried this.) The configuration guide has more info about this.
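
For what it's worth, a minimal sketch of that BFD MAD setup on Comware - the VLAN number, interface names and addresses are just examples I made up, so check the 5500 configuration guide for the exact syntax on your release:

  vlan 999
   description MAD-BFD
  interface GigabitEthernet1/0/48
   port access vlan 999
   stp disable
  interface GigabitEthernet2/0/48
   port access vlan 999
   stp disable
  interface Vlan-interface 999
   mad bfd enable
   mad ip address 192.168.99.1 24 member 1
   mad ip address 192.168.99.2 24 member 2

You would repeat the access port and the "mad ip address ... member N" line for each of the other members. Disabling STP on the dedicated MAD ports is what I recall the guide recommending, but verify that too.
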
Regards,
Paul
paulgear
Esteemed Contributor

Re: IRF uplinks

Hi James-W,

I forgot to address the LACP question: ideally you would have at least one port from each access switch into your core, but you should be able to use more or less depending on your performance requirements. (There will be a limit on the number of ports per LACP trunk, though, so be sure to find out what that is in the manuals for both switch models.)

I'm not sure that this is really an optimum architecture for your access layer to core setup. If you have a 5500 EI stack for access layer and 2 x 1 GbE to your core, then you actually have better bandwidth between the switches in your access layer (east-west bandwidth) than you do from your access layer to your core (north-south bandwidth), which may not be what you want.

If you need more north-south bandwidth, you might want to consider getting additional 10 GbE cards, using the minimum for IRF stacking, and using the rest as an LACP trunk between your access layer and core.
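
If you go that way, the aggregation itself is straightforward - something like this on the access stack, mirrored on the core (interface names are placeholders):

  interface Bridge-Aggregation1
   link-aggregation mode dynamic
  interface Ten-GigabitEthernet1/1/2
   port link-aggregation group 1
  interface Ten-GigabitEthernet9/1/2
   port link-aggregation group 1

Spreading the member ports across different IRF members (here members 1 and 9) is what gives you the resilience.
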
Regards,
Paul
James-W
Occasional Advisor

Re: IRF uplinks

Hi Paul,

Thanks for your answers.

Just for peace of mind: if I have 9 switches in an IRF using the 10Gb DAC cables to create the IRF stack, do I need an additional 18 x 1Gb RJ45 ports to create the BFD MAD set-up?

Having this set-up at the edge removes almost .5% of the switch ports at every AL cabinet.

Regards,
James
paulgear
Esteemed Contributor

Re: IRF uplinks

Hi James,

 

It does seem a bit of overkill that you need so many ports for it. You'd have to have a read of the config guide to find out whether this is a supported configuration, but one option might be to run the MAD BFD in a VLAN over your uplink trunks. The whole point of MAD is to ensure that the switches can determine which switches are active if the IRF stack is broken, and the uplinks should be sufficient for this. But I've not done it, so don't take my word for it. :-)
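
If the guide does allow it, the change from a dedicated loop would just be to carry the MAD VLAN over the existing uplink aggregation instead of burning extra access ports - roughly (VLAN and interface numbers are examples, and again I haven't tested this):

  interface Bridge-Aggregation1
   port link-type trunk
   port trunk permit vlan 999

with the same VLAN allowed on the core side, so the BFD sessions between members still have a layer-2 path if the stack splits.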

 

 

I again wonder about the value in having a stack for your access layer. Are they for server connections, or desktops? If for desktops, I'd be inclined to run them standalone, with a 2 x 1 Gbps LACP trunk to your core.

Regards,
Paul
James-W
Occasional Advisor

Re: IRF uplinks

Hi Paul,

They are only going to be for desktops, WAPs, printers, etc.

The reason I was looking at this was purely to have them managed as a single switch stack (9 switches), similar to Cisco 3750 stack management, but hopefully without losing the port count.

Can you create this as a single stack, but without having to run MAD (i.e. stacking, but not IRF)?

Regards,
James
Peter_Debruyne
Honored Contributor

Re: IRF uplinks

Hi,

 

The whole MAD concept is there to assist in case of a split brain, which means the stacking links are down, so there would be 2 switches on the network with the same MAC address.

MAD is just the detection mechanism of this situation, but it cannot fix your broken stack links.

Every vendor with stacking can face this scenario, but not every vendor will give you an option to detect and take action on this scenario.

 

To resolve the twin-switch scenario, MAD will remove 1 of the 2 by disabling (shutting down) the ports on one side (the side whose master has the lowest stack member ID wins; the other side shuts down its ports).

 

In the case of your 9 switches, this would mean that if units 1, 2 and 3 keep running OK (with 1 as master) but there is a split, so that units 4, 5, 6, 7, 8 and 9 are isolated (with e.g. unit 4 as new master), then this second stack would shut down all its ports.
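
If you ever end up in that state, you can see which ports MAD shut down and, once the stack links are repaired, bring them back with something like the following (command names from memory, so check the command reference for your release):

  display mad verbose
  mad restore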

 

When you do not configure MAD, you are accepting that the 2 new logical switches will stay online with the same MAC, causing all kinds of tricky network situations.

 

For the 5500, the default behaviour when the master changes is that after 6 minutes the stack will assign itself a new MAC address, so then the 2 logical switches will appear as unique devices again (no config needed for this).
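
That 6-minute behaviour is the IRF MAC persistence setting, which you can inspect and change - the bracketed notes are just explanation, not part of the commands, and the exact keywords are from memory, so verify with ? on your 5500:

  display irf                            [shows the current MAC persistence setting]
  irf mac-address persistent timer       [keep the old bridge MAC for 6 minutes - the default]
  irf mac-address persistent always      [never give up the old bridge MAC]
  undo irf mac-address persistent        [change the MAC as soon as the master changes]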

 

Although this seems like a fix, it is not: you typically have some link-agg with LACP configured from the edge stack to the core. Assume you have 2 uplinks from the stack: one from member 1 to the core and one from member 5 to the core.

 

When all is fine, there is 1 logical switch (of 9 physical units) which operates with a single MAC, so the core will see the same LACP ID on the link-agg to the edge.

When there is a split stack (assume 1/2/3 vs 4/5/6/7/8/9), for 6 minutes each of the 2 new logical edge switches is still using the original MAC, so the core will see no change. Realise, however, that the core will load-balance traffic over the 2 links, so it could send data intended for a client connected to unit 1 over the link to unit 5. Since the stack is broken, unit 5 cannot reach unit 1 anymore, so the packet is dropped. In other words, during these 6 minutes you have to be "lucky" for your traffic to reach the correct host.

 

After 6 minutes the situation changes: since the logical switch with 4/5/6/7/8/9 will assign itself a new MAC, this new MAC will be reported over LACP.

The core will now see that there are different LACP IDs (based on the MAC) on the link-agg, and it will block (in software) all in/out traffic on 1 of the links, ensuring the network is stable again.

The link to the switch with the lowest LACP ID will be preserved, and since the new MAC is auto-generated, it is unpredictable which side will remain connected.

The net result is that e.g. users on 1/2/3 will remain online, while users on 4/5/6/7/8/9 will still have a physical link on their PC, but the uplink is blocked, so they have no access to the core.
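
You can watch this from the core side with the link-aggregation display commands - for example (aggregation number is a placeholder):

  display link-aggregation verbose Bridge-Aggregation 1

After the split, the two member ports report different partner system IDs, and the port that no longer matches the selected ID drops out of the aggregation.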

 

I sometimes refer to this as implicit MAD, since it has the same result as MAD, without configuring MAD.

 

This config can be verified with dis irf (mac persistent option).

 

So with or without MAD: a broken stack cannot be automatically fixed!

 

Hope this helps in understanding the situation,

Best regards, Peter.


Bjorn Lagace
Advisor

Re: IRF uplinks

Hi,

I know this is an old thread, but I have an almost similar situation.

I was thinking of the following :

Say you have an IRF stack consisting of 4 members (1-2-3-4).

You uplink member 1 to your core with a 10GbE link and member 4 with a 1GbE link, no LACP involved.
STP would then block the 1GbE link and uplink the stack through the 10GbE.

We don't apply any MAD configuration.
When a split brain occurs (no matter at which position), the remaining stack containing member 1 stays online; the member(s) that got split off are offline, since the 1GbE link stays blocked due to the same MAC address.

When the IRF stack changes its MAC after x minutes and forms two separate stacks with different MACs, the 1GbE link comes online and you're back operational. So you face a downtime equal to the time required for the IRF to generate the new MAC.
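
A quick way to sanity-check that design before and after a split would be something along these lines (from memory, and the bracketed notes are just explanation):

  display stp brief        [confirm the 1GbE uplink is the port in discarding state]
  display irf              [see which members are still in the local stack]
  display irf topology     [see where the stack link actually broke]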

Does this sound/stand correct, or did I miss the bus somewhere?

All critics are welcome!

Bjorn Lagace