Connecting more than four HPE 1950 switches

 
SOLVED
russell_blakely
Occasional Visitor

Connecting more than four HPE 1950 switches

Hi
I have one small office building with six of these switches. It looks like the maximum number that can be IRF-stacked together is four, so there are too many switches to stack.
I don’t want to buy the fiber modules.
What’s the optimal way to connect the switches so that each switch is connected via the 10Gb connections, and there’s redundancy for the other switches if one fails?
Should I connect them in a ring and enable MSTP?
Any other configuration changes I need to make, eg LLDP?
Thanks
3 REPLIES
parnassus
Honored Contributor

Re: Connecting more than four HPE 1950 switches

The HPE OfficeConnect 1950 switch series' IRF limit is four members, so that's it.

You haven't been very clear about your desired network topology (an IRF ring? OK for four switches, but what about the remaining two?) or your actual physical/geographical layout ("small office" means little if you're talking about using up to six 24- or 48-port switches all together; that's a lot of 1GBASE-T ports for a "small office").

Just speculating: since you don't want to purchase SFP+ transceivers, your connectivity options are limited to BASE-T copper ports, i.e. 1GBASE-T or 1/10GBASE-T physical ports (no SFP+ at all). With that prerequisite, it all depends on how you want to deploy the IRF fabric with four of your six switches (an IRF ring topology, or an IRF daisy chain). Almost all 1950 models have 24/48 1GBASE-T ports plus two 10GBASE-T ports plus two SFP+ slots [*].

Assuming you use only the 10GBASE-T ports for switch-to-switch IRF links, it comes down to how many 10GBASE-T ports you can bind per logical IRF port and which IRF topology you deploy. With the limited number of 10GBASE-T ports these switches have, a three- or four-member fabric forces you to respect a rule of one physical 10GBASE-T port per logical IRF port. The topology you choose (ring versus daisy chain) in turn determines how many physical ports each member needs for its logical IRF port(s) (yes, it's a little circular: A depends on B, which depends on A). In short: with a ring, every member uses both of its 10GBASE-T ports; with a daisy chain, the first and fourth switches each keep one 10GBASE-T port free for other uses, while the second and third switches use both 10GBASE-T ports for IRF and have none free.

To be clear: with these switches, you can bond two 10GBASE-T ports into a single logical IRF port (giving that IRF port redundant links) only with a daisy-chain topology on a two-member IRF fabric.

With a four-member IRF fabric, particularly on the 1950, you're forced, as written above, to bind one physical 10GBASE-T port to one logical IRF port, no more. Other switches with more 10G ports allow more freedom in deploying IRF and in bonding physical ports to logical IRF ports.
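For reference, binding a physical port to a logical IRF port on a Comware-based switch looks roughly like this (a hedged sketch: the 1950 is primarily web-managed with a restricted CLI, and the member and port numbers below are purely illustrative):

```
# Hypothetical daisy-chain member 1: one 10GBASE-T port bound to IRF port 1/1.
system-view
irf member 1 priority 32                           # higher priority: preferred master
irf-port 1/1
 port group interface ten-gigabitethernet 1/0/25   # the single physical 10G link
 quit
irf-port-configuration active                      # apply the IRF port bindings
```

Note that Comware typically requires the physical port to be administratively down before it can be added to an IRF port, and a switch reboots when it joins the fabric.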

Extra-IRF switches: the 5th and 6th switches are out of the picture (they will not be members of the IRF fabric described above). They should be connected to the fabric via LAGs (aggregated links), using, on the IRF side, the remaining free 10GBASE-T ports (or SFP+ ports, but we know you don't want those). Free ports remain only if the fabric is a daisy chain, leaving one 10GBASE-T port free on the 1st switch and one on the 4th. Clearly this also means the 5th and 6th switches need to be physically close enough to the IRF fabric discussed above; copper cables are copper cables.
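If free ports do remain, a cross-member aggregation toward one of the extra switches might look roughly like this in Comware-style CLI (a hedged sketch; the aggregation group number and port numbers here are purely illustrative):

```
# Hypothetical: bundle the free 10GBASE-T port on member 1 and the one on
# member 4 into a single dynamic (LACP) aggregation facing switch 5.
interface Bridge-Aggregation 5
 link-aggregation mode dynamic
 quit
interface ten-gigabitethernet 1/0/26
 port link-aggregation group 5
 quit
interface ten-gigabitethernet 4/0/26
 port link-aggregation group 5
```

The matching configuration on switch 5 would aggregate its two uplink ports into a dynamic LAG the same way.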

[*] List:

  • HPE 1950 24G 2SFP+ 2XGT Switch (JG960A)
  • HPE 1950 48G 2SFP+ 2XGT Switch (JG961A)
  • HPE 1950 24G 2SFP+ 2XGT PoE+ Switch (JG962A)
  • HPE 1950 48G 2SFP+ 2XGT PoE+ Switch (JG963A)
  • HPE 1950 12XGT 4SFP+ Switch (JH295A)
russell_blakely
Occasional Visitor

Re: Connecting more than four HPE 1950 switches

Parnassus, thanks for thinking about this.

My current thinking is to do it as follows. Do you see any problems with this?

Let’s ignore for this exercise the limitations of copper cables.

Not use IRF stacking at all, because the number of switches exceeds the IRF maximum of four. So no ports would be bound to IRF ports, since we're not using IRF.

Physically connect the switches as a ring, so –

Switch 1 10G port A to Switch 2 10G port B
Switch 2 10G port A to Switch 3 10G port B
Switch 3 10G port A to Switch 4 10G port B
Switch 4 10G port A to Switch 5 10G port B
Switch 5 10G port A to Switch 6 10G port B
Switch 6 10G port A to Switch 1 10G port B

There would be no port aggregation.

Enable MSTP to deal with the loop.
Enable LLDP.
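If you go this route and have CLI access, the core of that configuration in Comware-style CLI might look like the sketch below (hedged: the 1950 exposes only a restricted CLI by default, and command availability may vary by firmware; the priority value is illustrative):

```
system-view
stp mode mstp                  # run MSTP rather than legacy STP/RSTP
stp instance 0 priority 4096   # on the intended root switch; lower value wins
stp global enable              # enable spanning tree globally
lldp global enable             # enable LLDP neighbor discovery
```

You would give a second switch the next-lowest priority (e.g. 8192) so that root failover is deterministic.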

Thank you

parnassus
Honored Contributor
Solution

Re: Connecting more than four HPE 1950 switches

Hello, it's a possible scenario (totally different from deploying an IRF fabric with up to four members).

IMHO, with that topology you're just physically looping your six switches, and logically cutting the physical loop with MSTP to prevent a forwarding loop from forming. Even though you're going to properly set (M)STP, STP priorities, and LLDP for the whole loop-connected group, there is not, strictly speaking, full redundancy in doing so (I mean: against what exactly are you looking for redundancy?). If a switch in the loop dies, an STP recalculation will happen, and this will probably cause some disruption; its magnitude depends on many factors, STP priorities and the MSTP configuration above all. The previously blocked uplink (the one you placed to close the loop) must then be released (re-enabled) to let east-west traffic flow through the chain again, clearly excluding the dead switch.
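To keep that recalculation short and predictable, the usual mitigations are to pin the root and backup root explicitly and to mark host-facing ports as edge ports so they don't trigger topology-change events. A hedged Comware-style sketch (port number illustrative; command availability on the 1950's restricted CLI may vary):

```
# On the switch chosen as root:
stp instance 0 root primary
# On the switch chosen as backup root:
stp instance 0 root secondary
# On each host-facing access port (never on the ring uplinks!):
interface gigabitethernet 1/0/1
 stp edged-port
```

Edge ports go straight to forwarding and their link flaps don't cause tree recalculations, which limits the disruption to actual uplink or switch failures.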

What will all of this mean for your access hosts (those distributed across all six of your switches)?