
Consider implementing ISC or VSF

 
SOLVED
Dennis_HP-ATP
Occasional Advisor

Consider implementing ISC or VSF

Hi guys,

 

Currently we have a collapsed core (2 x 5406R zl2, J9850A) and we are considering turning it into one virtual switch acting as the campus core.

The access switches (Aruba 2920 24G PoE, J9727A) are already backplane-stacked, and at our HQ we have 27 access-layer stacks.

Does anyone have experience implementing either ISC or VSF?

With this in production we could do away with STP blocking on the uplinks and gain hitless patch upgrades, OSPF graceful restart and persistent virtual MAC addresses (perhaps possible with VSF?).

 

As a side note:

1. We have 27 access-layer stacks, consisting of 2, 3 or 4 members each.

2. We are upgrading every stack to a 2 x 10G fiber uplink to the collapsed campus core.

3. The campus core switches are currently running KB.15.17.0008 (a recommended software version would be much appreciated).

4. What is your opinion on scalability, performance and CPU load when the core has to handle 27 x 10G uplinks from the access layer?

Could this result in constantly high CPU load, and is the chassis capable of handling 27 x 10G uplinks? We are also getting rid of Telnet and implementing SSHv2, which will increase the CPU load a bit as well.

Any field experience would be very useful. :-)

 

"show modules" on the 5406 campus core shows:

   MM1 - J9827A Management Module 5400R zl2

 

"show modules" for slots A and B:

HP J9993A 8-port 1G/10GbE SFP+ v3 zl2 Module

We are considering a layout where slots A, B and C plus 3 ports in slot D are all J9993A 8-port 1G/10GbE SFP+ v3 zl2 modules, each port serving as a 10G downlink, in order to provision 27 stacks in total (3 x 8 + 3 = 27).

 

Many thanks in advance for your "best practices" advice.

 

Rgds,

Dennis

3 REPLIES
parnassus
Honored Contributor
Solution

Re: Consider implementing ISC or VSF

The very first thing that comes to mind reading your scenario is to be cautious with port (over)subscription (especially on the SFP+ 10Gbps uplinks)... so careful planning of what is connected to which port could be important enough. Oversubscription is discussed widely, so you should find information about it easily (the latest discussion I jumped into where oversubscription was briefly discussed was this).
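
Just to make the (over)subscription point concrete with some back-of-the-envelope math of mine (assuming a worst case: a 4-member 2920-24G stack with every edge port busy at 1 GbE):

   4 members x 24 ports x 1 Gbps = 96 Gbps edge capacity
   2 uplinks x 10 Gbps           = 20 Gbps uplink capacity
   96 / 20                       = 4.8:1 oversubscription

For typical campus access traffic that ratio is usually acceptable, but it shows why knowing your real traffic profile matters before settling on 2x10G per stack.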

If I understood you correctly, each Aruba 2920 stack (2, 3 or 4 members) will be equipped with a 2x10Gbps fiber uplink to the core switches (the VSF stack)... so, basically, you will distribute those physical links across the VSF stack: one 10Gbps link to the first VSF member, the other 10Gbps link to the second one (see the sketch below).
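
Once VSF is up, those two uplinks would typically be aggregated into a single LACP trunk spanning both VSF members, so each stack sees one logical uplink with no STP-blocked path. A minimal sketch (port names and trunk IDs here are placeholders, not your real ones):

   core(config)# trunk 1/A1,2/A1 trk10 lacp
   core(config)# vlan 10 tagged trk10

and, on the 2920 stack side, something like:

   stack(config)# trunk 1/A1,2/A1 trk1 lacp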

Then a second thought: VSF requires (and admits) just one VSF link, and that VSF link supports up to 8 aggregated 10G (or 40G) physical ports... so (a) you need to take that into account in terms of port provisioning and (b) you need to think about 10G port grouping (probably dedicating the ports of a given port group to the VSF role only, or to the uplink role only, but - I think - not mixing ports assigned to different roles within the same group). I would also add that (c) you should distribute the VSF ports across different modules, to avoid a single module carrying all the VSF inter-switch traffic and acting as a SPoF (a minimal bring-up sketch follows below). Clearly, to evaluate how many VSF ports are necessary for the VSF link, an idea of the North<-->South and East<-->West traffic volumes is needed (considering also that you - probably - have servers connected to the VSF stack too).
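
To illustrate (c), a minimal VSF bring-up sketch for the first chassis could look like the following (the domain ID and port choices are assumptions of mine; follow the exact procedure, including how the second chassis joins and gets renumbered, in the VSF Configuration Guide):

   core1(config)# vsf member 1 link 1 A24,B24
   core1(config)# vsf member 1 priority 255
   core1(config)# vsf enable domain 1

Note that A24 and B24 sit on two different modules, so the VSF link survives a single module failure.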

Forget about any 2nd MM (if already installed) in each 5400R zl2: read here, just as an example. Naturally, the official HPE/Aruba switch documentation (and the VSF Configuration Guide as well) is clear about that.

You must use - at least - a KB.16.01.xxxx software version... but I advise you to move directly to the KB.16.03 or KB.16.02 branch, especially since your switches are - as of today - still running on the KB.15.17 branch (see the upgrade sketch below).
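
For the jump from KB.15.17 to a KB.16.0x build, the usual two-image ProVision procedure applies: stage the new image into secondary flash, boot from it, and keep the old image in primary as a rollback. A sketch (the TFTP server address and filename are placeholders):

   core# copy tftp flash 192.0.2.10 KB_16_03_0004.swi secondary
   core# show flash
   core# boot system flash secondary

Check the release notes first: jumping across several branches sometimes requires stepping through an intermediate version.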

Recently the VSF Fast Software Upgrade feature was added with the release of KB.16.03: read about it here. Again, the official HPE/Aruba documentation covers that new feature too.

A mention of the necessity to implement an external MAD device (to manage split-brain events) - or to use OoBM for that - and the requirement to use only v3 zl2 modules closes my initial thoughts (a configuration sketch follows).
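
If I recall the CLI correctly (please verify against the VSF Configuration Guide for your release, as these are from memory), the v3-only requirement and an LLDP-MAD helper would be configured along these lines:

   core(config)# no allow-v2-modules
   core(config)# vsf lldp-mad ipv4 192.0.2.20 v2c public

where 192.0.2.20 would be an external helper switch reachable by both members; OoBM-MAD is the alternative if you prefer to use the OoBM ports.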

Well... that's just scratching the surface (I agree that a discussion about backplane performance considering all those SFP+ links would be interesting).


I'm not an HPE Employee
Dennis_HP-ATP
Occasional Advisor

Re: Consider implementing ISC or VSF

Hi Parnassus,

Many thanks for your reply.

I will follow your advice, read the Community articles, and take all matters (traffic flows and oversubscription) into consideration before making a decision.

 

Rgds,

Dennis

parnassus
Honored Contributor

Re: Consider implementing ISC or VSF

Hi Dennis, thank you.

I forgot to add this link (it's from the Aruba Airheads Community) regarding OoBM-MAD deployment on the Aruba 5400R zl2.

Then, well, as you understood, the "Chassis Redundancy" concept available on the Aruba 5400R zl2 switch series (that is, when MM1 and MM2 are both installed and enabled in a single Aruba 5400R zl2 switch) - deployed through the "Nonstop Switching mode" or the "Warm Standby mode" configuration variants - is mutually exclusive with VSF deployment... so if you plan to use "Chassis Redundancy" on each 5400R zl2, that is a VSF stopper (and vice versa).
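
Before enabling VSF, a quick sanity check of where each chassis stands (output details vary by release):

   core# show redundancy

If a standby MM shows up there, plan to remove it, because dual-MM Chassis Redundancy and VSF cannot be combined.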

Probably there are many other things that could be said about VSF deployment. For example: the maximum MAC/ARP entries manageable by the VSF stack (64k/25k, for up to 24k access devices)... or the fact that both Aruba 5400R zl2 switches must work with v3 zl2 modules only - so also forget about still using v2 zl modules - and must be configured to run in "v3 only mode" too. One last bit: an interesting video (a must-see IMHO) to watch is, for example, the ATM16 session "Take a Walk On The Wired Side" (speakers: Justin Noonan and Ruben Iglesias).
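
To put those table sizes in perspective with some rough math of mine (worst case: every access port in use):

   27 stacks x 4 members x 24 ports = 2,592 edge ports maximum

so even with a few MAC entries per port (IP phone plus PC, and so on) you stay far below the 64k MAC / 25k ARP ceilings mentioned above.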


I'm not an HPE Employee