Switches, Hubs, and Modems


Occasional Advisor

Procurve redundancy

I have 2 x 2810-24G and 3 x 2650-48 ProCurve switches, 2 x ProLiant DL360s (domain controllers) and 3 x ProLiant DL380s (a cluster).

What would be the best configuration for redundancy? My initial thoughts are as follows:

Enable Spanning Tree.
Team the NICs on the servers and connect each card to one of the 2810s so each server is connected to both 2810s.
Connect the 1G uplink ports on the 2650s to the 2810s so each 2650 is connected to both 2810s. PCs and printers would connect to the 2650s.
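To make the plan concrete, the cabling I have in mind per switch/server looks like this (port numbers are just placeholders, not my actual layout):

2650 uplink port 49  -->  2810 #1
2650 uplink port 50  -->  2810 #2
Server NIC 1         -->  2810 #1
Server NIC 2         -->  2810 #2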

Do I need to connect the two 2810s together?
Should I trunk the connections to the servers and the switches and not spread them over both 2810s?
Do I need to do anything with the Spanning Tree priority I have read about?

Any advice would be welcome.

Honored Contributor

Re: Procurve redundancy

For the best redundancy configuration on ProCurve, use ProVision ASIC switches (the 3500/5400 series): they support VRRP (Virtual Router Redundancy Protocol), which gives you L3 redundancy, and at the same time MSTP or STP for L2 redundancy.

The 2810 is an L2-only switch (no routing), so VRRP cannot be configured on it. That means you can only get L2 redundancy on this network, using RSTP or STP.

Please see the attached file for my advice on deploying your switches, and enable RSTP on the network.
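Enabling RSTP is only a couple of commands; exact syntax varies slightly by firmware version, so treat this as a sketch:

On the 2650s (select the RSTP protocol version, then enable spanning tree):
Sw2650(config)# spanning-tree protocol-version rstp
Sw2650(config)# spanning-tree

On the 2810s:
Sw2810(config)# spanning-tree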


Honored Contributor

Re: Procurve redundancy


Your plan is good and should work well, though as Cenk said it gives you L2 redundancy only.

- Connect the two 2810s together (better as a trunk of at least 2 ports).
- A port trunk must terminate on a single switch, so don't split one trunk across both 2810s; just team the server NICs and connect one NIC to each 2810.
- Spanning-tree priority is important here, so on the first 2810 run:
Sw(config)# spanning-tree priority 0    (root)

and on the second run:
Sw(config)# spanning-tree priority 1    (backup)
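The 2810-to-2810 trunk from the first point can be built like this (ports 23-24 are just example ports; a static "trunk" works too if you'd rather not use LACP):

On each 2810:
Sw(config)# trunk 23-24 trk1 lacp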

Both the 2810 and the 2600 support Multiple Spanning Tree (MSTP), so you can create at least 2 VLANs, one for PCs and one for servers, and then create 2 MST instances, each carrying one VLAN.
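A sketch of that two-instance layout, assuming VLAN 10 for servers and VLAN 20 for PCs (the VLAN IDs and region name are placeholders):

On every switch in the region:
Sw(config)# spanning-tree config-name "MYREGION"
Sw(config)# spanning-tree config-revision 1
Sw(config)# spanning-tree instance 1 vlan 10
Sw(config)# spanning-tree instance 2 vlan 20

Then make each 2810 root for one instance, so traffic is split across both:
Sw2810-1(config)# spanning-tree instance 1 priority 0
Sw2810-1(config)# spanning-tree instance 2 priority 1
Sw2810-2(config)# spanning-tree instance 1 priority 1
Sw2810-2(config)# spanning-tree instance 2 priority 0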

With this scenario you get both redundancy and load balancing.


Good Luck !!!
Science for Everyone