
Distributed Trunking

 
nhstech
Collector

Distributed Trunking

Hello,

I'm new to these forums, but I've done a lot of research on this topic with mixed results. Nobody seems to have done exactly what I'd like to do, so I wanted to post a concept here and see what feedback others have. We had our core switch go down a few weeks ago, taking down everything including our SAN and VMware servers. Luckily we didn't lose any data, but it was a scary and stressful situation! I'd like to add some redundancy (using Distributed Trunking) to at least keep our SAN and servers online in the event of a switch outage. We have two VMware hosts and two mirrored SAN nodes in two different buildings on campus, connected via fiber. I'd like to add two additional switches, one in each location, for a total of four switches (all running K software). Here is my whiteboard sketch with key:

 

VMW 1&2: VMware host servers (HP ProLiant)

SAN 1&2: SAN nodes (HP StoreVirtual)

1-4: HP ProCurve switches (one 5406zl and three 3500yl)

DT Links: Distributed Trunks

ISC: Interswitch Connect Links

KA: Keepalive Links

 

Everything will have two links (trunked) to each device (as indicated by the two lines). Am I on the right track here? TIA!
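
Based on my reading of the K-software advanced traffic management guide, here's a rough sketch of what the DT config for one switch pair (say, switches 1 and 2) might look like. The port numbers, keepalive VLAN, and IP addresses below are placeholders, and the exact syntax may vary with software version, so please double-check it:

; Switch 1 (mirror this on switch 2, with its own keepalive IP)
; ISC: two trunked ports between switch 1 and switch 2
trunk A1-A2 trk1 trunk
switch-interconnect trk1

; Peer-keepalive: dedicated VLAN and link between the DT pair
vlan 900
   name "DT-KEEPALIVE"
   untagged A3
   ip address 10.255.255.1 255.255.255.252
   exit
distributed-trunking peer-keepalive vlan 900

; Distributed trunk toward SAN 1, one leg on each switch in the pair
trunk A5 trk10 dt-lacp

Does that look about right?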

2 REPLIES
Vince-Whirlwind
Honored Contributor

Re: Distributed Trunking

Looks fine.

 

The only query I would have relates to link provisioning and traffic flows: if both VMware hosts are accessing SAN1 simultaneously, SAN1's links are potentially 2:1 oversubscribed in that scenario, since two hosts' worth of traffic is funnelling into one node's pair of links.

 

And the only caution I'd offer relates to your switch selection: what you really want for this role are datacentre top-of-rack switches. The 3500s may be sufficient for your environment, but they are not necessarily engineered with high-volume data flows in mind. Use the wrong switch for datacentre East-West traffic and you can run into problems, because the switches just don't have the deep buffers needed to absorb high-volume bursts. Just keep an eye on the buffer and drop performance on the 3500s.
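
If you want a quick way to watch for that from the CLI, the per-port counters will show drops as they accumulate. Something along these lines (output fields vary a little between software versions, so treat this as a sketch):

; per-port counters, including dropped frames
show interfaces 1-24
; live Rx/Tx load per port
show interfaces port-utilization
; reset counters before a busy window, e.g. the nightly backup
clear statistics global

If the drop counters climb during your heaviest traffic window, that's the shallow buffers showing up.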

nhstech
Collector

Re: Distributed Trunking

Thank you for the reply. I probably should have mentioned that we are a pretty small environment (23 VMs), and the SAN and VMware hosts are currently connected via a single 2 Gb trunk on 2810 switches. The only time we even remotely come close to saturating a single gig link is during the nightly Veeam backup, so I think the 3500s will suffice. That, and our current SAN nodes only have two NICs each, though they will soon be upgraded to models with at least four. I was just having a hard time wrapping my head around connecting everything, and the latest admin guides don't seem to have any pictures, and I like pictures! :)