Switches, Hubs, and Modems

Trunking on 5412zl


I have a 5412zl ProCurve. We upgraded our VMware infrastructure to vSphere 4, and I'm wondering, in the case of dvSwitches...

I have 3 servers with 8 NICs each, and I want to trunk them, but I want to split them across different modules in the 5412. Is it possible to have one trunk group span different modules, so that I have some redundancy for my ESX hosts?

I have a rough drawing of what I want to accomplish while maintaining all connectivity to my NICs on each ESX host. The diagram only shows 2 NICs but each ESX host actually has 8 NICs.

Is this possible on this switch?

11 REPLIES

Re: Trunking on 5412zl

For some reason the diagram never got attached. My apologies.
EckerA
Respected Contributor
Solution

Re: Trunking on 5412zl

Hi,
You can trunk across different modules. For example:
trunk a1,b1 trk1 trunk
works just fine.
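For an 8-NIC host you could spread the members over four modules. Something like this (the port letters and trunk numbers here are just an illustration, not a verified config; use whatever slots you actually have populated):

```
ProCurve Switch(config)# trunk a1,b1,c1,d1 trk1 trunk
ProCurve Switch(config)# trunk a2,b2,c2,d2 trk2 trunk
ProCurve Switch(config)# show trunks
```

`show trunks` will list each trk group with its member ports, so you can confirm the group formed across modules.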

hth

Alex

Re: Trunking on 5412zl

OK, thanks.

Next is the second part. In our DR site we are going to have two 24-port switches, which I have not purchased yet. Is there some way with ProCurve switches to trunk across two switches, like Nortel's split multi-link trunking?

I would rather not go with MSTP, since in that scenario I only get half my bandwidth, and failover just gives me the other half. I would like full use of all ports unless one of the switches tanks.

Thanks for the help.

Adam

Re: Trunking on 5412zl

OK, I think I may have answered my own question while browsing the forums, but I just want to verify here. Is the only way to do split multi-link-style trunking to use the 3500yl with VRRP?

Is there no other way than that, or using a chassis-based switch in my DR site?

Thanks,
Adam
serpel
Trusted Contributor

Re: Trunking on 5412zl

Hi,
I guess what you want is distributed trunking (dt-trunking). It works with the 3500, 5400, 8200, and 6600, I think, and you need software K.14.xx.
The disadvantage is that you can only do dt-trunking or routing, not both.

hth
alex
Trevor Commulynx
Regular Advisor

Re: Trunking on 5412zl

Don't forget though, you can enable IP routing on switches running DT (LACP - Distributed Trunking). That means no VRRP.

Trev.
Trevor Commulynx
Regular Advisor

Re: Trunking on 5412zl

Sorry, I meant you CAN'T enable routing.
kghammond
Frequent Advisor

Re: Trunking on 5412zl

Hello,

We have been trying to get distributed trunking working with vSphere and our HP switches.

We have the dt-lacp ports set up and the vSwitches set to IP Hash.

On the HP switches, the ports don't show an LACP member. The HP switches report the same MAC on both ports (different switches), which we think is correct.

Also, with this configuration, when we tried to bring a vSphere box online, we received an error that HA could not be enabled.

ProCurve Switch 1(config)# trunk 23-24 trk1 lacp
ProCurve Switch 1(config)# switch-interconnect trk1
ProCurve Switch 1(config)# trunk 1 trk5 dt-lacp
ProCurve Switch 2(config)# trunk 23-24 trk1 lacp
ProCurve Switch 2(config)# switch-interconnect trk1
ProCurve Switch 2(config)# trunk 1 trk5 dt-lacp

We are trying the same thing as here:
http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/

I am not sure if we are doing something wrong, or if this simply does not work the way we expect it to.
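One thing we are starting to suspect: the vSphere 4 vSwitch's IP Hash policy is a static hash and does not actually negotiate LACP, which could explain why the ports never show an LACP member. If the firmware supports a static dt-trunk type alongside dt-lacp, something like this might be closer to what IP Hash expects (same port and trunk numbers as our config above; treat this as a sketch, not a verified fix):

```
ProCurve Switch 1(config)# trunk 1 trk5 dt-trunk
ProCurve Switch 2(config)# trunk 1 trk5 dt-trunk
ProCurve Switch 1# show trunks
ProCurve Switch 1# show lacp
```

`show trunks` and `show lacp` on each switch should then show whether the distributed trunk formed without waiting on an LACP partner.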

Thank you,
Kevin
Mads Sørensen
Occasional Visitor

Re: Trunking on 5412zl

Hi Kevin

Did you solve your problem?

We are about to try the same thing with dt-lacp and vSphere, so it would be nice to know if there is anything special we have to be aware of.

Thanks

/Mads
kghammond
Frequent Advisor

Re: Trunking on 5412zl

No, we have not. For now, we have switched vSphere back to the default teaming setting, basically ALB.

We tried to get LACP working on vSphere with dt-lacp on the HP switches, but we kept having intermittent network issues.

We also just had a major network outage: one of our ProCurve switches rebooted, and the 10GbE DT trunk between the two flaked out on reboot. We ended up removing and re-inserting the 10Gb blade to get the link back up, but by then we had switched back to a slew of 1Gb links for our trunk. We aren't sure whether the distributed trunking caused the switch to reboot or whether it was the 10Gb module.

We are still on different firmware versions, so that might be our issue... We are working to get all systems redundant on both switches so we can upgrade firmware more easily.

One downside to iSCSI and vSphere: 95% of our environment is solely dependent on our ProCurve switches.

Based on our experiences so far, unless we see some tested, known-working configurations, we may wait for HP/VMware to evolve more before we try dt-lacp again.
Mads Sørensen
Occasional Visitor

Re: Trunking on 5412zl

Okay, too bad. I was hoping to get the golden path from you :-)

We also have 2 x HP ProCurve 6600, but here we are only using LACP to the NetApp storage.

But our counters show that we are not even close to using the full 10GbE, so we will try to build a smaller environment with just 1Gb connections.

But when using NFS, it's not possible to trunk ports and also get redundant switch connections unless we use dt-lacp. And we want to trunk because we want more than a 1Gb connection; we would like 2x2Gb (4x1Gb), so we have 2 NFS vmks = 2 datastores, each of which can do 1.5-2Gb.

We can't achieve that with VMware's port-based load balancing, because one vmk = one port, and therefore it will never exceed 1Gb. :(

/Mads