
Re: Trunking on 5412zl

 
kghammond
Frequent Advisor

Re: Trunking on 5412zl

No, we have not. For now, we have switched vSphere back to the default teaming setting (basically ALB).

We tried to get LACP working on vSphere with dt-lacp on the HP switches, but we kept having intermittent network issues.
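For anyone following along, a distributed LACP trunk on these switches looks roughly like this (a sketch only; the port IDs, trunk name and VLAN are placeholders, and the exact syntax may vary between K-series firmware releases):

  On each 5412zl:
    switch-interconnect a1
    trunk a5 trk20 dt-lacp
    vlan 100 tagged trk20

The switch-interconnect port is the ISC link that carries the distributed-trunking state between the two chassis, and the server's two uplinks land in trk20, one port on each switch.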

We also just had a major network outage: one of our ProCurve switches rebooted, and the 10 GbE dt trunk between the two flaked out on reboot. We ended up removing and re-inserting the 10 Gb blade to get the link back up, but by then we had switched back to a slew of 1 Gb links for our trunk. We aren't sure whether the distributed trunking caused the switch to reboot or whether it was the 10 Gb module.

The two switches are still on different firmware versions, so that might be our issue... We are working to get all systems redundant across both switches so we can upgrade firmware more easily.

One downside to iSCSI and vSphere: 95% of our environment is solely dependent on our ProCurve switches.

Based on our experiences so far, unless we see some tested, known-working configurations, we may wait for HP/VMware to evolve more before we try dt-lacp again.
Mads Sørensen
New Member

Re: Trunking on 5412zl

Okay, too bad, I was hoping to get the golden path from you :-)

We also have 2 x HP ProCurve 6600, but here we are only using LACP to the NetApp storage.
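For reference, that plain LACP trunk toward the NetApp is just standard ProCurve aggregation, roughly like this (a sketch; the port numbers and VLAN are placeholders), with a matching dynamic multimode vif/ifgrp on the filer side:

  trunk a1,a2 trk10 lacp
  vlan 200 tagged trk10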

But our counters show that we are not even close to using the 10 GbE capacity, so we will try to build a smaller environment with just 1 Gb connections.

But when using NFS, it's not possible to trunk ports and also get redundant switch connections unless we use dt-lacp. And we want to trunk because we want more than a 1 Gb connection; we would like 2 x 2 Gb (4 x 1 Gb), so that we have 2 NFS vmks = 2 datastores, each of which can push 1.5-2 Gb.

We can't achieve that with VMware's port-based load balancing, because one vmk = one port, and therefore it will never exceed 1 Gb. :(
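One workaround without a trunk (a sketch only; the NFS-A/NFS-B portgroup names, addresses and exports are made up, and the portgroups must already exist on the vSwitch) is to give each NFS vmk its own subnet and mount each datastore through a different filer address, so the two vmks end up on different uplinks:

  esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 NFS-A
  esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 NFS-B
  esxcfg-nas -a -o 10.0.1.50 -s /vol/ds1 datastore1
  esxcfg-nas -a -o 10.0.2.50 -s /vol/ds2 datastore2

But that still caps each datastore at 1 Gb; it only spreads the two datastores across two links, which is exactly the limitation above.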

/Mads