Switches, Hubs, and Modems

Re: Distributed trunking

Domenico Viggiani
Super Advisor

Distributed trunking

The doc:
shows the configuration of distributed trunking on ProCurve switches but says nothing about the configuration on the server side.
What protocol do I need to use if I connect:
- a Linux server with NICs "bonding"
- a Windows server with Intel or Broadcom cards
- an ESX host as described here:
- an EMC Celerra NAS
Is it always 802.3ad (LACP)? Which Linux bonding mode does that correspond to?
Antonio Milanese
Trusted Contributor

Re: Distributed trunking

Hi Domenico,

there is a newer/better document:


On the end servers you MUST use dynamic LACP (yes, 802.3ad): that means bonding mode 4 in the Linux world and "802.3ad Dynamic with Fault Tolerance" for HP teaming.
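For what it's worth, a minimal sketch of mode-4 bonding on a Linux host (the interface names, address, and the use of ifenslave are placeholders for illustration; check your distro's preferred bonding setup):

```shell
# Load the bonding driver in 802.3ad (LACP) mode: miimon enables link
# monitoring, lacp_rate=fast asks the partner for LACPDUs every second.
modprobe bonding mode=802.3ad miimon=100 lacp_rate=fast

# Enslave the physical NICs into bond0 and bring the bond up
# (bond0/eth0/eth1 and the address below are placeholder names).
ifenslave bond0 eth0 eth1
ip link set dev bond0 up
ip addr add 192.0.2.10/24 dev bond0
```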

Static LACP is required between switches as per documentation.
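From memory, the switch-to-switch side looks roughly like the sketch below; treat the port numbers, trunk-group names, and the exact keywords as assumptions to verify against the DT documentation:

```
; On each of the two DT peer switches (ports are placeholders):
; first the inter-switch connect (ISC) that carries DT state
switch(config)# trunk 23-24 trk10 trunk
switch(config)# switch-interconnect trk10

; then the distributed trunk toward the server, one leg per switch
switch(config)# trunk 1 trk1 dt-lacp
```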

Best regards,


Domenico Viggiani
Super Advisor

Re: Distributed trunking

Dynamic LACP is not supported on VMware, but this post:
says that vSphere works with ProCurve Distributed Trunking. Is there any reference other than this blog?
Antonio Milanese
Trusted Contributor

Re: Distributed trunking

Hello Domenico,

I've read the link you posted and agree with the comments.
ESX doesn't support LACP in its "native" (d)vSwitches (you need the Nexus 1000v). However, in ESX you can still combine multiple pNICs into teaming groups: if you use an active/active group with "IP hash" as the load-balancing policy, you effectively form a "raw" EtherChannel in which all pNICs are in forwarding state, since ESX inspects the IP flow and "pins" MAC addresses to distribute traffic across all pNICs:
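On the classic ESX service console, the uplink part can be sketched as below (vSwitch0 and the vmnic names are placeholders); the "Route based on IP hash" policy itself is set in the vSphere Client under the vSwitch NIC-teaming properties:

```shell
# Link two physical uplinks to the same vSwitch (placeholder names);
# both will forward once the teaming policy is "IP hash".
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# List the vSwitches to verify the uplinks were added.
esxcfg-vswitch -l
```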

An ESX vSwitch can neither exchange nor understand LACP PDUs, so if you want to form an active/active team you MUST force the physical switch to "channel" its interfaces:
in ProCurve jargon, you have to create a static trunk (strictly speaking, static LACP doesn't exist =)
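i.e. something along these lines on the ProCurve side (the port numbers and trunk group are placeholders):

```
; Bundle ports 1-2 into a static, non-protocol trunk (no LACP PDUs),
; matching the "IP hash" active/active team on the ESX end.
switch(config)# trunk 1-2 trk1 trunk
```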

So yes, using dt-lacp and an ESX team with "IP hash" and link-failure detection can work (no beaconing!!)...
but IMHO it violates my KISS mantra, and it's not worth the effort, especially when your bandwidth-hungry apps are storage-oriented and you have native MPIO solutions available.
Hope this clarifies my last post.


Domenico Viggiani
Super Advisor

Re: Distributed trunking

I understand what you're saying, thanks.
I'm evaluating the pros and cons of all the methods to "attach" storage to VMware (and not only to it...):
if possible, I use FC, which works at its best without much configuration effort.
As an alternative to FC, I'm looking at iSCSI (with MPIO as the failover/load-sharing option) and NFS (with network-level solutions for failover/load-sharing).
I'm trying to avoid any prejudiced position and am reading the blogs of gurus like: