Distributed trunking
11-23-2010 03:53 AM
http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-June2009-59923059-12-PortTrunk.pdf
shows the configuration of distributed trunking on ProCurve switches, but it says nothing about the configuration on the server side.
What protocol do I need to use if I connect:
- a Linux server with NIC "bonding"
- a Windows server with Intel or Broadcom cards
- an ESX host, as described here:
http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/
- an EMC Celerra NAS
Always 802.3ad (LACP)? Which Linux bonding mode would that be?
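For context, the switch-side piece covered by that manual boils down to something like the following. This is only a rough sketch from memory of the ProCurve distributed-trunking commands; the port and trunk numbers are placeholders, and the exact syntax varies by firmware version, so check the manual before typing anything:

```
! On each of the two DT peer switches (syntax approximate, check your firmware):
trunk 1-2 trk1 trunk            ! static trunk used as the inter-switch connect (ISC)
switch-interconnect trk1        ! designate that trunk as the ISC between the DT peers
trunk 10 trk10 dt-lacp          ! server-facing distributed trunk, LACP towards the host
```

The point of the question below is what has to run on the other end of that `dt-lacp` trunk.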
11-23-2010 07:23 AM
Re: Distributed trunking
There is a newer/better document:
http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-Mar10-12-PortTrunk.pdf
On the end servers you MUST use dynamic LACP (yes, 802.3ad); that means bonding mode 4 in the Linux world and "802.3ad Dynamic with Fault Tolerance" for HP teaming.
Static LACP is required between the switches, as per the documentation.
Best regards,
antonio
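On the Linux side, "bonding mode 4" looks roughly like this with a modern iproute2. This is a minimal sketch, not a tested recipe for 2010-era kernels (those typically used `modprobe bonding mode=4 ...` options instead); the interface names `eth0`/`eth1` and the address are assumptions for your host:

```shell
# Create an 802.3ad (LACP, mode 4) bond and enslave two NICs to it.
modprobe bonding
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# Verify that LACP actually negotiated with the switch:
cat /proc/net/bonding/bond0
# should report "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
```

If the switch side is not offering LACP (e.g. a plain static trunk), the bond will come up but the aggregator will stay degraded, which is the usual symptom of a mode mismatch.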
11-23-2010 07:26 AM
Re: Distributed trunking
http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/
says that vSphere works with ProCurve Distributed Trunking. Is there any reference other than this blog?
11-23-2010 12:09 PM
Re: Distributed trunking
I've read your posted link and agree with the comments.
ESX doesn't support LACP on its native (d)vSwitches (you need the Nexus 1000V); however, in ESX you can still combine multiple pNICs into teaming groups. If you use an active/active group with "IP hash" as the policy, you can effectively form a "raw EtherChannel" where all pNICs are in the forwarding state, since ESX inspects the IP flow and "pins" MAC addresses to distribute traffic across all pNICs.
The ESX vSwitch can neither exchange nor understand LACP PDUs, so if you want to form an active/active team you MUST force the physical switch to "channel" its interfaces:
in ProCurve jargon, you have to create a static trunk (strictly speaking, static LACP doesn't exist =)
So yes, using dt-lacp and an ESX teaming with "IP hash" and link-failure detection can work (no beaconing!!)...
but IMHO it violates my KISS mantra, and it's not worth the effort, especially when your bandwidth-hungry apps are storage-oriented and you have native MPIO solutions.
I hope this clarifies my previous post.
Regards,
Antonio
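To see why "IP hash" works against a plain static trunk, here is a small illustration of the policy's behaviour. This is not VMware's actual code, just a sketch of the commonly documented idea: the uplink is chosen from a hash of the source and destination IPs, so a given IP pair is always pinned to the same pNIC while many different flows spread across the whole team:

```python
import ipaddress

def ip_hash_uplink(src: str, dst: str, num_pnics: int) -> int:
    """Pick an uplink index from the XOR of the two IPv4 addresses.

    Illustrative only: the real ESX hash details differ, but the
    flow-pinning behaviour is the same.
    """
    s = int(ipaddress.IPv4Address(src))
    d = int(ipaddress.IPv4Address(dst))
    return (s ^ d) % num_pnics

# The same IP pair is always pinned to the same pNIC...
assert ip_hash_uplink("10.0.0.1", "10.0.0.50", 2) == ip_hash_uplink("10.0.0.1", "10.0.0.50", 2)

# ...while many client flows spread over both pNICs of the team.
uplinks = {ip_hash_uplink("10.0.0.1", f"10.0.0.{i}", 2) for i in range(2, 60)}
print(sorted(uplinks))  # → [0, 1]
```

Because each flow is deterministically pinned, no per-packet state has to be shared with the switch, which is exactly why the physical side must be a static channel rather than a negotiated (LACP) one.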
11-24-2010 03:16 AM
Re: Distributed trunking
I understand what you say, thanks.
I'm evaluating the pros and cons of all the methods to "attach" storage to VMware (and not only to it...):
If possible, I use FC, which works at its best without much configuration effort.
As an alternative to FC, I'm looking at iSCSI (with MPIO as the failover/load-sharing option) and NFS (with network-level solutions for failover/load-sharing).
I'm trying to avoid any prejudiced position and am reading the blogs of gurus like:
http://virtualgeek.typepad.com/