<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Distributed trunking in Switches, Hubs, and Modems</title>
    <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716854#M24592</link>
    <description>Hello Domenico,&lt;BR /&gt;&lt;BR /&gt;I've read the link you posted and agree with the comments.&lt;BR /&gt;ESX doesn't support LACP on its "native" (d)vSwitches (you need the Nexus 1000v for that); however, in ESX you can still combine multiple pNICs into a teaming group. If you use an active/active group with "IP hash" as the load-balancing policy, you can effectively form a "raw EtherChannel" where all pNICs are in forwarding state, since ESX inspects the IP flow and "pins" MAC addresses to distribute traffic across all pNICs.&lt;BR /&gt;&lt;BR /&gt;The ESX vSwitch can neither exchange nor understand LACP PDUs, so if you want to form an active/active team you MUST force the physical switch to "channel" its interfaces: in ProCurve jargon, you have to create a static trunk (strictly speaking, static LACP doesn't exist =)&lt;BR /&gt;&lt;BR /&gt;So yes, using DT-LACP and an ESX team with "IP hash" and link-failure detection can work (no beaconing!)...&lt;BR /&gt;but IMHO it violates my KISS mantra, and it's not worth the effort, especially when your bandwidth-hungry apps are storage oriented and you have native MPIO solutions.&lt;BR /&gt;I hope this clarifies my last post.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Antonio&lt;BR /&gt;</description>
    <pubDate>Tue, 23 Nov 2010 20:09:09 GMT</pubDate>
    <dc:creator>Antonio Milanese</dc:creator>
    <dc:date>2010-11-23T20:09:09Z</dc:date>
    <item>
      <title>Distributed trunking</title>
      <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716851#M24589</link>
      <description>The doc:&lt;BR /&gt;&lt;A href="http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-June2009-59923059-12-PortTrunk.pdf" target="_blank"&gt;http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-June2009-59923059-12-PortTrunk.pdf&lt;/A&gt;&lt;BR /&gt;shows the configuration of distributed trunking on ProCurve switches, but it says nothing about the configuration on the server side.&lt;BR /&gt;What protocol do I need to use if I connect:&lt;BR /&gt;- a Linux server with NIC "bonding"&lt;BR /&gt;- a Windows server with Intel or Broadcom cards&lt;BR /&gt;- an ESX host as described here:&lt;BR /&gt;&lt;A href="http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/" target="_blank"&gt;http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/&lt;/A&gt;&lt;BR /&gt;- an EMC Celerra NAS&lt;BR /&gt;Is it always 802.3ad (LACP)? Which Linux bonding mode would that be?</description>
      <pubDate>Tue, 23 Nov 2010 11:53:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716851#M24589</guid>
      <dc:creator>Domenico Viggiani</dc:creator>
      <dc:date>2010-11-23T11:53:54Z</dc:date>
    </item>
    <item>
      <title>Re: Distributed trunking</title>
      <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716852#M24590</link>
      <description>Hi Domenico,&lt;BR /&gt;&lt;BR /&gt;there is a newer/better document:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-Mar10-12-PortTrunk.pdf" target="_blank"&gt;http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-Mar10-12-PortTrunk.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;On the end servers you MUST use dynamic LACP (yes, 802.3ad), which means bonding mode 4 in the Linux world and "802.3ad Dynamic with Fault Tolerance" for HP teaming.&lt;BR /&gt;&lt;BR /&gt;Static LACP is required between the switches, as per the documentation.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;&lt;BR /&gt;Antonio&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Nov 2010 15:23:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716852#M24590</guid>
      <dc:creator>Antonio Milanese</dc:creator>
      <dc:date>2010-11-23T15:23:27Z</dc:date>
    </item>
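    <!-- Editor's note: the bonding mode 4 (802.3ad) setup recommended above can be sketched on the Linux side as follows. This is a minimal, illustrative fragment, not from the thread: the interface names eth0/eth1, the address 192.0.2.10/24, and the parameter choices are all assumptions, and exact syntax varies by distribution and kernel version.

    ```shell
    # Load the bonding driver in 802.3ad (LACP) mode; this creates bond0.
    # mode=4 selects 802.3ad; miimon=100 polls link state every 100 ms.
    modprobe bonding mode=4 miimon=100

    # Enslave two physical NICs (eth0/eth1 are assumed names; slaves must be down).
    ip link set eth0 down
    ip link set eth1 down
    echo +eth0 > /sys/class/net/bond0/bonding/slaves
    echo +eth1 > /sys/class/net/bond0/bonding/slaves

    # Bring the bond up with an address (example address, adjust to taste).
    ip addr add 192.0.2.10/24 dev bond0
    ip link set bond0 up

    # Verify: the report should show "Bonding Mode: IEEE 802.3ad Dynamic link aggregation".
    cat /proc/net/bonding/bond0
    ```

    The switch-side ports the two NICs plug into must form a matching LACP trunk, or the aggregator will never come up. -->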
    <item>
      <title>Re: Distributed trunking</title>
      <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716853#M24591</link>
      <description>Dynamic LACP is not supported by VMware, but this post:&lt;BR /&gt;&lt;A href="http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/" target="_blank"&gt;http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/&lt;/A&gt;&lt;BR /&gt;says that vSphere works with ProCurve Distributed Trunking. Is there any reference other than this blog?</description>
      <pubDate>Tue, 23 Nov 2010 15:26:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716853#M24591</guid>
      <dc:creator>Domenico Viggiani</dc:creator>
      <dc:date>2010-11-23T15:26:48Z</dc:date>
    </item>
    <item>
      <title>Re: Distributed trunking</title>
      <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716854#M24592</link>
      <description>Hello Domenico,&lt;BR /&gt;&lt;BR /&gt;I've read the link you posted and agree with the comments.&lt;BR /&gt;ESX doesn't support LACP on its "native" (d)vSwitches (you need the Nexus 1000v for that); however, in ESX you can still combine multiple pNICs into a teaming group. If you use an active/active group with "IP hash" as the load-balancing policy, you can effectively form a "raw EtherChannel" where all pNICs are in forwarding state, since ESX inspects the IP flow and "pins" MAC addresses to distribute traffic across all pNICs.&lt;BR /&gt;&lt;BR /&gt;The ESX vSwitch can neither exchange nor understand LACP PDUs, so if you want to form an active/active team you MUST force the physical switch to "channel" its interfaces: in ProCurve jargon, you have to create a static trunk (strictly speaking, static LACP doesn't exist =)&lt;BR /&gt;&lt;BR /&gt;So yes, using DT-LACP and an ESX team with "IP hash" and link-failure detection can work (no beaconing!)...&lt;BR /&gt;but IMHO it violates my KISS mantra, and it's not worth the effort, especially when your bandwidth-hungry apps are storage oriented and you have native MPIO solutions.&lt;BR /&gt;I hope this clarifies my last post.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Antonio&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Nov 2010 20:09:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716854#M24592</guid>
      <dc:creator>Antonio Milanese</dc:creator>
      <dc:date>2010-11-23T20:09:09Z</dc:date>
    </item>
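    <!-- Editor's note: the "IP hash" pinning described in the post above can be sketched as follows. ESX's documented policy hashes the source and destination IP of each flow to pick one uplink; this illustration (XOR modulo the uplink count) captures the per-flow pinning behavior but is an assumption, not VMware's exact implementation.

    ```python
    import ipaddress

    def pick_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
        """Illustrative IP-hash policy: XOR src/dst addresses, modulo uplink count.

        Every flow between a given IP pair is pinned to one pNIC, preserving
        frame ordering per flow, while different flows spread across all pNICs.
        """
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        return (src ^ dst) % n_uplinks

    # A given src/dst pair always maps to the same uplink...
    assert pick_uplink("10.0.0.1", "10.0.0.50", 2) == pick_uplink("10.0.0.1", "10.0.0.50", 2)
    # ...while a population of flows lands on both pNICs, so all links forward.
    links = {pick_uplink("10.0.0.1", f"10.0.0.{i}", 2) for i in range(2, 20)}
    print(sorted(links))
    ```

    This is also why the physical switch side must be a static channel: with only one host IP talking to one peer, all traffic pins to a single pNIC, and nothing negotiates the bundle dynamically. -->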
    <item>
      <title>Re: Distributed trunking</title>
      <link>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716855#M24593</link>
      <description>Antonio,&lt;BR /&gt;I understand what you're saying, thanks.&lt;BR /&gt;I'm evaluating the pros and cons of all the methods of "attaching" storage to VMware (and not only to it...):&lt;BR /&gt;If possible, I use FC, which works at its best without much configuration effort.&lt;BR /&gt;As an alternative to FC, I'm looking at iSCSI (with MPIO as the failover/load-sharing option) and NFS (with network-level solutions for failover/load-sharing).&lt;BR /&gt;I'm trying to avoid any prejudiced position, and I'm reading the blogs of gurus like:&lt;BR /&gt; &lt;A href="http://virtualgeek.typepad.com/" target="_blank"&gt;http://virtualgeek.typepad.com/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 24 Nov 2010 11:16:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/switches-hubs-and-modems/distributed-trunking/m-p/4716855#M24593</guid>
      <dc:creator>Domenico Viggiani</dc:creator>
      <dc:date>2010-11-24T11:16:28Z</dc:date>
    </item>
  </channel>
</rss>

