<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: nic bonding &amp; lacp - Linux in BladeSystem - General</title>
    <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6230945#M19274</link>
    <description>&lt;P&gt;I was talking with an HP sales engineer, and came to the same conclusion.&amp;nbsp; I have on hand four 1/10Gb-F switches.&amp;nbsp; So, the plan was to replace the GbE2c's with those.&amp;nbsp; Each blade has the two on-board NICs and a mezzanine NIC.&amp;nbsp; He was telling me I can set that up with two bonds on the blades and an LACP group from each switch to the Cisco, and that should work in an active-active state.&amp;nbsp; So, that is the plan.&amp;nbsp; I need to upgrade the firmware on the 1/10Gb-F switches and then schedule some downtime to replace the GbE2c's and run through the Flex setup.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the reply!&amp;nbsp; :)&lt;/P&gt;</description>
    <pubDate>Wed, 09 Oct 2013 12:20:49 GMT</pubDate>
    <dc:creator>BradV</dc:creator>
    <dc:date>2013-10-09T12:20:49Z</dc:date>
    <item>
      <title>nic bonding &amp; lacp - Linux</title>
      <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6224361#M19263</link>
      <description>&lt;P&gt;I have some BL460c's with GbE2c switches in IC bays 1 &amp;amp; 2.&amp;nbsp; I'm using two blades as test systems.&amp;nbsp; I have the bond set up (RHEL 6) and it is working.&amp;nbsp; The cross-links between the two GbE2c's are disabled.&amp;nbsp; I have a single Cisco 3750 as the uplink from both GbE2c's.&amp;nbsp; My question is about LACP.&amp;nbsp; Do I need to:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1.&amp;nbsp; Enable the cross-links?&lt;/P&gt;&lt;P&gt;2.&amp;nbsp; Link-bond the internal ports on the GbE2c's?&amp;nbsp; That is, for a server in bay 5, create a link bond between port 5 of the GbE2c in IC bay 1 and port 5 of the GbE2c in IC bay 2?&lt;/P&gt;&lt;P&gt;3.&amp;nbsp; I know I need to link the external uplink ports, and I'm pretty sure this also needs the cross-links enabled?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Wed, 02 Oct 2013 12:58:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6224361#M19263</guid>
      <dc:creator>BradV</dc:creator>
      <dc:date>2013-10-02T12:58:10Z</dc:date>
    </item>
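    <!--
    A minimal sketch of the RHEL 6 bond configuration the original post describes; the
    interface names, address, and bonding mode below are illustrative assumptions, not
    details from the thread.

    ```ini
    # /etc/sysconfig/network-scripts/ifcfg-bond0  (assumed bond name)
    DEVICE=bond0
    IPADDR=192.168.1.10        # placeholder address
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    # mode=802.3ad enables LACP; it requires the switch side to form a matching LAG
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for each slave NIC)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    ```
    -->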
    <item>
      <title>Re: nic bonding &amp; lacp - Linux</title>
      <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6230529#M19271</link>
      <description>LACP requires that all links in the bundle connect the same two devices.&lt;BR /&gt;That would be the NIC on the blade and the GbE2c in the enclosure,&lt;BR /&gt;OR&lt;BR /&gt;the GbE2c in the enclosure and the Cisco 3750.&lt;BR /&gt;&lt;BR /&gt;The problem is, you have 2 GbE2c switches, and as far as I am aware, there is no way to merge them into 1 logical switch the way you can with vPC (Nexus), VSS (Catalyst), IRF (HP/3Com), Virtual Chassis (Juniper), etc.&lt;BR /&gt;&lt;BR /&gt;The cross-links enable east/west traffic, but again, I don't think they can be used to merge the 2 switches into one. I could be wrong here, though, as I don't have a ton of hands-on experience with those switches.&lt;BR /&gt;&lt;BR /&gt;Now, if you wanted to connect multiple 1Gb links from a single GbE2c to a single 3750, that should work just fine.&lt;BR /&gt;And then do it again on the other GbE2c.&lt;BR /&gt;&lt;BR /&gt;At the server level, you would want to use either failover (active-backup) or TLB (transmit load balancing) bonding modes in Linux.</description>
      <pubDate>Wed, 09 Oct 2013 02:30:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6230529#M19271</guid>
      <dc:creator>Casper42</dc:creator>
      <dc:date>2013-10-09T02:30:48Z</dc:date>
    </item>
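    <!--
    A sketch of the failover bonding mode Casper42 recommends, and how to inspect the
    bond's state on RHEL 6; the file paths and names follow stock RHEL 6 conventions and
    are assumptions, not taken from the thread.

    ```ini
    # /etc/sysconfig/network-scripts/ifcfg-bond0
    # mode=active-backup is the "Failover" mode; mode=balance-tlb is TLB.
    # Neither requires LACP support on the upstream switches, so each bond can
    # span both GbE2c's even though they are independent devices.
    BONDING_OPTS="mode=active-backup miimon=100"

    # After restarting networking, the kernel reports the mode and the active slave:
    #   cat /proc/net/bonding/bond0
    ```
    -->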
    <item>
      <title>Re: nic bonding &amp; lacp - Linux</title>
      <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6230945#M19274</link>
      <description>&lt;P&gt;I was talking with an HP sales engineer, and came to the same conclusion.&amp;nbsp; I have on hand four 1/10Gb-F switches.&amp;nbsp; So, the plan was to replace the GbE2c's with those.&amp;nbsp; Each blade has the two on-board NICs and a mezzanine NIC.&amp;nbsp; He was telling me I can set that up with two bonds on the blades and an LACP group from each switch to the Cisco, and that should work in an active-active state.&amp;nbsp; So, that is the plan.&amp;nbsp; I need to upgrade the firmware on the 1/10Gb-F switches and then schedule some downtime to replace the GbE2c's and run through the Flex setup.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the reply!&amp;nbsp; :)&lt;/P&gt;</description>
      <pubDate>Wed, 09 Oct 2013 12:20:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6230945#M19274</guid>
      <dc:creator>BradV</dc:creator>
      <dc:date>2013-10-09T12:20:49Z</dc:date>
    </item>
    <item>
      <title>Re: nic bonding &amp; lacp - Linux</title>
      <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6235913#M19281</link>
      <description>Brad, you will have the same limitation with VC modules as with the GbE2c's when it comes to LACP.&lt;BR /&gt;&lt;BR /&gt;If you are in the USA, what was the Sales Engineer's name? I work on one of the PreSales teams in Los Angeles.</description>
      <pubDate>Tue, 15 Oct 2013 04:15:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6235913#M19281</guid>
      <dc:creator>Casper42</dc:creator>
      <dc:date>2013-10-15T04:15:21Z</dc:date>
    </item>
    <item>
      <title>Re: nic bonding &amp; lacp - Linux</title>
      <link>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6245187#M19311</link>
      <description>&lt;P&gt;I'm reluctant to identify him on a public board, but his first name is Sam.&amp;nbsp; We did talk a little more and figured out that even though we could get the outbound bonds to work in an active-active configuration, the inbound traffic will still be limited to one link.&amp;nbsp; That won't solve the problem, which is getting the indexes or whatever (I'm not all that Hadoop savvy) from Hadoop on the DL380s to the BL460c's.&amp;nbsp; It looks like we'll have to wait until we can upgrade the blades and the switches to get more bandwidth.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the help!&lt;/P&gt;</description>
      <pubDate>Wed, 23 Oct 2013 10:19:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/bladesystem-general/nic-bonding-amp-lacp-linux/m-p/6245187#M19311</guid>
      <dc:creator>BradV</dc:creator>
      <dc:date>2013-10-23T10:19:15Z</dc:date>
    </item>
  </channel>
</rss>

