<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Bonding Question in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119902#M83328</link>
    <description>Thanks for the comment!  What is the next NIC type that I can use?  10gigE?  Are you saying that I should go back down to using 2 NICs instead of four because it's creating too much traffic for the server to sort through?&lt;BR /&gt;&lt;BR /&gt;Thanks for the help!</description>
    <pubDate>Wed, 02 Jan 2008 20:08:12 GMT</pubDate>
    <dc:creator>Star Dust</dc:creator>
    <dc:date>2008-01-02T20:08:12Z</dc:date>
    <item>
      <title>Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119896#M83322</link>
      <description>I'm having a problem with a DL380 server running Red Hat 4.5: I'm trying to bond 4 Broadcom adapters together, but it doesn't seem to be as fast as I'd hoped.  I'm starting to think that I misunderstood the use of bonding.  I was thinking that bonding would give me a massive network pipe.  Is bonding more for failover rather than speed? &lt;BR /&gt;&lt;BR /&gt;I've done a few tests between two servers, each with four NIC cards bonded, and the data transfer rate is 40mbs, while using no bonding it's 50mbs.&lt;BR /&gt;&lt;BR /&gt;Would someone mind explaining it to a simpleton?  :)</description>
      <pubDate>Fri, 21 Dec 2007 05:34:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119896#M83322</guid>
      <dc:creator>Star Dust</dc:creator>
      <dc:date>2007-12-21T05:34:01Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119897#M83323</link>
      <description>Bonding provides round robin (mode=0) for link aggregation, but the switch must be configured to allow this. AFAIK, Cisco calls this EtherChannel.</description>
      <pubDate>Fri, 21 Dec 2007 13:10:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119897#M83323</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-12-21T13:10:11Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119898#M83324</link>
      <description>Also be aware that you can configure this for EITHER failover or all NICs active.&lt;BR /&gt;Check your configuration.</description>
      <pubDate>Fri, 21 Dec 2007 13:48:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119898#M83324</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2007-12-21T13:48:26Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119899#M83325</link>
      <description>Bonding can provide more bandwidth, which is not quite the same thing as increased speed. &lt;BR /&gt;&lt;BR /&gt;It's the equivalent of adding more lanes to a highway without raising the speed limit: four lanes (NICs) can accommodate more trucks (packets) than one, but a single truck (packet) won't get from Point A to Point B any faster than before. The increased complexity may even make it a bit slower.&lt;BR /&gt;&lt;BR /&gt;Of course, if the original single lane (NIC) was badly congested, adding more lanes can improve the situation for single trucks (packets) too.&lt;BR /&gt;&lt;BR /&gt;If your test was designed specifically to measure speed, it may have failed to utilize the increased bandwidth: try running multiple copies of your test in parallel, then see the results.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 21 Dec 2007 14:34:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119899#M83325</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2007-12-21T14:34:45Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119900#M83326</link>
      <description>Wow... great explanation... That really helps...&lt;BR /&gt;&lt;BR /&gt;I have created an EtherChannel on my switch, but I'm not sure if I set it up correctly.  I took the four ports that I have my servers connected to and put them in channel group 1 using the following command: &lt;BR /&gt;&lt;BR /&gt;"Switch (config-if)#channel-group 1 mode on"&lt;BR /&gt;&lt;BR /&gt;Does that look right, or am I supposed to use mode auto or desirable?&lt;BR /&gt;&lt;BR /&gt;Thanks again for all the help, guys and gals... :)</description>
      <pubDate>Fri, 21 Dec 2007 15:28:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119900#M83326</guid>
      <dc:creator>Star Dust</dc:creator>
      <dc:date>2007-12-21T15:28:15Z</dc:date>
    </item>
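    <!-- For Linux balance-rr bonding, the switch-side channel generally must be configured statically: "mode auto"/"desirable" negotiate PAgP, and "active"/"passive" negotiate LACP (802.3ad), neither of which pairs with balance-rr. A sketch of the switch side, assuming Catalyst IOS and hypothetical ports Fa0/1 through Fa0/4:

    ```
    ! Static EtherChannel: no PAgP/LACP negotiation, matching Linux balance-rr
    interface range FastEthernet0/1 - 4
     channel-group 1 mode on
    ! "desirable"/"auto" = PAgP, "active"/"passive" = LACP (802.3ad);
    ! the LACP modes pair with Linux bonding mode=4 (802.3ad), not balance-rr
    ```
    -->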
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119901#M83327</link>
      <description>(ab)using mode-rr to increase the performance of a single connection has, well, "issues..."  The traffic _will_ become reordered, and if it becomes sufficiently reordered, it triggers TCP's "fast" lost-segment detection.  This then suppresses the size of the congestion window calculated by the sender, which leads to poor throughput.  This is coupled with the vast increase in TCP ACKs from all the out-of-order traffic.&lt;BR /&gt;&lt;BR /&gt;In my opinion the best way to increase the speed of a single connection is to upgrade to the next faster NIC type.  Admittedly that isn't always possible, but it is better.&lt;BR /&gt;&lt;BR /&gt;You _might_ get some relief by setting net.ipv4.tcp_reordering to something rather larger than the number of NICs in the bond, but I consider that little more than a kludge.</description>
      <pubDate>Mon, 31 Dec 2007 19:02:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119901#M83327</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-12-31T19:02:30Z</dc:date>
    </item>
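    <!-- The sysctl mentioned above can be set at runtime or persisted across reboots. A minimal sketch; the value 127 is only an illustrative "rather larger than the number of NICs" choice, not a recommendation:

    ```
    # /etc/sysctl.conf -- raise TCP's reordering tolerance (kernel default is 3)
    # so round-robin packet reordering is less likely to trigger fast retransmit
    net.ipv4.tcp_reordering = 127

    # apply immediately without a reboot:
    #   sysctl -w net.ipv4.tcp_reordering=127
    ```
    -->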
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119902#M83328</link>
      <description>Thanks for the comment!  What is the next NIC type that I can use?  10gigE?  Are you saying that I should go back down to using 2 NICs instead of four because it's creating too much traffic for the server to sort through?&lt;BR /&gt;&lt;BR /&gt;Thanks for the help!</description>
      <pubDate>Wed, 02 Jan 2008 20:08:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119902#M83328</guid>
      <dc:creator>Star Dust</dc:creator>
      <dc:date>2008-01-02T20:08:12Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119903#M83329</link>
      <description>If you are using 1 gig then indeed the next step up would be 10 gig :)&lt;BR /&gt;&lt;BR /&gt;If you do stick with mode-rr bonding, you could indeed try using just two links in the bond rather than four and see if things are better.  Or you can try tweaking that sysctl.  As you try things out, keep looking at the netstat -t (IIRC that is the syntax for TCP stats) statistics.  &lt;BR /&gt;&lt;BR /&gt;It may also be necessary/desirable to enable larger socket buffers/windows to get things going faster.  &lt;BR /&gt;&lt;BR /&gt;Having said all that, it seems that even without the bonding you are still only running at half of link rate on the single-connection test.  I think it would be good to diagnose that further.</description>
      <pubDate>Wed, 02 Jan 2008 20:53:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119903#M83329</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-01-02T20:53:00Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119904#M83330</link>
      <description>OK, I've narrowed down the problem here.  It looks like after I create a bond, RHEL will sometimes rearrange the interfaces on reboot for some reason.  For example:&lt;BR /&gt;&lt;BR /&gt;eth0 = IPADDRESS=10.10.10.20&lt;BR /&gt;eth1 = onboot=no&lt;BR /&gt;bond0 = eth2, eth3, eth4, eth5&lt;BR /&gt;&lt;BR /&gt;After I reboot the machine the ifcfg files stay the same, but the hardware will move around!  In other words, eth0 may be eth4 on the next reboot, which is why my bonding isn't working.  I figured this out after disabling the embedded NIC adapters (eth0 and eth1), and now my bond works beautifully!&lt;BR /&gt;&lt;BR /&gt;Why is it doing this, and is it possible to assign hardware to use a certain ifcfg file?</description>
      <pubDate>Thu, 03 Jan 2008 23:35:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119904#M83330</guid>
      <dc:creator>Star Dust</dc:creator>
      <dc:date>2008-01-03T23:35:39Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119905#M83331</link>
      <description>Is there a MAC address in each of the ifcfg files?  If so, then that _should_ cause each interface to have the same name after each boot.</description>
      <pubDate>Fri, 04 Jan 2008 00:51:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119905#M83331</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-01-04T00:51:54Z</dc:date>
    </item>
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119906#M83332</link>
      <description>Note that in RHEL4 or above, the ifcfg files can have two directives with MAC addresses, with very different meanings:&lt;BR /&gt;&lt;BR /&gt;DEVICE=eth0&lt;BR /&gt;...&lt;BR /&gt;HWADDR=&lt;MAC-ADDRESS&gt;&lt;BR /&gt;&lt;BR /&gt;causes the device identified by the &lt;MAC-ADDRESS&gt; to be automatically renamed to eth0, if it isn't that by default. &lt;BR /&gt;&lt;BR /&gt;DEVICE=eth1&lt;BR /&gt;...&lt;BR /&gt;MACADDR=&lt;MAC-ADDRESS&gt;&lt;BR /&gt;&lt;BR /&gt;changes the MAC address of device eth1 to &lt;MAC-ADDRESS&gt;.&lt;BR /&gt;&lt;BR /&gt;Mixing up these two is likely to cause great confusion.&lt;BR /&gt;&lt;BR /&gt;If you're using HWADDR to rename an interface to e.g. eth0 and another interface is already named eth0, it will get a temporary name, which will look very strange. So if you use this feature to re-arrange your network interfaces, specify the correct HWADDR directives for _all_ network interfaces of your system.&lt;BR /&gt;&lt;BR /&gt;If you remove all HWADDR directives from the ifcfg files, the network configuration will again work like RHEL3 and the classic Red Hat releases.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Fri, 04 Jan 2008 12:45:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119906#M83332</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2008-01-04T12:45:39Z</dc:date>
    </item>
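    <!-- The two directives described above can be sketched side by side; the MAC addresses below are placeholders:

    ```
    # /etc/sysconfig/network-scripts/ifcfg-eth0
    # HWADDR pins the name: whichever NIC has this MAC gets renamed to eth0
    DEVICE=eth0
    HWADDR=00:11:22:33:44:55
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    # MACADDR does the opposite: it overwrites eth1's MAC with this value
    DEVICE=eth1
    MACADDR=00:11:22:33:44:66
    ONBOOT=yes
    ```
    -->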
    <item>
      <title>Re: Bonding Question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119907#M83333</link>
      <description>You guys rock...  I added the HWADDR= line into each file and now it seems stable as a rock... :)  Thank you all very much for the help!  &lt;BR /&gt;&lt;BR /&gt;I'm now doing a little testing with it and wondered if I need to add "options bond0 miimon=100 mode=balance-rr" to my modprobe.conf file.  Some documentation says to do it and some doesn't mention it at all.  &lt;BR /&gt;&lt;BR /&gt;Also, is there anywhere else that I can do some tweaking?</description>
      <pubDate>Sat, 05 Jan 2008 16:29:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-question/m-p/4119907#M83333</guid>
      <dc:creator>Star Dust</dc:creator>
      <dc:date>2008-01-05T16:29:26Z</dc:date>
    </item>
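    <!-- On RHEL4 the module options line does belong in /etc/modprobe.conf, alongside ifcfg files that enslave the NICs to the bond. A minimal sketch, assuming bond0 carries the 10.10.10.20 address mentioned earlier and eth2 is one of the four slaves (netmask is a placeholder):

    ```
    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 miimon=100 mode=balance-rr

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.10.10.20
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for eth3..eth5)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes
    ```
    -->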
  </channel>
</rss>

