<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: network bonding -- looking for optimal performance in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582522#M82519</link>
    <description>Are the existing channels actually saturated? If not, you could have more options, such as using the new NICs as backup. I usually go with mode 6 (adaptive load balancing, which does not require switch support); however, I've only used 2 NICs max, so a 4-NIC setup may need something different. Anyway, &lt;A href="http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options" target="_blank"&gt;http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options&lt;/A&gt; has detailed information on bonding. HTH.</description>
    <pubDate>Mon, 15 Feb 2010 09:41:13 GMT</pubDate>
    <dc:creator>Modris Bremze</dc:creator>
    <dc:date>2010-02-15T09:41:13Z</dc:date>
    <item>
      <title>network bonding -- looking for optimal performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582520#M82517</link>
      <description>Good morning all;&lt;BR /&gt;&lt;BR /&gt;We are running an application that relies on good network and NFS performance (Oracle EBS 11.5.10). Currently I have two of the four network cards bonded together (bond0). I would like to bond two additional network cards to the two I have already bonded. The questions I have: is this a good way to go? Should I do something different? Should I change my existing bond mode from 0 to ? (A config sketch for adding the NICs follows this post.)&lt;BR /&gt;&lt;BR /&gt;Thank you for your input and your help; I greatly appreciate it.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Feb 2010 17:18:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582520#M82517</guid>
      <dc:creator>wvsa</dc:creator>
      <dc:date>2010-02-11T17:18:39Z</dc:date>
    </item>
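    <!-- A minimal sketch of how the two extra NICs could be enslaved to the existing bond0,
         assuming a RHEL-style system where the current slaves are eth0/eth1 and the new
         cards are eth2/eth3; device names, file paths, and syntax vary by distribution.

         # /etc/modprobe.conf (older RHEL) or /etc/modprobe.d/bonding.conf
         alias bond0 bonding
         options bond0 mode=0 miimon=100    # current round-robin bond; miimon enables link-state monitoring

         # /etc/sysconfig/network-scripts/ifcfg-eth2 (create the same file for eth3)
         DEVICE=eth2
         ONBOOT=yes
         BOOTPROTO=none
         MASTER=bond0
         SLAVE=yes

         # After restarting networking, confirm all four slaves are active:
         cat /proc/net/bonding/bond0
    -->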
    <item>
      <title>Re: network bonding -- looking for optimal performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582521#M82518</link>
      <description>Personally, I am not all that fond of mode 0 (round robin). Yes, it will allow a single "flow" (e.g. a TCP connection) to make use of more than one link in the bond, but it also means that traffic on that flow will be reordered.&lt;BR /&gt;&lt;BR /&gt;TCP will "deal" with that - every out-of-order segment results in an immediate ACK rather than waiting to "ack every other" - but this increases CPU utilization on both sides.&lt;BR /&gt;&lt;BR /&gt;If there are enough of these out-of-order TCP segments, they can trigger a spurious "fast retransmit." It takes three "duplicate ACKs" to trigger one, so with only two links in the bond that is unlikely; it becomes more likely with four links in the bond.&lt;BR /&gt;&lt;BR /&gt;If you have a situation where you need a single stream/flow/connection to go faster than a single GbE link, as unpleasant as the price might be, I would suggest a 10G link.&lt;BR /&gt;&lt;BR /&gt;If you have many TCP connections, you might consider one of the other bonding modes. Whichever mode you pick, how traffic is distributed *inbound* will be up to the switch; the same goes for your NFS server - how inbound traffic gets spread across the links in its bond can depend on the settings in the switch. (A quick check for these reordering symptoms follows this post.)</description>
      <pubDate>Fri, 12 Feb 2010 00:49:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582521#M82518</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2010-02-12T00:49:37Z</dc:date>
    </item>
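    <!-- A quick way to check whether mode 0 reordering is actually biting, assuming a Linux
         sender; counter names differ slightly across kernel and net-tools versions.

         # Cumulative TCP counters; reordering/fast-retransmit numbers that climb while the
         # bond is busy point at the out-of-order effect described above.
         netstat -s | egrep -i 'retrans|reorder|out of order'
    -->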
    <item>
      <title>Re: network bonding -- looking for optimal performance</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582522#M82519</link>
      <description>Are the existing channels actually saturated? If not, you could have more options, such as using the new NICs as backup. I usually go with mode 6 (adaptive load balancing, which does not require switch support); however, I've only used 2 NICs max, so a 4-NIC setup may need something different. Anyway, &lt;A href="http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options" target="_blank"&gt;http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options&lt;/A&gt; has detailed information on bonding. (A sketch of switching to mode 6 follows this post.) HTH.</description>
      <pubDate>Mon, 15 Feb 2010 09:41:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-bonding-looking-for-optimal-performance/m-p/4582522#M82519</guid>
      <dc:creator>Modris Bremze</dc:creator>
      <dc:date>2010-02-15T09:41:13Z</dc:date>
    </item>
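    <!-- A minimal sketch of switching the existing bond from round robin to adaptive load
         balancing (mode 6, balance-alb), assuming the module-options style of configuration
         shown earlier; note that balance-alb requires each slave driver to support changing
         its MAC address while the device is open.

         # /etc/modprobe.conf or /etc/modprobe.d/bonding.conf
         options bond0 mode=balance-alb miimon=100

         # Reload the bonding driver (or reboot), then confirm the active mode:
         cat /proc/net/bonding/bond0
    -->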
  </channel>
</rss>

