<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Network teaming under linux in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929192#M83695</link>
    <description>Crap... I led you astray, and replied before my second cup of coffee... miimon is something different: it is how often the driver monitors the interfaces to make sure they're up, in milliseconds. miimon=100 means it polls every 100ms; miimon=1000 means every 1000ms (1 second). 100 should be fine with good network cards, but cheap ones sometimes can't take 10 polls a second without hurting performance. I had this happen with $10 Realteks in a test box built from spare parts. &lt;BR /&gt;&lt;BR /&gt;Sorry I sent you the wrong way before.</description>
    <pubDate>Wed, 27 Aug 2008 12:17:54 GMT</pubDate>
    <dc:creator>JHoover</dc:creator>
    <dc:date>2008-08-27T12:17:54Z</dc:date>
    <item>
      <title>Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929184#M83687</link>
      <description>Our company is purchasing some BL465c blade servers and I'm trying to read up as much as I can on them so that we're ready when they arrive.  One thing we're interested in making use of is the network teaming capabilities.  I've found detailed documentation on utilizing it under Windows but we're running linux (CentOS, a RedHat clone).  According to the OS Support section of &lt;A href="http://h18004.www1.hp.com/products/servers/networking/teaming.html" target="_blank"&gt;http://h18004.www1.hp.com/products/servers/networking/teaming.html&lt;/A&gt; linux is supported for some of the basic teaming types, but I've been unable to locate any detailed documentation on how to set it up.  Can somebody point me to documentation on how to set up teaming under linux?</description>
      <pubDate>Fri, 19 Jan 2007 11:36:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929184#M83687</guid>
      <dc:creator>BruceP_2</dc:creator>
      <dc:date>2007-01-19T11:36:58Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929185#M83688</link>
      <description>Thread placed under the wrong forum; moved to a more appropriate one.</description>
      <pubDate>Fri, 19 Jan 2007 12:29:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929185#M83688</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2007-01-19T12:29:49Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929186#M83689</link>
      <description>You'll probably want to see if your NICs have a specific driver for "bonding". Once that driver is loaded (or if the driver you currently use is certified for bonding connections), you should be able to set up 2 or more cards as a single bond, i.e. eth0 and eth1 become bond0. You can then set how you want them to treat the bond. &lt;BR /&gt;&lt;BR /&gt;Blatantly stolen, but it's the correct procedure:&lt;BR /&gt;&lt;BR /&gt;Stolen from here: &lt;A href="http://www.databasejournal.com/features/oracle/article.php/3652706" target="_blank"&gt;http://www.databasejournal.com/features/oracle/article.php/3652706&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Setting up bonding involves four simple tasks.&lt;BR /&gt;&lt;BR /&gt;TASK 1: First, create the bond0 config file:&lt;BR /&gt;&lt;BR /&gt;# vi /etc/sysconfig/network-scripts/ifcfg-bond0&lt;BR /&gt;&lt;BR /&gt;Append the following lines:&lt;BR /&gt;&lt;BR /&gt;DEVICE=bond0&lt;BR /&gt;BOOTPROTO=static&lt;BR /&gt;ONBOOT=yes&lt;BR /&gt;IPADDR=11.65.21.82&lt;BR /&gt;NETMASK=255.255.255.0&lt;BR /&gt;GATEWAY=11.65.21.1&lt;BR /&gt;USERCTL=no&lt;BR /&gt;&lt;BR /&gt;TASK 2: Modify the eth0 and eth1 config files:&lt;BR /&gt;&lt;BR /&gt;Open both configurations in a text editor and make sure the files read as follows for the eth0 and eth1 interfaces.&lt;BR /&gt;&lt;BR /&gt;[root@linuxhost network-scripts]# more ifcfg-eth0&lt;BR /&gt;DEVICE=eth0&lt;BR /&gt;ONBOOT=no&lt;BR /&gt;MASTER=bond0&lt;BR /&gt;SLAVE=yes&lt;BR /&gt;USERCTL=no&lt;BR /&gt;[root@linuxhost network-scripts]# more ifcfg-eth1&lt;BR /&gt;DEVICE=eth1&lt;BR /&gt;ONBOOT=no&lt;BR /&gt;MASTER=bond0&lt;BR /&gt;SLAVE=yes&lt;BR /&gt;USERCTL=no&lt;BR /&gt;&lt;BR /&gt;TASK 3: Load the driver module:&lt;BR /&gt;&lt;BR /&gt;Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file so that it looks like the one below.&lt;BR /&gt;&lt;BR /&gt;[root@linuxhost network-scripts]# more /etc/modprobe.conf&lt;BR /&gt;alias bond0 bonding&lt;BR /&gt;options bond0 mode=balance-alb miimon=100&lt;BR /&gt;&lt;BR /&gt;TASK 4: Test the configuration with the modprobe and service network restart commands:&lt;BR /&gt;&lt;BR /&gt;[root@linuxhost network-scripts]# modprobe bonding&lt;BR /&gt;[root@linuxhost network-scripts]# service network restart&lt;BR /&gt;&lt;BR /&gt;Let me know if you need more help.&lt;BR /&gt;</description>
      <pubDate>Fri, 19 Jan 2007 17:36:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929186#M83689</guid>
      <dc:creator>JohnHoover</dc:creator>
      <dc:date>2007-01-19T17:36:53Z</dc:date>
    </item>
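The four tasks in the reply above can be condensed into one runnable sketch. To stay safe to run anywhere, it writes the same files into a local scratch directory instead of /etc; the IP addressing is the example addressing from the post, not a recommendation, and the final commands are shown as comments because they only make sense on a real host as root.

```shell
# Scratch directory standing in for /etc/sysconfig/network-scripts.
SCRIPTS=./network-scripts-demo
mkdir -p "$SCRIPTS"

# TASK 1: bond0 config file (normally ifcfg-bond0 under network-scripts).
printf '%s\n' \
  'DEVICE=bond0' \
  'BOOTPROTO=static' \
  'ONBOOT=yes' \
  'IPADDR=11.65.21.82' \
  'NETMASK=255.255.255.0' \
  'GATEWAY=11.65.21.1' \
  'USERCTL=no' | tee "$SCRIPTS/ifcfg-bond0"

# TASK 2: enslave eth0 and eth1 to bond0.
for nic in eth0 eth1; do
  printf '%s\n' \
    "DEVICE=$nic" \
    'ONBOOT=no' \
    'MASTER=bond0' \
    'SLAVE=yes' \
    'USERCTL=no' | tee "$SCRIPTS/ifcfg-$nic"
done

# TASK 3: kernel module options (normally appended to /etc/modprobe.conf).
printf '%s\n' \
  'alias bond0 bonding' \
  'options bond0 mode=balance-alb miimon=100' | tee "$SCRIPTS/modprobe.conf"

# TASK 4, on a real host as root:
#   modprobe bonding
#   service network restart
```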
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929187#M83690</link>
      <description>On a sufficiently contemporary distro (e.g. a 2.6 kernel) there should be a bonding.txt file on the system that you can locate via the find command. It will have lots of useful information about the bonding driver you will use to set up link bonding/teaming/aggregation.</description>
      <pubDate>Fri, 19 Jan 2007 21:04:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929187#M83690</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-01-19T21:04:27Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929188#M83691</link>
      <description>Hi John,&lt;BR /&gt;&lt;BR /&gt;Thanks for the info.&lt;BR /&gt;&lt;BR /&gt;I am new to Linux and have just managed to install Fedora Core 6 on an HP DL360 G5 server.&lt;BR /&gt;&lt;BR /&gt;Can you please help me create a network team (load balancing and failover) in Fedora 6?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Abdul</description>
      <pubDate>Wed, 21 Mar 2007 07:44:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929188#M83691</guid>
      <dc:creator>AbdulKhan</dc:creator>
      <dc:date>2007-03-21T07:44:41Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929189#M83692</link>
      <description>Can you please explain the meaning of the following line:&lt;BR /&gt;&lt;BR /&gt;options bond0 mode=balance-alb miimon=100 ?&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;&lt;BR /&gt;K&lt;BR /&gt;</description>
      <pubDate>Wed, 27 Aug 2008 10:09:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929189#M83692</guid>
      <dc:creator>rpmXperts</dc:creator>
      <dc:date>2008-08-27T10:09:36Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929190#M83693</link>
      <description>Abdul, sorry to take so long to get back. The above instructions should work fine in FC6, so long as you have decent network cards (Intel and Broadcom chipsets are the two I've had good results with).&lt;BR /&gt;&lt;BR /&gt;Quote:&lt;BR /&gt;Can you please explain the meaning of the following line:&lt;BR /&gt;&lt;BR /&gt;options bond0 mode=balance-alb miimon=100 ?&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;&lt;BR /&gt;K&lt;BR /&gt;&lt;BR /&gt;These are the options for the bond0 interface. "mode=balance-alb" means that the bonding driver will use adaptive load balancing (alb) to spread the data across all of the ethX interfaces that are part of bond0. "miimon=100" will force the bond0 interface to 100Mbps.&lt;BR /&gt;&lt;BR /&gt;I lost the login I had before, and due to an email address change at work I couldn't recover the password, hence the change in username as well...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;John</description>
      <pubDate>Wed, 27 Aug 2008 11:41:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929190#M83693</guid>
      <dc:creator>JHoover</dc:creator>
      <dc:date>2008-08-27T11:41:17Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929191#M83694</link>
      <description>Thanks for the response.&lt;BR /&gt;&lt;BR /&gt;So miimon=100 means 100 Mbps FDX&lt;BR /&gt;and miimon=1000 means 1000 Mbps?&lt;BR /&gt;&lt;BR /&gt;If I don't include miimon, then is it on AUTO?&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;&lt;BR /&gt;RpmX</description>
      <pubDate>Wed, 27 Aug 2008 12:07:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929191#M83694</guid>
      <dc:creator>rpmXperts</dc:creator>
      <dc:date>2008-08-27T12:07:59Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929192#M83695</link>
      <description>Crap... I led you astray, and replied before my second cup of coffee... miimon is something different: it is how often the driver monitors the interfaces to make sure they're up, in milliseconds. miimon=100 means it polls every 100ms; miimon=1000 means every 1000ms (1 second). 100 should be fine with good network cards, but cheap ones sometimes can't take 10 polls a second without hurting performance. I had this happen with $10 Realteks in a test box built from spare parts. &lt;BR /&gt;&lt;BR /&gt;Sorry I sent you the wrong way before.</description>
      <pubDate>Wed, 27 Aug 2008 12:17:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929192#M83695</guid>
      <dc:creator>JHoover</dc:creator>
      <dc:date>2008-08-27T12:17:54Z</dc:date>
    </item>
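A quick sanity check of the arithmetic in the correction above: miimon is a poll interval in milliseconds, not a speed, so the implied link-check rate is simply 1000 divided by the value.

```shell
# miimon is the MII link-monitoring interval in milliseconds: it sets
# how often the bonding driver checks each slave's link state.
for miimon in 100 1000; do
  echo "miimon=$miimon gives $((1000 / miimon)) link checks per second"
done
```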
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929193#M83696</link>
      <description>Do a find for the bonding.txt file - it will explain all the options. IIRC it will be somewhere under /usr. On an SLES10 SP2 system I have at my fingertips it is under:&lt;BR /&gt;&lt;BR /&gt;/usr/src/&lt;RELEASENAME&gt;/Documentation/networking/bonding.txt</description>
      <pubDate>Wed, 27 Aug 2008 14:59:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929193#M83696</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-08-27T14:59:26Z</dc:date>
    </item>
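A sketch of the search rick suggests. The directories below are common guesses only; the exact layout varies by distro and kernel package, so nonexistent paths are skipped and read errors are ignored.

```shell
# Look for the bonding driver documentation under likely doc trees.
for dir in /usr/src /usr/share/doc; do
  if [ -d "$dir" ]; then
    find "$dir" -name 'bonding.txt' 2>/dev/null || true
  fi
done
```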
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929194#M83697</link>
      <description>You will need to install the kernel-docs package.</description>
      <pubDate>Wed, 27 Aug 2008 17:16:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929194#M83697</guid>
      <dc:creator>Court Campbell</dc:creator>
      <dc:date>2008-08-27T17:16:27Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929195#M83698</link>
      <description>Thanks, pal. You confirmed my thinking.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 28 Aug 2008 04:36:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929195#M83698</guid>
      <dc:creator>rpmXperts</dc:creator>
      <dc:date>2008-08-28T04:36:50Z</dc:date>
    </item>
    <item>
      <title>Re: Network teaming under linux</title>
      <link>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929196#M83699</link>
      <description>Well,&lt;BR /&gt;&lt;BR /&gt;I guess I could have put a link to a bonding.txt doc:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt" target="_blank"&gt;http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;but I thought that's what Google was for.</description>
      <pubDate>Thu, 28 Aug 2008 16:19:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/network-teaming-under-linux/m-p/3929196#M83699</guid>
      <dc:creator>Court Campbell</dc:creator>
      <dc:date>2008-08-28T16:19:44Z</dc:date>
    </item>
  </channel>
</rss>

