<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Infiniband bonding in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681021#M81022</link>
    <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;quite some time since I last posted here... :(&lt;BR /&gt;&lt;BR /&gt;I'm looking for someone with working experience of RHEL and InfiniBand, for some advice - I have a Cisco InfiniBand gateway/switch and a few servers with SDR/DDR HCAs in them.&lt;BR /&gt;&lt;BR /&gt;In the CentOS 5.4 release notes I see that the "new" InfiniBand bonding module now actually comes with load balancing / multipath support for IPoIB, which means I could really push over 20 Gbit/s into and out of the servers. Unfortunately, the whole thing is as undocumented as it gets.&lt;BR /&gt;&lt;BR /&gt;First things first - right now I just want to build an IB bonding interface to test with, using the stock tools in RHEL, but even for that I find the documentation totally contradictory.&lt;BR /&gt;&lt;BR /&gt;I wonder if one of you can tell me where to find good documentation on the bonding bit, or has another hint.&lt;BR /&gt;&lt;BR /&gt;I'm sure I can go the rest of the way from there :)&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Flo</description>
    <pubDate>Mon, 30 Aug 2010 16:23:36 GMT</pubDate>
    <dc:creator>Florian Heigl (new acc)</dc:creator>
    <dc:date>2010-08-30T16:23:36Z</dc:date>
    <item>
      <title>Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681021#M81022</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;quite some time since I last posted here... :(&lt;BR /&gt;&lt;BR /&gt;I'm looking for someone with working experience of RHEL and InfiniBand, for some advice - I have a Cisco InfiniBand gateway/switch and a few servers with SDR/DDR HCAs in them.&lt;BR /&gt;&lt;BR /&gt;In the CentOS 5.4 release notes I see that the "new" InfiniBand bonding module now actually comes with load balancing / multipath support for IPoIB, which means I could really push over 20 Gbit/s into and out of the servers. Unfortunately, the whole thing is as undocumented as it gets.&lt;BR /&gt;&lt;BR /&gt;First things first - right now I just want to build an IB bonding interface to test with, using the stock tools in RHEL, but even for that I find the documentation totally contradictory.&lt;BR /&gt;&lt;BR /&gt;I wonder if one of you can tell me where to find good documentation on the bonding bit, or has another hint.&lt;BR /&gt;&lt;BR /&gt;I'm sure I can go the rest of the way from there :)&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Flo</description>
      <pubDate>Mon, 30 Aug 2010 16:23:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681021#M81022</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2010-08-30T16:23:36Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681022#M81023</link>
      <description>For active/passive/failover bond mode.&lt;BR /&gt;(my voltaire switches do not support active/active/balanced mode.)&lt;BR /&gt;&lt;BR /&gt;/etc/modprobe.conf&lt;BR /&gt;alias bond0 bonding&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;/etc/sysconfig/network-scripts/ifcfg-bond0&lt;BR /&gt;DEVICE=bond0&lt;BR /&gt;IPADDR=10.10.1.8&lt;BR /&gt;NETMASK=255.255.255.0&lt;BR /&gt;BROADCAST=10.10.1.255&lt;BR /&gt;ONBOOT=YES&lt;BR /&gt;BOOTPROTO=none&lt;BR /&gt;USERCTL=no&lt;BR /&gt;TYPE=Bonding&lt;BR /&gt;MTU=65520&lt;BR /&gt;BONDING_OPTS=" mode=1 miimon=100 primary=ib2"&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;/etc/sysconfig/network-scripts/ifcfg-ib0&lt;BR /&gt;DEVICE=ib0&lt;BR /&gt;USERCTL=no&lt;BR /&gt;ONBOOT=yes&lt;BR /&gt;MASTER=bond1&lt;BR /&gt;BOOTPROTO=none&lt;BR /&gt;SLAVE=yes&lt;BR /&gt;TYPE=InfiniBand&lt;BR /&gt;HOTPLUG=no&lt;BR /&gt;CONNECTED_MODE=yes&lt;BR /&gt;MTU=65520&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;/etc/sysconfig/network-scripts/ifcfg-ib2&lt;BR /&gt;DEVICE=ib2&lt;BR /&gt;USERCTL=no&lt;BR /&gt;ONBOOT=yes&lt;BR /&gt;MASTER=bond1&lt;BR /&gt;BOOTPROTO=none&lt;BR /&gt;SLAVE=yes&lt;BR /&gt;TYPE=InfiniBand&lt;BR /&gt;PRIMARY=yes&lt;BR /&gt;HOTPLUG=no&lt;BR /&gt;CONNECTED_MODE=yes&lt;BR /&gt;MTU=65520&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;cat /proc/net/bonding/bond0&lt;BR /&gt;&lt;BR /&gt;Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)&lt;BR /&gt;&lt;BR /&gt;Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)&lt;BR /&gt;Primary Slave: ib2&lt;BR /&gt;Currently Active Slave: ib2&lt;BR /&gt;MII Status: up&lt;BR /&gt;MII Polling Interval (ms): 100&lt;BR /&gt;Up Delay (ms): 0&lt;BR /&gt;Down Delay (ms): 0&lt;BR /&gt;&lt;BR /&gt;Slave Interface: ib0&lt;BR /&gt;MII Status: up&lt;BR /&gt;Link Failure Count: 0&lt;BR /&gt;Permanent HW addr: 80:00:00:48:fe:80&lt;BR /&gt;&lt;BR /&gt;Slave Interface: ib2&lt;BR /&gt;MII Status: up&lt;BR /&gt;Link Failure Count: 0&lt;BR /&gt;Permanent HW addr: 80:00:00:48:fe:80</description>
      <pubDate>Mon, 30 Aug 2010 17:50:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681022#M81023</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2010-08-30T17:50:52Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681023#M81024</link>
      <description>10pts just for CONNECTED_MODE=yes! :)&lt;BR /&gt;&lt;BR /&gt;I'll post my results tomorrow.</description>
      <pubDate>Mon, 30 Aug 2010 23:03:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681023#M81024</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2010-08-30T23:03:21Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681024#M81025</link>
      <description>Yep.. I had an issue with this too, as the OpenIB docs said to set this in /etc/infiniband/openib.conf.&lt;BR /&gt;&lt;BR /&gt;That did not work..&lt;BR /&gt;&lt;BR /&gt;After many moons of searching I found, in the /etc/sysconfig/ifup-ib script, that CONNECTED_MODE is read from the ifcfg-ibX files.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 31 Aug 2010 14:10:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681024#M81025</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2010-08-31T14:10:42Z</dc:date>
    </item>
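    <!-- The fix described in the post above (CONNECTED_MODE belongs in the per-interface
         ifcfg files, not in /etc/infiniband/openib.conf) can be sketched as a minimal
         slave-interface config. The interface name, bond name, and comments here are
         illustrative assumptions, not taken verbatim from the thread:

    ```
    # /etc/sysconfig/network-scripts/ifcfg-ib0  (hypothetical example)
    DEVICE=ib0
    TYPE=InfiniBand
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    HOTPLUG=no
    MASTER=bond1          # enslave to the InfiniBand bond
    SLAVE=yes
    CONNECTED_MODE=yes    # IPoIB connected mode; permits the large 65520-byte MTU
    MTU=65520
    ```
    -->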
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681025#M81026</link>
      <description>Wow, this is a whole new area to learn about and find errors in!&lt;BR /&gt;&lt;BR /&gt;After correcting the typo in your config (DEVICE=bond0 in the bond1 config file) I managed to re-type it into my own config.&lt;BR /&gt;&lt;BR /&gt;Before that, it meant my bond0 ethernet bond switched, among other things, its mode.&lt;BR /&gt;&lt;BR /&gt;That's all sorted now and the InfiniBand bond1 looks good. I can't ping through it though.&lt;BR /&gt;&lt;BR /&gt;One thing, does yours also say it is the "Ethernet Channel Bonding Driver"?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[root@waxh0003 ~]# cat /proc/net/bonding/bond1&lt;BR /&gt;Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)&lt;BR /&gt;&lt;BR /&gt;Bonding Mode: fault-tolerance (active-backup) (fail_over_mac)&lt;BR /&gt;Primary Slave: ib0&lt;BR /&gt;Currently Active Slave: ib0&lt;BR /&gt;MII Status: up&lt;BR /&gt;MII Polling Interval (ms): 100&lt;BR /&gt;Up Delay (ms): 0&lt;BR /&gt;Down Delay (ms): 0&lt;BR /&gt;&lt;BR /&gt;Slave Interface: ib0&lt;BR /&gt;MII Status: up&lt;BR /&gt;Link Failure Count: 0&lt;BR /&gt;Permanent HW addr: 80:00:04:04:fe:80&lt;BR /&gt;&lt;BR /&gt;Slave Interface: ib1&lt;BR /&gt;MII Status: up&lt;BR /&gt;Link Failure Count: 0&lt;BR /&gt;Permanent HW addr: 80:00:04:05:fe:80&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;While pinging I can see the counters for ib0 going up (both rx and tx), while ib1, being a good girl, waits for a failover event.&lt;BR /&gt;&lt;BR /&gt;Next thing I'll check is the connected mode setting on the other host, maybe that's all that's to blame. Thanks so much for your help, I'm not sure I'd ever have noticed the ifup-ib file to start with.</description>
      <pubDate>Tue, 31 Aug 2010 21:41:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681025#M81026</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2010-08-31T21:41:08Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681026#M81027</link>
      <description>Sorry about any errors...&lt;BR /&gt;&lt;BR /&gt;My configuration has bond0 as an ethernet bond and bond1 as an InfiniBand bond.&lt;BR /&gt;&lt;BR /&gt;I attempted to correct the bond1 references to bond0 in my post in order to simplify things / not have to explain the ethernet bond.. sorry about that.. should have just left it..&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Sep 2010 15:04:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681026#M81027</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2010-09-01T15:04:18Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681027#M81028</link>
      <description>That's great - our setups seem identical in all but the switch vendor.&lt;BR /&gt;&lt;BR /&gt;Unfortunately, I can't ping though ;))&lt;BR /&gt;&lt;BR /&gt;Do you see anything wrong with my modprobe.conf here?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;alias bond1 bonding&lt;BR /&gt;alias ib0 ib_ipoib&lt;BR /&gt;alias ib1 ib_ipoib&lt;BR /&gt;&lt;BR /&gt;alias bond0 bonding&lt;BR /&gt;alias eth0 e1000&lt;BR /&gt;alias eth1 e1000e&lt;BR /&gt;&lt;BR /&gt;options bonding max_bonds=4&lt;BR /&gt;&lt;BR /&gt;alias scsi_hostadapter ahci&lt;BR /&gt;alias scsi_hostadapter1 usb-storage&lt;BR /&gt;&lt;BR /&gt;options netloop nloopbacks=0&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Sep 2010 17:35:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681027#M81028</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2010-09-01T17:35:31Z</dc:date>
    </item>
    <item>
      <title>Re: Infiniband bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681028#M81029</link>
      <description>Is there an IP stack on your bond interface?&lt;BR /&gt;Can you ping locally but not the remote server on the fabric? If so, then I would guess it is an IB switch routing issue. There are some IB diag utils that can be installed which can help you view your fabric and port connections:&lt;BR /&gt;infiniband-diags&lt;BR /&gt;&lt;BR /&gt;My modprobe is below, nothing special.&lt;BR /&gt;&lt;BR /&gt;alias bond0 bonding&lt;BR /&gt;options bonding max_bonds=2&lt;BR /&gt;alias bond1 bonding&lt;BR /&gt;primary=ib0&lt;BR /&gt;alias eth0 bnx2&lt;BR /&gt;alias eth1 bnx2&lt;BR /&gt;alias eth2 bnx2&lt;BR /&gt;alias eth3 bnx2&lt;BR /&gt;alias scsi_hostadapter cciss&lt;BR /&gt;alias scsi_hostadapter1 ata_piix&lt;BR /&gt;alias scsi_hostadapter2 qla2xxx&lt;BR /&gt;alias scsi_hostadapter3 usb-storage&lt;BR /&gt;options qla2xxx ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Sep 2010 18:02:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/infiniband-bonding/m-p/4681028#M81029</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2010-09-01T18:02:07Z</dc:date>
    </item>
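    <!-- The fabric checks suggested in the last post come from the infiniband-diags
         package (RHEL/CentOS; the post abbreviates it as "infiniband-diag"). A small
         sketch that only reports whether the common diagnostic tools are installed;
         once present, ibstat shows local HCA port state, and ibhosts / iblinkinfo
         walk the fabric and its links:

    ```shell
    # Check which InfiniBand diagnostic tools are present on this host.
    # On RHEL/CentOS they ship in the infiniband-diags package.
    for tool in ibstat ibhosts iblinkinfo; do
      if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: installed"
      else
        echo "$tool: missing (yum install infiniband-diags)"
      fi
    done
    ```
    -->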
  </channel>
</rss>

