<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Bonding InfiniBand interfaces in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/bonding-infiniband-interfaces/m-p/3752400#M85998</link>
    <description>Hi all &lt;BR /&gt;&lt;BR /&gt;I am trying to bond two InfiniBand interfaces, ib0 and ib1, but when I try to bring the bond up I get the following message:&lt;BR /&gt;&lt;BR /&gt;ifup ibbond0&lt;BR /&gt;bonding device ibbond0 does not seem to be present, delaying initialization.&lt;BR /&gt;&lt;BR /&gt;However, lsmod shows the bonding module is loaded into the kernel, and it is already in use bonding two Ethernet cards.&lt;BR /&gt;&lt;BR /&gt;Or is it the case that this module can only be used to bond Ethernet NICs?&lt;BR /&gt;&lt;BR /&gt;Kernel 2.6.9-22 using openinfiniband.org patches on RHEL 4 ES</description>
    <pubDate>Thu, 16 Mar 2006 06:09:40 GMT</pubDate>
    <dc:creator>Greg Rudd01</dc:creator>
    <dc:date>2006-03-16T06:09:40Z</dc:date>
    <item>
      <title>Bonding InfiniBand interfaces</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-infiniband-interfaces/m-p/3752400#M85998</link>
      <description>Hi all &lt;BR /&gt;&lt;BR /&gt;I am trying to bond two InfiniBand interfaces, ib0 and ib1, but when I try to bring the bond up I get the following message:&lt;BR /&gt;&lt;BR /&gt;ifup ibbond0&lt;BR /&gt;bonding device ibbond0 does not seem to be present, delaying initialization.&lt;BR /&gt;&lt;BR /&gt;However, lsmod shows the bonding module is loaded into the kernel, and it is already in use bonding two Ethernet cards.&lt;BR /&gt;&lt;BR /&gt;Or is it the case that this module can only be used to bond Ethernet NICs?&lt;BR /&gt;&lt;BR /&gt;Kernel 2.6.9-22 using openinfiniband.org patches on RHEL 4 ES</description>
      <pubDate>Thu, 16 Mar 2006 06:09:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-infiniband-interfaces/m-p/3752400#M85998</guid>
      <dc:creator>Greg Rudd01</dc:creator>
      <dc:date>2006-03-16T06:09:40Z</dc:date>
    </item>
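    <!-- On RHEL 4, "does not seem to be present, delaying initialization" from ifup usually
         means the initscripts found no modprobe alias mapping the device name to the bonding
         driver, so the interface was never created. Below is a minimal sketch of the stock
         Ethernet-style bonding layout, reusing the ibbond0 name from the post; the mode,
         miimon value, and IP address are placeholders, and whether the 2.6.9-22 bonding
         driver accepts IPoIB slaves at all is exactly the open question in this thread, so
         treat it as a starting point rather than a confirmed fix.

         # /etc/modprobe.conf
         alias ibbond0 bonding
         options ibbond0 mode=1 miimon=100    # active-backup with link monitoring (assumed)

         # /etc/sysconfig/network-scripts/ifcfg-ibbond0
         DEVICE=ibbond0
         IPADDR=192.168.10.1                  # placeholder address
         NETMASK=255.255.255.0
         ONBOOT=yes
         BOOTPROTO=none

         # /etc/sysconfig/network-scripts/ifcfg-ib0 (ifcfg-ib1 identical apart from DEVICE)
         DEVICE=ib0
         MASTER=ibbond0
         SLAVE=yes
         ONBOOT=yes
         BOOTPROTO=none
    -->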
    <item>
      <title>Re: Bonding InfiniBand interfaces</title>
      <link>https://community.hpe.com/t5/operating-system-linux/bonding-infiniband-interfaces/m-p/3752401#M85999</link>
      <description>Hi Greg,&lt;BR /&gt;&lt;BR /&gt;In my opinion, bonding two or more InfiniBand interfaces is not possible, because the bonding driver was not designed for them.&lt;BR /&gt;InfiniBand is mainly used in HPC (clustering) because it gives the low latency that is mandatory for good performance.&lt;BR /&gt;I therefore consider InfiniBand to be like the interconnect in Tru64, meaning:&lt;BR /&gt;Each node or server connected to the interconnect network has only one interface.&lt;BR /&gt;If you want redundancy, or if you want to increase performance, you must use a dual-rail configuration.&lt;BR /&gt;That is, two interfaces on each node/server: one connected to interconnect network A and the other connected to network B.&lt;BR /&gt;These two networks can be used at the same time for parallelization, but the software has to handle that, not the kernel or the drivers.&lt;BR /&gt;&lt;BR /&gt;Try searching for a solution based on dual InfiniBand rails instead of bonding these two interfaces.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Lionel</description>
      <pubDate>Fri, 17 Mar 2006 03:53:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/bonding-infiniband-interfaces/m-p/3752401#M85999</guid>
      <dc:creator>Lionel Giraudeau</dc:creator>
      <dc:date>2006-03-17T03:53:42Z</dc:date>
    </item>
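    <!-- A minimal sketch of the dual-rail layout Lionel describes: each IPoIB interface is
         configured independently on its own fabric and subnet, with no bonding master
         involved; the application layer (for example an MPI implementation) decides how to
         spread traffic across the two rails. The subnets are placeholders, and the fabric
         A/B split assumes two physically separate InfiniBand networks as in the post.

         # /etc/sysconfig/network-scripts/ifcfg-ib0   (rail A)
         DEVICE=ib0
         IPADDR=10.0.1.1
         NETMASK=255.255.255.0
         ONBOOT=yes
         BOOTPROTO=none

         # /etc/sysconfig/network-scripts/ifcfg-ib1   (rail B)
         DEVICE=ib1
         IPADDR=10.0.2.1
         NETMASK=255.255.255.0
         ONBOOT=yes
         BOOTPROTO=none
    -->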
  </channel>
</rss>

