03-15-2006 10:09 PM
Bonding infiniband interfaces
Hi all
I am trying to bond two InfiniBand interfaces, ib0 and ib1, but when I try to bring the bond up I get the following message:
ifup ibbond0
bonding device ibbond0 does not seem to be present, delaying initialization.
However, lsmod shows that the bonding module is loaded into the kernel, and it is already being used to bond two Ethernet cards.
Or is it the case that this module can only be used to bond Ethernet NICs?
Kernel 2.6.9-22 with openinfiniband.org patches on RHEL 4 ES.
Ah well, that's how it is.
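For reference, on RHEL 4 the "does not seem to be present" message from ifup usually means the bonding master device was never created, which in most cases comes down to a missing alias in /etc/modprobe.conf. Below is a minimal sketch of the standard Ethernet-style setup, reusing the ibbond0 name from the post; the address and the mode/miimon values are only illustrative. Note, though, that the stock 2.6.9-era bonding driver was written with Ethernet slaves in mind, so even with this configuration in place it may refuse to enslave IPoIB interfaces such as ib0 and ib1:

# /etc/modprobe.conf -- lets the initscripts create ibbond0 on demand
alias ibbond0 bonding
options ibbond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-ibbond0 -- the bonding master
DEVICE=ibbond0
IPADDR=192.168.10.1       # illustrative address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-ib0 -- one slave; ifcfg-ib1 is identical apart from DEVICE
DEVICE=ib0
MASTER=ibbond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none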
1 REPLY
03-16-2006 07:53 PM
Re: Bonding infiniband interfaces
Hi Greg,
In my opinion, bonding two or more InfiniBand interfaces is not possible, because the bonding driver was not designed for it.
InfiniBand is mainly used in HPC (clustering) because it gives very low latency, which is mandatory for good performance.
I think of InfiniBand much like the cluster interconnect in Tru64, by which I mean:
each node or server connected to the interconnect network has only one interface.
If you want redundancy, or if you want to increase performance, you must use a dual-rail configuration.
That means two interfaces on each node/server: one connected to interconnect network A and the other connected to network B.
The two networks can be used at the same time for parallelism, but the software has to handle that, not the kernel or the drivers.
Try looking for a solution based on dual InfiniBand rails instead of bonding the two interfaces (a rough sketch follows below).
Regards,
Lionel
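As a rough illustration of the dual-rail idea above (the subnets and addresses are invented, and any striping or failover across the two rails would have to be handled by the application or MPI layer, not the kernel), each IPoIB interface is simply configured on its own network and never enslaved to a bond:

# /etc/sysconfig/network-scripts/ifcfg-ib0 -- rail A
DEVICE=ib0
IPADDR=192.168.1.10       # illustrative address on interconnect network A
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-ib1 -- rail B
DEVICE=ib1
IPADDR=192.168.2.10       # illustrative address on interconnect network B
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none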