network bonding -- looking for optimal performance
Operating System - Linux
02-11-2010 09:18 AM
Good morning all;
We are running an application that relies on good network and NFS performance (Oracle EBS 11.5.10). Currently I have two of the four network cards bonded together (bond0). I would like to add the two remaining network cards to that existing bond. My question: is this a good way to go? Should I do something different? Should I change my existing bond mode from 0 to something else?
Thank you for your input and your help; it is greatly appreciated.
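Before changing anything, it is worth confirming what bond0 is currently doing; a minimal sketch, assuming a RHEL-style layout where the bond is configured via module options:

# show the active bonding mode, slave NICs, and per-link status
cat /proc/net/bonding/bond0

# the mode is typically set where the bonding module is configured, e.g.
# /etc/modprobe.conf (or /etc/modprobe.d/*.conf):
#   alias bond0 bonding
#   options bond0 mode=0 miimon=100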
Solved! Go to Solution.
2 REPLIES
02-11-2010 04:49 PM
Solution
Personally, I am not all that fond of mode 0 (round robin). Yes, it will allow a single "flow" (e.g. a TCP connection) to make use of more than one link in the bond, but it also means that traffic on that flow will be reordered.
Indeed, TCP will "deal" with that - every out of order segment will result in an immediate ACK rather than waiting to "ack every other." This will increase CPU utilization on both sides.
If there are enough of these out-of-order TCP segments, they can trigger a spurious "fast retransmit" - it takes three "duplicate ACKs" to trigger one. With only two links in the bond that is unlikely, but it becomes more likely with four links in the bond.
If you have a situation where you need a single stream/flow/connection to go faster than a single GbE link, as unpleasant as the prices might be, I would suggest a 10G link.
If you have many TCP connections, you might consider one of the other bonding modes. Whatever mode you pick, how traffic is distributed on the *inbound* side is up to the switch - and the same may apply to your NFS server: how inbound traffic gets spread across the links in its bond can depend on the switch settings.
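If moving away from mode 0 looks worthwhile, a sketch of what an 802.3ad (mode 4) setup could look like on a RHEL-style system is below; the file paths, interface names, and addresses are assumptions, and mode 4 requires LACP to be configured on the switch ports:

# /etc/modprobe.conf (or /etc/modprobe.d/bonding.conf on newer systems)
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical addressing)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# each slave NIC (eth0..eth3) gets its own ifcfg-ethN containing:
#   DEVICE=ethN
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes
#   BOOTPROTO=none

With xmit_hash_policy=layer3+4, different TCP/NFS connections can land on different links while any single flow stays on one link, which avoids the reordering and duplicate-ACK behaviour described above.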
there is no rest for the wicked yet the virtuous have no pillows
02-15-2010 01:41 AM
Re: network bonding -- looking for optimal performance
Are the existing channels actually saturated? If not, you have more options, such as using the new NICs as backups. I usually go with mode 6 (adaptive load balancing, which does not require switch support), although I've only used 2 NICs max, so a 4-NIC setup may need something different. Anyway, http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Bonding_Driver_Options has detailed information on bonding. HTH.
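For completeness, a minimal mode 6 (balance-alb) sketch in the same style as the example above; again, the paths are assumptions, and no switch-side configuration is needed:

# /etc/modprobe.conf (assumed location)
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

# after restarting the network service, verify the active mode and slave links:
cat /proc/net/bonding/bond0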