Network Load Balancing between 2 NIC cards
01-16-2003 01:12 AM
I have two NIC cards on a DL380 ProLiant server. Is it possible to have network load balancing on these two NIC cards under Red Hat Linux 8.0?
3 REPLIES
01-16-2003 02:41 AM
Re: Network Load Balancing between 2 NIC cards
Are you planning to configure both network cards on the same network?
01-16-2003 03:48 AM
Re: Network Load Balancing between 2 NIC cards
Hi,
ref:
Background on Channel Bonding is available from
http://www.beowulf.org/software/software.html
Requirements:
Two Ethernet NICs per system, and two hubs (one for each channel), two switches (one for each channel), or a switch that can be segmented into virtual LANs.
Steps (for kernel 2.0.36):
1. Download and build the ifenslave.c program (http://beowulf.gsfc.nasa.gov/software/bonding.html). Comment out line 35 ("#include ") and compile using "gcc -Wall -Wstrict-prototypes -O ifenslave.c -o ifenslave".
2. Apply the kernel patch (get linux-2.0.36-channel-bonding.patch from ftp.plogic.com), run xconfig, and enable Beowulf channel bonding.
3. Rebuild and install the kernel.
Each channel must be on a separate switch or hub (or a segmented switch). There is no need to assign an IP number to the second interface, although using it as a separate network (without channel bonding) may have advantages for some applications.
To channel bond, login to each system as root and issue the following command on each system:
./ifenslave -v eth0 eth1
This will bond eth1 to eth0. It assumes that eth0 is already configured and used as your cluster network, and that eth1 is the second Ethernet card detected by the OS at boot.
You can do this from the host node by enslaving all nodes BEFORE the host (order is important: node 2 must be enslaved before the host, node 1). For each node, do the following:
a. open a window; b. log in to node 2; c. enter the above command as root; d. in a separate window, enter the above command for node 1 as root.
Your cluster should now be "channel bonded". You can test this by running netperf or a similar benchmark.
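For example, from one node (the hostname below is just a placeholder for any other node in the cluster):
netperf -H node2
With bonding active, the reported throughput for a single stream should be noticeably higher than a single NIC delivers on its own.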
Channel bonding shutdown is not as simple. We are investigating this and will provide command-line tools that set up and tear down channel bonding automatically. In the meantime, the safest way to restore single-channel performance is to either reboot each system or use the network manager (part of the control panel) to shut down and restart each interface.
REMEMBER: communication between a channel-bonded node and a non-channel-bonded node is very slow, if not impossible. Therefore the whole cluster must be channel bonded.
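Note that the steps above are for the old 2.0.36 kernel. Red Hat Linux 8.0 ships a 2.4-series kernel whose stock bonding driver can be configured through the normal Red Hat network scripts instead, with no kernel patch needed. A minimal sketch, assuming the default Red Hat file paths and that eth0 and eth1 are your two cards (the IP address is a placeholder for your own):

# /etc/modules.conf -- load the bonding driver for bond0;
# mode=0 (balance-rr) load-balances traffic across both NICs,
# miimon=100 checks the link state every 100 ms
alias bond0 bonding
options bond0 mode=0 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond carries the IP
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for eth1 with DEVICE=eth1
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

After a "service network restart", cat /proc/net/bonding/bond0 should list both slave interfaces as up.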
regards,
U.SivaKumar
Innovations are made when conventions are broken
01-16-2003 04:06 AM
Re: Network Load Balancing between 2 NIC cards
Both are on the same subnet. What I would like to do is make sure the network does not go down if either one of the network cards fails.
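If pure failover is the goal rather than extra throughput, the 2.4 bonding driver in Red Hat 8.0 also has an active-backup mode: one NIC carries all traffic and the other takes over when the active link dies. A sketch, assuming the same ifcfg-bond0, ifcfg-eth0, and ifcfg-eth1 files as in the previous reply, changing only the driver options:

# /etc/modules.conf -- mode=1 (active-backup): one NIC active,
# the other on hot standby; miimon=100 polls the MII link state
# every 100 ms so a pulled cable or dead card triggers failover
alias bond0 bonding
options bond0 mode=1 miimon=100

No userland helper is needed for the switchover; the driver moves traffic to the standby NIC as soon as the link monitor notices the failure.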