04-16-2008 07:24 AM
Oracle RAC, RHEL4 and NIC bonding
Description of problem: I have two RHEL4 systems running Oracle RAC, with 4 NICs each. The NICs are configured into two bonds on each system: bond0 is made up of eth0 and eth2, and bond1 is made up of eth1 and eth3. bond0 carries a public IP address; bond1 carries a private IP address that serves as the interconnect for the Oracle RAC cluster.
If I run ping -I eth1 against the interconnect address, I get "Destination Host Unreachable" messages. If I then run ping -I eth3 against the same address, I get "Destination Host Unreachable" messages again, followed by the public network connections dropping. I was able to ssh from the second system back into the "down" system across the private interconnect, but the response was extremely slow and many commands hung. I had to power the system down to restart it.
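(For reference, the tests looked like this; the peer address shown is made up here, standing in for the second node's bond1 address:)
[root@rac1]# ping -c 3 -I eth1 192.168.234.11
[root@rac1]# ping -c 3 -I eth3 192.168.234.11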
Additional info:
# cat /etc/modprobe.conf
alias eth0 tg3
alias eth1 tg3
alias eth3 e1000
alias eth2 e1000
alias bond0 bonding
options bond0 miimon=100 mode=5 max_bonds=2
alias bond1 bonding
options bond1 miimon=100 mode=5
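(Since the bonding module is only loaded once, I'm not certain both options lines actually take effect; the mode each bond really came up with can be read from /proc/net/bonding:)
[root@rac1]# grep -i "bonding mode" /proc/net/bonding/bond0
[root@rac1]# grep -i "bonding mode" /proc/net/bonding/bond1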
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
IPADDR=xxx.xxx.xxx.10
NETMASK=255.255.255.224
ONBOOT=yes
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
IPADDR=192.168.234.10
NETMASK=255.255.255.224
ONBOOT=yes
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation|NetXtreme BCM5703 Gigabit Ethernet
DEVICE=eth0
BOOTPROTO=none
HWADDR=00:02:A5:4E:04:7E
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Broadcom Corporation|NetXtreme BCM5703 Gigabit Ethernet
DEVICE=eth1
BOOTPROTO=none
HWADDR=00:02:A5:4E:04:7F
MASTER=bond1
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
# Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
DEVICE=eth2
BOOTPROTO=none
HWADDR=00:0E:7F:F1:3E:0D
MASTER=bond0
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
[root@rac1]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
# Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
DEVICE=eth3
BOOTPROTO=none
HWADDR=00:0E:7F:F1:3E:0C
MASTER=bond1
ONBOOT=yes
SLAVE=yes
TYPE=Ethernet
We changed modprobe.conf on each server from "mode=5" to "mode=6" and repeated the test above. The interconnect is still unreachable, but the dropped-connections problem appears to be resolved.
Now for the long-awaited question: what is the difference between mode 5 and mode 6 that would account for this? I've read several posts and the Red Hat docs on bonding, but I don't understand why that change would resolve the "crash" problem.
The second question: the interconnect is still not reachable, and it seems to us that it should be.
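(For anyone following along, these are the symbolic names for the two modes per the kernel bonding documentation, shown here only to make the question concrete:)
options bond0 miimon=100 mode=balance-tlb   # mode=5: adaptive transmit load balancing only
options bond1 miimon=100 mode=balance-alb   # mode=6: balance-tlb plus receive load balancing via ARP negotiation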
Appreciate any help...
Thanks,
"I have not failed. I've just found 10,000 ways that won't work." - Thomas Edison
3 REPLIES
04-16-2008 09:05 AM
Re: Oracle RAC, RHEL4 and NIC bonding
Shalom,
The gateway is probably incorrect on the malfunctioning interface.
That is controlled in the ifcfg-bond# file, and you need to make sure there is no GATEWAY setting in /etc/sysconfig/network, for example.
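(A quick way to see every place a gateway is set -- generic commands, nothing specific to this box:)
[root@rac1]# grep GATEWAY /etc/sysconfig/network
[root@rac1]# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-bond0 /etc/sysconfig/network-scripts/ifcfg-bond1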
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
04-16-2008 10:01 AM
Re: Oracle RAC, RHEL4 and NIC bonding
Hi, Steven...
/etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=rac1
GATEWAY=123.456.789.010
So, should I delete the "GATEWAY" entry on each server?
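(And if so, I assume the default route would then move into the public bond's file instead, something like this -- router address made up:)
# proposed addition to ifcfg-bond0:
GATEWAY=xxx.xxx.xxx.1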
Thanks,
:-)
"I have not failed. I've just found 10,000 ways that won't work." - Thomas Edison
04-16-2008 01:16 PM
Re: Oracle RAC, RHEL4 and NIC bonding
What output does route -n provide?
And what does a ping to 192.168.234.10 show?
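(i.e., run from the first node -- the second node's interconnect address is assumed here to be 192.168.234.11:)
[root@rac1]# route -n
[root@rac1]# ping -c 3 192.168.234.10
[root@rac1]# ping -c 3 192.168.234.11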
21 is only half the truth