incompatibility between named dns and bonding
09-12-2005 07:01 PM
As a reference, the kernel is 2.6.5-7.151-smp. The log shows no entries related to the problem. The HP Support Pack is not installed, because bonding worked with just the SUSE updates and I am a bit wary of installing new drivers on a production server.
Thank you
09-12-2005 08:13 PM
Re: incompatibility between named dns and bonding
IMHO, the problem isn't BIND + bonding but bonding itself.
Which NICs/driver do you use?
09-12-2005 09:18 PM
Re: incompatibility between named dns and bonding
09-12-2005 09:50 PM
Re: incompatibility between named dns and bonding
I agree with Vitaly Karasik on this one. I have built several servers with bonded NICs in the US and none of them has exhibited this behavior.
The network connectivity problem may come from how you did the bonding.
I'd like to see the output (altered if need be) from
ifconfig bond0
Output from ethtool and other utilities may also explain the problem.
There may be a problem in the network config files of the two Ethernet cards, or in the ifcfg-bond0 config file. There is also an entry, which I insert into the /etc/init.d/network script, that is needed to make this work right.
Please post the procedure or cookbook that you used.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
09-12-2005 10:05 PM
Re: incompatibility between named dns and bonding
Anyway, in /etc/sysconfig/network I have this configuration for the bond (when active):
BOOTPROTO='static'
BROADCAST='159.213.51.255'
IPADDR='159.213.51.131'
NETMASK='255.255.255.0'
MTU=''
REMOTE_IPADDR=''
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='miimon=100 mode=active-backup'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
and for eth0:
BOOTPROTO='none'
STARTMODE='off'
UNIQUE='GA8e.dR48ZsvS6aD'
_nm_name='bus-pci-0000:02:01.0'
and for eth1:
BOOTPROTO='none'
STARTMODE='off'
UNIQUE='LHB6.dR48ZsvS6aD'
_nm_name='bus-pci-0000:02:02.0'
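As a side note, when the bond is active the kernel's bonding driver exposes its state under /proc/net/bonding, which is a quick way to confirm which slave active-backup has selected. A minimal sketch of pulling the active slave out of that file's format (the sample text below is illustrative, not captured from this server; on a live system you would read the real file):

```shell
# Extract the currently active slave from text in the format of
# /proc/net/bonding/bond0 (sample contents below are illustrative).
# On a live system, read the real file instead:
#   status=$(cat /proc/net/bonding/bond0)
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'
active=$(printf '%s\n' "$status" | sed -n 's/^Currently Active Slave: //p')
echo "active slave: $active"
```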
ifconfig now gives:
ulisse:~ # ifconfig
eth0 Link encap:Ethernet HWaddr 00:0D:9D:4D:F4:37
inet addr:159.213.51.131 Bcast:159.213.51.255 Mask:255.255.255.0
inet6 addr: fe80::20d:9dff:fe4d:f437/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:87728714 errors:0 dropped:0 overruns:0 frame:0
TX packets:95781093 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3402984768 (3245.3 Mb) TX bytes:1846716390 (1761.1 Mb)
Interrupt:29
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:22666404 errors:0 dropped:0 overruns:0 frame:0
TX packets:22666404 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2995445058 (2856.6 Mb) TX bytes:2995445058 (2856.6 Mb)
As an example, I have another ProLiant (DL580) with a similar configuration. The output of ifconfig is:
dl580n1:~ # ifconfig
bond0 Link encap:Ethernet HWaddr 00:02:A5:4F:5E:32
inet addr:172.18.10.25 Bcast:172.18.10.255 Mask:255.255.255.0
inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:291918978 errors:0 dropped:0 overruns:0 frame:0
TX packets:259961188 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:792101271 (755.4 Mb) TX bytes:3625147679 (3457.2 Mb)
eth0 Link encap:Ethernet HWaddr 00:02:A5:4F:5E:32
inet addr:172.18.10.25 Bcast:172.18.10.255 Mask:255.255.255.0
inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:284130508 errors:0 dropped:0 overruns:0 frame:0
TX packets:259961185 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:117464164 (112.0 Mb) TX bytes:3625147457 (3457.2 Mb)
Base address:0x5000 Memory:f7fe0000-f8000000
eth1 Link encap:Ethernet HWaddr 00:02:A5:4F:5E:32
inet addr:172.18.10.25 Bcast:172.18.10.255 Mask:255.255.255.0
inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link
UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
RX packets:7788470 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:674637107 (643.3 Mb) TX bytes:222 (222.0 b)
Base address:0x5040 Memory:f7f60000-f7f80000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:328106 errors:0 dropped:0 overruns:0 frame:0
TX packets:328106 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:69468287 (66.2 Mb) TX bytes:69468287 (66.2 Mb)
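One difference worth noting between the two outputs above: on the working DL580, ifconfig shows a bond0 master and eth0/eth1 flagged SLAVE with a shared MAC, while on the problem server no bond0 interface appears at all. A quick way to classify an interface's bonding role from its ifconfig flags line (the sample line is copied from the DL580 output above):

```shell
# Classify an interface's bonding role from its ifconfig flags line.
# Sample line taken from the DL580 output above.
line='UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1'
case "$line" in
    *MASTER*) role=master ;;      # the bond interface itself
    *SLAVE*)  role=slave ;;       # an enslaved physical NIC
    *)        role=standalone ;;  # not part of a bond
esac
echo "bonding role: $role"
```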
09-13-2005 12:59 AM
Solution
Do other services work fine?
If you run the named server in the foreground, do you get any output?
Try configuring the options:
options {
listen-on { bond_ip_address; };
interface-interval 0;
};
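A fuller sketch of those options as they might appear in /etc/named.conf (the address shown is the bond IP from the configuration posted earlier; substitute your own):

```
options {
    // Answer queries only on the bond's address, so named does not
    // bind to slave interfaces that come and go during a failover.
    listen-on { 159.213.51.131; };
    // Disable the periodic interface rescan (the default is every
    // 60 minutes); with 0, named will not re-scan interfaces while running.
    interface-interval 0;
};
```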
09-13-2005 01:14 AM
Re: incompatibility between named dns and bonding
I didn't try running the daemon in the foreground, nor with a debug level. Tomorrow I'll run this test.
Can you please specify where to write these configuration options?
Thank you
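For the foreground test, BIND's named supports -g (stay in the foreground and log to stderr) and -d (set the debug level). The sketch below only builds and prints the command, since actually running it needs root and a configured named installation:

```shell
# Build the command line for a foreground debug run of BIND's named:
#   -g  stay in the foreground and log to stderr
#   -d  set the debug level (higher = more verbose)
debug_level=1
cmd="named -g -d $debug_level"
echo "run as root: $cmd"
```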
09-13-2005 03:28 PM
Re: incompatibility between named dns and bonding
Please don't take offense, but does your switch know about the bonding configuration? It appears not!
09-13-2005 11:47 PM
Re: incompatibility between named dns and bonding
1 - the two NICs on the server are attached to two different L3 switches, with OSPF and other software detecting redundant paths
2 - bonding is OK without the named daemon
3 - another server with the same bonding, attached to the same switches, is working OK
4 - bonding is working in active-backup mode, so it keeps one NIC active and one NIC inactive; when the link on the working NIC goes down, it brings the other NIC into service.
Does that make sense?
09-14-2005 12:40 AM
Re: incompatibility between named dns and bonding
Talk to the net admin: even with one NIC in fallback mode in the bonding config, the second NIC might be visible if the switch doesn't know about the setup that's been done.
I don't have much practice with this, but I think splitting a host over two switches needs either switch-crossing trunk capability (large switches have that) or turning to STP.
09-14-2005 02:57 AM
Re: incompatibility between named dns and bonding
I think the switch learns the MAC when an IP is active on the port.
BTW, those options should be configured in the /etc/named.conf file.
09-14-2005 10:38 PM
Re: incompatibility between named dns and bonding
Thank you for now
09-18-2005 10:22 PM
Re: incompatibility between named dns and bonding
Thank you for the sharp suggestions.
09-18-2005 10:25 PM