<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: incompatibility between named dns and bonding in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925489#M85201</link>
    <description>The discontinuity is at the network level and impacts all services.&lt;BR /&gt;I didn't try to run the daemon in the foreground, nor with a debug level. Tomorrow I'll run this test.&lt;BR /&gt;&lt;BR /&gt;Can you specify where these configuration options should be written?&lt;BR /&gt;Thank you</description>
    <pubDate>Tue, 13 Sep 2005 08:14:21 GMT</pubDate>
    <dc:creator>Daniele Bernazzi</dc:creator>
    <dc:date>2005-09-13T08:14:21Z</dc:date>
    <item>
      <title>incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925483#M85195</link>
      <description>Hi, I have a DL380 G3 with SUSE SLES9 i386. In my configuration I teamed the two NICs for redundancy, so I have a static IP configured on bond0, which enslaves eth0 and eth1. So far, so good: it works fine. Problems arose after setting up the DNS server (bind-9.2.3-76.14): the server suffers lots of network discontinuities. To work around the problem I had to disable bonding and configure just one NIC. Now it works fine, but network redundancy is lost, and my question is: how can I have both features (bonding and DNS) working together?&lt;BR /&gt;For reference, the kernel is 2.6.5-7.151-smp and there are no log entries related to the problem. The HP Support Pack is not installed, because everything worked with just the SUSE updates, and I am a bit worried about installing new drivers on a production server.&lt;BR /&gt;Thank you&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Sep 2005 02:01:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925483#M85195</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-13T02:01:17Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925484#M85196</link>
      <description>Did you try to stress the network with other services (FTP, NFS, and so on)?&lt;BR /&gt;IMHO, the problem isn't bind + bonding but bonding itself.&lt;BR /&gt;Which NICs/driver do you use?</description>
      <pubDate>Tue, 13 Sep 2005 03:13:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925484#M85196</guid>
      <dc:creator>Vitaly Karasik_1</dc:creator>
      <dc:date>2005-09-13T03:13:40Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925485#M85197</link>
      <description>On the server there are other network services (proxy and mail for about 1000 users, web), so the network is stressed. The bit rate is about 2 Mb/s during business hours. The NICs used are the Broadcom Gigabit NC7781 embedded on the main board.</description>
      <pubDate>Tue, 13 Sep 2005 04:18:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925485#M85197</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-13T04:18:11Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925486#M85198</link>
      <description>Properly bonded NICs should end up with one single IP address.&lt;BR /&gt;&lt;BR /&gt;I agree with Vitaly Karasik on this one. I have built several servers with bonded NICs in the US and none of them has exhibited this behavior.&lt;BR /&gt;&lt;BR /&gt;The network connectivity problem may come from HOW you did the bonding.&lt;BR /&gt;&lt;BR /&gt;I'd like to see the output (altered if need be) from&lt;BR /&gt;&lt;BR /&gt;ifconfig bond0&lt;BR /&gt;&lt;BR /&gt;Output from ethtool and other utilities may also explain the problem.&lt;BR /&gt;&lt;BR /&gt;There may be a problem in the network config files of the two eth cards, or in the ifcfg-bond0 config file. Also, there is an entry I insert into the /etc/init.d/network script that is needed to make this work right.&lt;BR /&gt;&lt;BR /&gt;Please post the procedure or cookbook that you used.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 13 Sep 2005 04:50:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925486#M85198</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2005-09-13T04:50:49Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925487#M85199</link>
      <description>Right now bond0 is down, so ifconfig does not show it.&lt;BR /&gt;Anyway, in /etc/sysconfig/network I have this configuration for bond0 (when active):&lt;BR /&gt;BOOTPROTO='static'&lt;BR /&gt;BROADCAST='159.213.51.255'&lt;BR /&gt;IPADDR='159.213.51.131'&lt;BR /&gt;NETMASK='255.255.255.0'&lt;BR /&gt;MTU=''&lt;BR /&gt;REMOTE_IPADDR=''&lt;BR /&gt;STARTMODE='onboot'&lt;BR /&gt;BONDING_MASTER='yes'&lt;BR /&gt;BONDING_MODULE_OPTS='miimon=100 mode=active-backup'&lt;BR /&gt;BONDING_SLAVE0='eth0'&lt;BR /&gt;BONDING_SLAVE1='eth1'&lt;BR /&gt;&lt;BR /&gt;and for eth0:&lt;BR /&gt;BOOTPROTO='none'&lt;BR /&gt;STARTMODE='off'&lt;BR /&gt;UNIQUE='GA8e.dR48ZsvS6aD'&lt;BR /&gt;_nm_name='bus-pci-0000:02:01.0'&lt;BR /&gt;&lt;BR /&gt;and for eth1:&lt;BR /&gt;BOOTPROTO='none'&lt;BR /&gt;STARTMODE='off'&lt;BR /&gt;UNIQUE='LHB6.dR48ZsvS6aD'&lt;BR /&gt;_nm_name='bus-pci-0000:02:02.0'&lt;BR /&gt;&lt;BR /&gt;ifconfig now gives:&lt;BR /&gt;ulisse:~ # ifconfig&lt;BR /&gt;eth0      Link encap:Ethernet  HWaddr 00:0D:9D:4D:F4:37&lt;BR /&gt;          inet addr:159.213.51.131  Bcast:159.213.51.255  Mask:255.255.255.0&lt;BR /&gt;          inet6 addr: fe80::20d:9dff:fe4d:f437/64 Scope:Link&lt;BR /&gt;          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1&lt;BR /&gt;          RX packets:87728714 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:95781093 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;          collisions:0 txqueuelen:1000&lt;BR /&gt;          RX bytes:3402984768 (3245.3 Mb)  TX bytes:1846716390 (1761.1 Mb)&lt;BR /&gt;          Interrupt:29&lt;BR /&gt;&lt;BR /&gt;lo        Link encap:Local Loopback&lt;BR /&gt;          inet addr:127.0.0.1  Mask:255.0.0.0&lt;BR /&gt;          inet6 addr: ::1/128 Scope:Host&lt;BR /&gt;          UP LOOPBACK RUNNING  MTU:16436  Metric:1&lt;BR /&gt;          RX packets:22666404 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:22666404 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;          collisions:0 
txqueuelen:0&lt;BR /&gt;          RX bytes:2995445058 (2856.6 Mb)  TX bytes:2995445058 (2856.6 Mb)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;As an example, I have another ProLiant (DL580) with a similar configuration. The output of ifconfig is:&lt;BR /&gt;dl580n1:~ # ifconfig&lt;BR /&gt;bond0     Link encap:Ethernet  HWaddr 00:02:A5:4F:5E:32&lt;BR /&gt;          inet addr:172.18.10.25  Bcast:172.18.10.255  Mask:255.255.255.0&lt;BR /&gt;          inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link&lt;BR /&gt;          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1&lt;BR /&gt;          RX packets:291918978 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:259961188 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;          collisions:0 txqueuelen:0&lt;BR /&gt;          RX bytes:792101271 (755.4 Mb)  TX bytes:3625147679 (3457.2 Mb)&lt;BR /&gt;&lt;BR /&gt;eth0      Link encap:Ethernet  HWaddr 00:02:A5:4F:5E:32&lt;BR /&gt;          inet addr:172.18.10.25  Bcast:172.18.10.255  Mask:255.255.255.0&lt;BR /&gt;          inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link&lt;BR /&gt;          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1&lt;BR /&gt;          RX packets:284130508 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:259961185 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;          collisions:0 txqueuelen:1000&lt;BR /&gt;          RX bytes:117464164 (112.0 Mb)  TX bytes:3625147457 (3457.2 Mb)&lt;BR /&gt;          Base address:0x5000 Memory:f7fe0000-f8000000&lt;BR /&gt;&lt;BR /&gt;eth1      Link encap:Ethernet  HWaddr 00:02:A5:4F:5E:32&lt;BR /&gt;          inet addr:172.18.10.25  Bcast:172.18.10.255  Mask:255.255.255.0&lt;BR /&gt;          inet6 addr: fe80::202:a5ff:fe4f:5e32/64 Scope:Link&lt;BR /&gt;          UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1&lt;BR /&gt;          RX packets:7788470 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:3 errors:0 dropped:0 overruns:0 
carrier:0&lt;BR /&gt;          collisions:0 txqueuelen:1000&lt;BR /&gt;          RX bytes:674637107 (643.3 Mb)  TX bytes:222 (222.0 b)&lt;BR /&gt;          Base address:0x5040 Memory:f7f60000-f7f80000&lt;BR /&gt;&lt;BR /&gt;lo        Link encap:Local Loopback&lt;BR /&gt;          inet addr:127.0.0.1  Mask:255.0.0.0&lt;BR /&gt;          inet6 addr: ::1/128 Scope:Host&lt;BR /&gt;          UP LOOPBACK RUNNING  MTU:16436  Metric:1&lt;BR /&gt;          RX packets:328106 errors:0 dropped:0 overruns:0 frame:0&lt;BR /&gt;          TX packets:328106 errors:0 dropped:0 overruns:0 carrier:0&lt;BR /&gt;          collisions:0 txqueuelen:0&lt;BR /&gt;          RX bytes:69468287 (66.2 Mb)  TX bytes:69468287 (66.2 Mb)&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Sep 2005 05:05:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925487#M85199</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-13T05:05:38Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925488#M85200</link>
      <description>Is the discontinuity problem only with the DNS service?&lt;BR /&gt;&lt;BR /&gt;Do other services work fine?&lt;BR /&gt;&lt;BR /&gt;If you run the named server in the foreground, do you get any output?&lt;BR /&gt;&lt;BR /&gt;Try configuring these options:&lt;BR /&gt;&lt;BR /&gt;options {&lt;BR /&gt;  listen-on { bond_ip_address; };&lt;BR /&gt;  interface-interval 0;&lt;BR /&gt;};</description>
      <pubDate>Tue, 13 Sep 2005 07:59:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925488#M85200</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-09-13T07:59:53Z</dc:date>
    </item>
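Ivan's suggestion above amounts to pinning named to the bond's address and disabling BIND's periodic interface rescan. A minimal sketch of how those options might sit in a full /etc/named.conf (the directory path is an illustrative placeholder, and the listen-on address is the bond0 IP quoted earlier in the thread):

```
// /etc/named.conf -- sketch only; the directory path is a placeholder
options {
    directory "/var/lib/named";
    listen-on { 159.213.51.131; };  // bind only to the bond0 address
    interface-interval 0;           // 0 disables the periodic interface scan
};
```

With interface-interval set to 0, named stops re-enumerating network interfaces on a timer, which is the behavior the thread later identifies as conflicting with the bonding setup.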
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925489#M85201</link>
      <description>The discontinuity is at the network level and impacts all services.&lt;BR /&gt;I didn't try to run the daemon in the foreground, nor with a debug level. Tomorrow I'll run this test.&lt;BR /&gt;&lt;BR /&gt;Can you specify where these configuration options should be written?&lt;BR /&gt;Thank you</description>
      <pubDate>Tue, 13 Sep 2005 08:14:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925489#M85201</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-13T08:14:21Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925490#M85202</link>
      <description>Daniele, &lt;BR /&gt;&lt;BR /&gt;please don't take offense, but... does your switch know about the bonding configuration? It appears not!&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Sep 2005 22:28:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925490#M85202</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-09-13T22:28:28Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925491#M85203</link>
      <description>I am not the network administrator, but I believe the problem is not there, because:&lt;BR /&gt;1 - the two NICs on the server are attached to two different L3 switches, with OSPF and other software detecting redundant paths&lt;BR /&gt;2 - bonding is OK without the named daemon&lt;BR /&gt;3 - another server with the same bonding, attached to the same switches, is working OK&lt;BR /&gt;4 - bonding is working in active-backup mode, so it keeps one NIC active and one NIC inactive; when the link on the working NIC goes down, it puts the other NIC to work.&lt;BR /&gt;&lt;BR /&gt;Does that make sense?</description>
      <pubDate>Wed, 14 Sep 2005 06:47:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925491#M85203</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-14T06:47:51Z</dc:date>
    </item>
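The active-backup behavior described in point 4 can be confirmed from the bonding driver's status file, which reports which slave currently carries traffic. A sketch of what to look for (interface names taken from the thread; the exact fields vary by kernel version):

```
ulisse:~ # cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
...
```

The "Currently Active Slave" line makes it easy to verify that exactly one NIC is passing traffic, and that failover actually switched slaves after a link drop.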
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925492#M85204</link>
      <description>Makes sense, but it won't work as far as I can tell...&lt;BR /&gt;&lt;BR /&gt;Talk to the net admin: even with one NIC in fallback mode in the bonding config, the second NIC might be visible if the switch doesn't know about the setup that's been done.&lt;BR /&gt;&lt;BR /&gt;I don't have much practice with this, but I think splitting a host across switches needs either switch-crossing trunk ability (large switches have that) or turning to STP.</description>
      <pubDate>Wed, 14 Sep 2005 07:40:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925492#M85204</guid>
      <dc:creator>Florian Heigl (new acc)</dc:creator>
      <dc:date>2005-09-14T07:40:27Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925493#M85205</link>
      <description>What Florian says makes sense, but will the switch know the MAC address if the bonding is in active/passive mode?&lt;BR /&gt;&lt;BR /&gt;I think the switch learns the MAC when an IP is active on the port.&lt;BR /&gt;&lt;BR /&gt;BTW, the options should be configured in the /etc/named.conf file.</description>
      <pubDate>Wed, 14 Sep 2005 09:57:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925493#M85205</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-09-14T09:57:12Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925494#M85206</link>
      <description>I am now testing with the suggested options in named.conf. So far it looks good. I'll post an update after more testing.&lt;BR /&gt;Thank you for now</description>
      <pubDate>Thu, 15 Sep 2005 05:38:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925494#M85206</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-15T05:38:11Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925495#M85207</link>
      <description>I believe the problem is solved, since I have had no trouble since Wednesday. I guess the solution was configuring the named daemon not to autodiscover network interfaces; maybe its feature of operating on all available connections has problems with bonding (in fact, the ifconfig command reports bond0, but also eth0 and eth1, as operative and configured).&lt;BR /&gt;Thank you for the sharp suggestions</description>
      <pubDate>Mon, 19 Sep 2005 05:22:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925495#M85207</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-19T05:22:42Z</dc:date>
    </item>
    <item>
      <title>Re: incompatibility between named dns and bonding</title>
      <link>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925496#M85208</link>
      <description>As stated in my last reply.</description>
      <pubDate>Mon, 19 Sep 2005 05:25:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/incompatibility-between-named-dns-and-bonding/m-p/4925496#M85208</guid>
      <dc:creator>Daniele Bernazzi</dc:creator>
      <dc:date>2005-09-19T05:25:34Z</dc:date>
    </item>
  </channel>
</rss>

