03-09-2010 05:30 AM
RX packets drop at bonding interface
Hi All,
We are using bonding (mode=1, active-backup), which creates the bond2 interface enslaving eth4 and eth5.
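For reference, the relevant bonding parameters look roughly like this (values reconstructed from the /proc/net/bonding/bond2 output further down; the real configuration also covers bond0/bond1 and the exact file location is distribution-dependent, so treat this only as a sketch):
# /etc/modprobe.conf (or modprobe.d) -- illustrative only
alias bond2 bonding
options bonding mode=1 miimon=100 updelay=200 downdelay=200
# slaves are attached when the interface comes up, e.g.:
# ifenslave bond2 eth4 eth5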
Every time, packets drop on bond2 for a few hours, and running the offline self-test resolves the problem, i.e. packets are seen on this interface again. The self-test commands for eth4 and eth5 are shown below:
root@payload3:~# ethtool -t eth4
The test result is PASS
The test extra info:
Register test (offline) 0
Eeprom test (offline) 0
Interrupt test (offline) 0
Loopback test (offline) 0
Link test (on/offline) 0
root@payload3:~# ethtool -t eth5
The test result is FAIL
The test extra info:
Register test (offline) 0
Eeprom test (offline) 0
Interrupt test (offline) 0
Loopback test (offline) 0
Link test (on/offline) 1
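For completeness, per-driver counters can also be pulled while the problem is active; the counter names vary by NIC driver, so this is only a sketch:
# look for driver-level drop/error counters on both slaves
ethtool -S eth4 | grep -iE 'drop|err|miss'
ethtool -S eth5 | grep -iE 'drop|err|miss'
# link and negotiation state of the active slave
ethtool eth4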
A tcpdump capture was taken on bond2 while the issue was occurring, but no packets were captured at that point in time.
root@payload1:~# ifconfig -a
bond2 Link encap:Ethernet HWaddr 00:80:42:1D:64:2C
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING PROMISC MASTER MULTICAST MTU:1500 Metric:1
RX packets:11381494 errors:0 dropped:3877327 overruns:0 frame:0
TX packets:19144 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:943464682 (899.7 MiB) TX bytes:1302504 (1.2 MiB)
eth4 Link encap:Ethernet HWaddr 00:80:42:1D:64:2C
inet6 addr: fe80::280:42ff:fe1d:642c/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:11381494 errors:0 dropped:3877327 overruns:0 frame:0
TX packets:19144 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:943464682 (899.7 MiB) TX bytes:1302504 (1.2 MiB)
Base address:0x3000 Memory:d8400000-d8420000
eth5 Link encap:Ethernet HWaddr 00:80:42:1D:64:2C
UP BROADCAST SLAVE MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Base address:0x3040 Memory:d8420000-d8440000
root@payload1:~# cat /proc/net/bonding/bond2
Ethernet Channel Bonding Driver: v2.6.3 (June 8, 2005)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth4
MII Status: up
Link Failure Count: 4
Permanent HW addr: 00:80:42:1d:64:2c
Slave Interface: eth5
MII Status: down
Link Failure Count: 0
Permanent HW addr: 00:80:42:1d:64:2d
root@payload1:~# netstat -i
Kernel Interface table
Iface   MTU Met    RX-OK RX-ERR  RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0  1500   0 22431248      0       0      0 24034375      0      0      0 BMmRU
bond1  1500   0 20027975      0       0      0 15809489      0      0      0 BMmRU
bond2  1500   0 11381494      0 3877632      0    19144      0      0      0 BMPmRU
eth0   1500   0 22435724      0       0      0 24034360      0      0      0 BMRU
eth1   1500   0 20027969      0       0      0 15809488      0      0      0 BMRU
eth3   1500   0        0      0       0      0        0      0      0      0 BMU
eth4   1500   0 11381494      0 3877632      0    19144      0      0      0 BMsRU
eth5   1500   0        0      0       0      0        0      0      0      0 BMsU
eth0.  1500   0 18878772      0       0      0 20481895      0      0      0 BMsRU
eth0.  1500   0  3552476      0       0      0  3552480      0      0      0 BMsRU
eth1.  1500   0 16473477      0       0      0 12257008      0      0      0 BMsRU
eth1.  1500   0  3554498      0       0      0  3552481      0      0      0 BMsRU
lo    16436   0 11069821      0       0      0 11069821      0      0      0 LRU
We would like to know what is wrong with the interface, why it is not receiving any packets, and how running this command resolves the issue for us every time.
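For reference, a small loop we could leave running to catch the next occurrence (paths assume the standard sysfs/procfs layout, so this is only a sketch):
# log bond2's RX drop counter and the currently active slave once a minute
while true; do
    drops=$(cat /sys/class/net/bond2/statistics/rx_dropped)
    active=$(awk -F': ' '/Currently Active Slave/ {print $2}' /proc/net/bonding/bond2)
    echo "$(date '+%F %T') rx_dropped=$drops active_slave=$active"
    sleep 60
done >> /var/log/bond2-drops.log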
Thanks,
MKS
2 REPLIES
03-09-2010 05:53 AM
Re: RX packets drop at bonding interface
Shalom MKS,
Packets are going to drop once in a while whether the interface is bonded or not.
If it is significant, or performance is impacted, then look into the issue.
Same as with anything else:
1) Cable could be bad.
2) NIC port could be bad.
3) Switch may not handle the bonding well.
4) Switch might need a firmware or configuration update.
5) You may wish to try a different bonding mode in modprobe.conf (see the sketch below).
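For point 5, the change would look something like this (interface name taken from the original post; note that on this bonding driver version module options apply to every bond unless per-bond handling is used, and mode 4 also needs matching LACP configuration on the switch, so treat this only as a sketch):
# /etc/modprobe.conf -- illustrative only, adjust for your distribution
alias bond2 bonding
# e.g. 802.3ad (mode 4) instead of active-backup (mode 1)
options bonding mode=4 miimon=100 lacp_rate=1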
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
03-09-2010 10:16 PM
Re: RX packets drop at bonding interface
Thanks SEP.
Why are packets seen again when we run the ethtool self-test in offline mode on eth4 during the problem window? As I understand it, the self-test resets the interface at that point and the problem goes away. Doesn't this rule out a hardware problem on this interface?
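For reference, one way to test the reset theory next time without running the full offline self-test (a sketch only, using standard iproute2/ethtool commands):
# restart autonegotiation on the active slave without taking it down
ethtool -r eth4
# or bounce only the affected slave -- note eth5 shows MII down above,
# so this would interrupt traffic on bond2 until eth4 comes back
ip link set dev eth4 down
sleep 2
ip link set dev eth4 up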
Also, could this issue be caused by the far end using a bonding mode other than mode 1?
Your comments are highly appreciated.
Thanks in advance.
MKS.