<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Dropped packets, Frame errors, rxbds_empty, rx_discards in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481033#M82279</link>
    <description>Details&lt;BR /&gt;&lt;BR /&gt;machine: HP ProLiant SE1210&lt;BR /&gt;NIC: Tigon3 rev a200 PCI Express 10/100/1000Base-T Ethernet&lt;BR /&gt;driver: tg3&lt;BR /&gt;version: 3.93&lt;BR /&gt;firmware-version: 5722-v3.07, ASFIPMI v6.02&lt;BR /&gt;network config: 1000Mb/s full duplex, autoneg on&lt;BR /&gt;&lt;BR /&gt;Issue&lt;BR /&gt;&lt;BR /&gt;We are experiencing a high volume of TCP retransmits when received network traffic approaches 200Mb/s. When retransmits occur, the ifconfig drop and frame counters increment rapidly, as do the ethtool rxbds_empty and rx_discards statistics.&lt;BR /&gt;&lt;BR /&gt;We tried enabling hardware flow control with the ethtool -A option, and we do see pause traffic, but the drops, frame errors, rxbds_empty and rx_discards continue.&lt;BR /&gt;&lt;BR /&gt;This condition is crippling throughput.&lt;BR /&gt;&lt;BR /&gt;Any ideas?</description>
    <pubDate>Mon, 17 Aug 2009 20:11:59 GMT</pubDate>
    <dc:creator>Jack Kidwell</dc:creator>
    <dc:date>2009-08-17T20:11:59Z</dc:date>
    <item>
      <title>Dropped packets, Frame errors, rxbds_empty, rx_discards</title>
      <link>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481033#M82279</link>
      <description>Details&lt;BR /&gt;&lt;BR /&gt;machine: HP ProLiant SE1210&lt;BR /&gt;NIC: Tigon3 rev a200 PCI Express 10/100/1000Base-T Ethernet&lt;BR /&gt;driver: tg3&lt;BR /&gt;version: 3.93&lt;BR /&gt;firmware-version: 5722-v3.07, ASFIPMI v6.02&lt;BR /&gt;network config: 1000Mb/s full duplex, autoneg on&lt;BR /&gt;&lt;BR /&gt;Issue&lt;BR /&gt;&lt;BR /&gt;We are experiencing a high volume of TCP retransmits when received network traffic approaches 200Mb/s. When retransmits occur, the ifconfig drop and frame counters increment rapidly, as do the ethtool rxbds_empty and rx_discards statistics.&lt;BR /&gt;&lt;BR /&gt;We tried enabling hardware flow control with the ethtool -A option, and we do see pause traffic, but the drops, frame errors, rxbds_empty and rx_discards continue.&lt;BR /&gt;&lt;BR /&gt;This condition is crippling throughput.&lt;BR /&gt;&lt;BR /&gt;Any ideas?</description>
      <pubDate>Mon, 17 Aug 2009 20:11:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481033#M82279</guid>
      <dc:creator>Jack Kidwell</dc:creator>
      <dc:date>2009-08-17T20:11:59Z</dc:date>
    </item>
    <item>
      <title>Re: Dropped packets, Frame errors, rxbds_empty, rx_discards</title>
      <link>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481034#M82280</link>
      <description>200Mb/s is pretty high traffic.&lt;BR /&gt;I suggest you check:&lt;BR /&gt;- how many packets/sec you have; in fact, network throughput mostly depends on packets/sec, not bytes/sec&lt;BR /&gt;- your Linux version and patch level&lt;BR /&gt;- system utilization&lt;BR /&gt;&lt;BR /&gt;Vitaly</description>
      <pubDate>Tue, 18 Aug 2009 06:50:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481034#M82280</guid>
      <dc:creator>Vitaly Karasik_1</dc:creator>
      <dc:date>2009-08-18T06:50:37Z</dc:date>
    </item>
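Vitaly's first check above (packets/sec rather than bytes/sec) can be sketched as a quick shell measurement against the kernel's sysfs counters. This is a minimal sketch: "lo" is used here only so the snippet runs anywhere; on the affected box, substitute the tg3 interface name (e.g. eth0).

```shell
# Measure received packets/sec by sampling the sysfs rx_packets counter
# twice, one second apart, and printing the difference.
IFACE=lo    # placeholder: substitute the tg3 interface, e.g. eth0
p1=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
sleep 1
p2=$(cat /sys/class/net/$IFACE/statistics/rx_packets)
echo "$((p2 - p1)) packets/sec received on $IFACE"
```

The same counters are what `sar -n DEV` and `ifconfig` report; sampling them directly just avoids any dependency on extra tooling.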
    <item>
      <title>Re: Dropped packets, Frame errors, rxbds_empty, rx_discards</title>
      <link>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481035#M82281</link>
      <description>When working, the box handles 70,000 packets/s with throughput of 151 Mb/s, and exhibits errors when packets/s peaks at 118,000 and throughput is 202 Mb/s.&lt;BR /&gt;&lt;BR /&gt;uname -a:&lt;BR /&gt;Linux hostname 2.6.18-128.1.16.el5 #1 SMP Tue Jun 30 06:07:26 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux&lt;BR /&gt;&lt;BR /&gt;/etc/redhat-release:&lt;BR /&gt;CentOS release 5.3 (Final)&lt;BR /&gt;&lt;BR /&gt;uptime:&lt;BR /&gt; 09:29:26 up 3 days, 19:21,  3 users,  load average: 5.18, 5.01, 5.04&lt;BR /&gt;&lt;BR /&gt;CPUs:&lt;BR /&gt;eight (8) Intel(R) Xeon(R) E5430 @ 2.66GHz&lt;BR /&gt;&lt;BR /&gt;MemTotal:&lt;BR /&gt;16,443,824 kB&lt;BR /&gt;&lt;BR /&gt;Seems like a capable machine, but the NIC is a choke point.&lt;BR /&gt;&lt;BR /&gt;Do you know what rxbds_empty is? Looking at the source code, it appears to count events on a circular ring of receive buffer descriptors (BDs). Could this be a resource depletion issue?</description>
      <pubDate>Tue, 18 Aug 2009 13:19:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/dropped-packets-frame-errors-rxbds-empty-rx-discards/m-p/4481035#M82281</guid>
      <dc:creator>Jack Kidwell</dc:creator>
      <dc:date>2009-08-18T13:19:27Z</dc:date>
    </item>
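If rxbds_empty does reflect the RX buffer descriptor ring running dry, a read-only check of the ring sizes is a reasonable next step. A hedged sketch, assuming ethtool is available and using a placeholder interface name; the fallbacks make it safe to run as-is:

```shell
# Show ring sizes and the discard counters for a placeholder interface.
# If the "RX" value is below the pre-set "RX max", growing the ring with
# "ethtool -G $IFACE rx N" (as root) may relieve the drops.
IFACE=eth0    # placeholder: substitute the tg3 interface
if command -v ethtool >/dev/null; then
  ethtool -g "$IFACE" 2>/dev/null || echo "cannot query $IFACE"
  ethtool -S "$IFACE" 2>/dev/null | grep -E 'rxbds_empty|rx_discards' || true
else
  echo "ethtool not installed"
fi
```

Note that `ethtool -a`/`-A` (pause parameters) and `ethtool -g`/`-G` (ring parameters) are separate knobs: flow control asks the sender to back off, while a larger RX ring gives the host more slack before descriptors run out.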
  </channel>
</rss>