<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic netperf - close to 100% packet drop using UDP in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234829#M81785</link>
    <description>When I run netperf with UDP_STREAM, I am getting almost 100% packet drop as shown below:&lt;BR /&gt;netperf -t UDP_STREAM -H &lt;IP&gt; -l 20 -- -s 128K -S 128K -m 32K -M 32K&lt;BR /&gt;&lt;BR /&gt;Socket  Message  Elapsed      Messages&lt;BR /&gt;Size    Size     Time         Okay Errors   Throughput&lt;BR /&gt;bytes   bytes    secs            #      #   10^6bits/sec&lt;BR /&gt;&lt;BR /&gt;262142   32768   20.00       72916      0     955.63&lt;BR /&gt;262142           20.00           4              0.05&lt;BR /&gt;&lt;BR /&gt;This is between two Linux boxes in the same subnet.&lt;BR /&gt;</description>
    <pubDate>Wed, 16 Jul 2008 10:45:03 GMT</pubDate>
    <dc:creator>Mahesh Acharya</dc:creator>
    <dc:date>2008-07-16T10:45:03Z</dc:date>
    <item>
      <title>netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234829#M81785</link>
      <description>When I run netperf with UDP_STREAM, I am getting almost 100% packet drop as shown below:&lt;BR /&gt;netperf -t UDP_STREAM -H &lt;IP&gt; -l 20 -- -s 128K -S 128K -m 32K -M 32K&lt;BR /&gt;&lt;BR /&gt;Socket  Message  Elapsed      Messages&lt;BR /&gt;Size    Size     Time         Okay Errors   Throughput&lt;BR /&gt;bytes   bytes    secs            #      #   10^6bits/sec&lt;BR /&gt;&lt;BR /&gt;262142   32768   20.00       72916      0     955.63&lt;BR /&gt;262142           20.00           4              0.05&lt;BR /&gt;&lt;BR /&gt;This is between two Linux boxes in the same subnet.&lt;BR /&gt;</description>
      <pubDate>Wed, 16 Jul 2008 10:45:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234829#M81785</guid>
      <dc:creator>Mahesh Acharya</dc:creator>
      <dc:date>2008-07-16T10:45:03Z</dc:date>
    </item>
    <item>
      <title>Re: netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234830#M81786</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Two Linux boxes, naturally an HP-UX question.&lt;BR /&gt;&lt;BR /&gt;Possible causes:&lt;BR /&gt;1) Network congestion; collisions on the LAN. Talk to the networking people.&lt;BR /&gt;2) Bad application. Talk to the people who wrote the application.&lt;BR /&gt;3) Boxes are overloaded.&lt;BR /&gt;4) Duplicate IPs on the network (probably not, since connectivity would stop completely).&lt;BR /&gt;5) Network card is going bad.&lt;BR /&gt;6) Problem with port settings or physical network infrastructure.&lt;BR /&gt;&lt;BR /&gt;Suggestions:&lt;BR /&gt;1) Try using tcpdump or wireshark for some packet analysis.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Wed, 16 Jul 2008 11:35:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234830#M81786</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2008-07-16T11:35:04Z</dc:date>
    </item>
    <item>
      <title>Re: netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234831#M81787</link>
      <description>But a similar test with TCP_STREAM shows a throughput of 94 Mb/sec:&lt;BR /&gt;netperf -t TCP_STREAM -H 16.138.181.45 -l 20 -- -s 256K -S 256K -m 128K -M 128K&lt;BR /&gt;&lt;BR /&gt;Recv   Send    Send&lt;BR /&gt;Socket Socket  Message  Elapsed&lt;BR /&gt;Size   Size    Size     Time     Throughput&lt;BR /&gt;bytes  bytes   bytes    secs.    10^6bits/sec&lt;BR /&gt;&lt;BR /&gt;262142 262142 131072    20.02      94.13&lt;BR /&gt;</description>
      <pubDate>Wed, 16 Jul 2008 11:37:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234831#M81787</guid>
      <dc:creator>Mahesh Acharya</dc:creator>
      <dc:date>2008-07-16T11:37:19Z</dc:date>
    </item>
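A quick back-of-envelope check (an added illustration, not part of the thread; the 1500-byte MTU and the 12-byte TCP timestamp option are assumptions about the stack's defaults): a saturated 100BASE-T link tops out almost exactly at the 94 Mbit/s reported above.

```python
# Rough ceiling for TCP goodput on 100BASE-T. The Ethernet framing
# constants are standard; the 12 bytes of TCP timestamp options are
# an assumed default for these Linux stacks.
LINK_BPS = 100e6
MSS = 1500 - 20 - 20 - 12            # MTU minus IP hdr, TCP hdr, timestamps
WIRE_BYTES = 1500 + 14 + 4 + 8 + 12  # frame + Ethernet hdr, FCS, preamble, IFG
goodput_mbps = LINK_BPS * MSS / WIRE_BYTES / 1e6
print(round(goodput_mbps, 2))        # about 94.15, vs. the 94.13 measured
```

The closeness of 94.15 to the measured 94.13 is consistent with a 100 Mbit link somewhere on the path being the bottleneck.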
    <item>
      <title>Re: netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234832#M81788</link>
      <description>I am the people who wrote the application, and I can attest that while it may occasionally be nasty to a network, it isn't bad.  At least not for a benchmark :)&lt;BR /&gt;&lt;BR /&gt;Netperf establishes a TCP "control" connection in addition to whatever data "connection" (in this case UDP endpoints) is used.  If there were duplicate IPs, likely as not it would have affected the establishment of the control connection and there would have been no results reported at all.&lt;BR /&gt;&lt;BR /&gt;Box overload, at least at the level of CPU utilization, can be checked by adding the -c and -C options to the first part of the command line, which will report CPU utilization.&lt;BR /&gt;&lt;BR /&gt;My first guess is that if Mahesh were to check the netstat stats for UDP on the receiving side he would see lots of UDP errors.  That "4" in the last line of the UDP_STREAM output suggests that there were four receives and no others.&lt;BR /&gt;&lt;BR /&gt;Linux has intra-stack flow control for UDP, so the sending side reporting 955 Mbit/s implies the sender was on a gigabit link.  That the subsequent TCP_STREAM test shows only 94 Mbit/s implies that the receiver, or something between the sender and the receiver, is only 100BT.  That would lead to the second guess: checking stats on the switch between the two machines will show a lot of dropped traffic there.&lt;BR /&gt;&lt;BR /&gt;And since the sends were 32768 bytes each, the IP datagrams carrying the UDP datagrams will be fragmented, and I'll wager that many of those fragments were lost at that 1G-to-100BT point, which would lead to lots of IP fragmentation reassembly failures at the receiver; I think that can be checked by looking at the IP statistics with netstat on the receiver.&lt;BR /&gt;&lt;BR /&gt;Having been thinking as I type: if it is indeed the IP fragmentation business, then it is rather less likely that the receiver's UDP stats will show errors.&lt;BR /&gt;&lt;BR /&gt;One other test to try would be a UDP_STREAM test with a send size of, say, 1024 or 1472 bytes to avoid IP fragmentation.  Then, if it is the speed mismatch leading to issues with fragmentation, you will probably see the sender still sending near a Gbit/s but the receiver actually receiving at near 100 Mbit/s.</description>
      <pubDate>Thu, 17 Jul 2008 16:15:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234832#M81788</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-07-17T16:15:31Z</dc:date>
    </item>
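The fragmentation arithmetic above can be sketched out (an added illustration, not part of Rick's post; the 1500-byte Ethernet MTU and the 10% fragment-loss rate are assumptions for illustration):

```python
import math

MTU = 1500          # assumed Ethernet MTU, bytes
IP_HDR = 20         # IPv4 header without options
UDP_HDR = 8         # UDP header

send_size = 32768   # the -m 32K send size from the test

# The UDP datagram (payload plus UDP header) is carried in IP fragments.
# Each fragment carries up to MTU - IP_HDR bytes of it; 1480 is a
# multiple of 8, as fragment offsets require.
udp_datagram = send_size + UDP_HDR
per_fragment = MTU - IP_HDR
fragments = math.ceil(udp_datagram / per_fragment)
print(fragments)    # 23 fragments per 32 KB send

# Losing any one fragment discards the whole 32 KB datagram at
# reassembly, so even modest loss at a 1G-to-100BT bottleneck
# wipes out nearly every datagram.
loss_per_fragment = 0.10
p_datagram_ok = (1 - loss_per_fragment) ** fragments
print(round(p_datagram_ok, 3))   # about 0.089
```

With 23 fragments per send, a 10% fragment loss rate leaves under 9% of datagrams intact, which is why the receiver counts only a handful of successful receives.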
    <item>
      <title>Re: netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234833#M81789</link>
      <description>Thank you Rick for the detailed analysis.&lt;BR /&gt;&lt;BR /&gt;I was trying to compare the throughput of VM and non-VM systems. Below are the UDP_STREAM results for the VM and non-VM systems, both identical in terms of CPU, memory and swap space.&lt;BR /&gt;&lt;BR /&gt;I am not sure why the service demand is so high in the VM case. At least from the sender's (first line) perspective, the service demand should have been the same in both cases.&lt;BR /&gt;For smaller message sizes (less than 1472 bytes), the service demand is 30+, but for messages larger than 1472 bytes, it is very high as shown below.&lt;BR /&gt;&lt;BR /&gt;[root@RHEL4U5 vm]# netperf -t UDP_STREAM -H &lt;NONVM_IP&gt; -c -C -l 20 -- -m 8K&lt;BR /&gt;UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 16.138.181.41 (16.138.181.41) port 0 AF_INET : interval : demo&lt;BR /&gt;Socket  Message  Elapsed      Messages                   CPU      Service&lt;BR /&gt;Size    Size     Time         Okay Errors   Throughput   Util     Demand&lt;BR /&gt;bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB&lt;BR /&gt;&lt;BR /&gt;110592    8192   20.00      292533      0      958.4     7.44     2.546&lt;BR /&gt;109568           20.00      292465             958.2     24.26    2.074&lt;BR /&gt;&lt;BR /&gt;[root@RHEL4U5 vm]# netperf -t UDP_STREAM -H &lt;VM_IP&gt; -c -C -l 20 -- -m 8K&lt;BR /&gt;UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 16.138.181.45 (16.138.181.45) port 0 AF_INET : interval : demo&lt;BR /&gt;Socket  Message  Elapsed      Messages                   CPU      Service&lt;BR /&gt;Size    Size     Time         Okay Errors   Throughput   Util     Demand&lt;BR /&gt;bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB&lt;BR /&gt;&lt;BR /&gt;110592    8192   20.00      292535      0      958.4     7.27     38271.934&lt;BR /&gt;109568           20.00          19               0.1     20.56    27053.768&lt;BR /&gt;</description>
      <pubDate>Fri, 18 Jul 2008 04:35:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234833#M81789</guid>
      <dc:creator>Mahesh Acharya</dc:creator>
      <dc:date>2008-07-18T04:35:32Z</dc:date>
    </item>
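The exploding receive-side service demand above follows from the arithmetic: service demand is CPU time burned per KB actually delivered, so when the VM receiver gets only 19 messages the denominator collapses. A rough reconstruction from the receive-side lines (an added sketch; it assumes a single receive-side CPU and netperf's microseconds-per-KB definition, and is not the authoritative netperf formula):

```python
def service_demand_us_per_kb(cpu_util_pct, elapsed_s, messages, msg_bytes):
    """Approximate receive-side service demand: CPU microseconds
    consumed per KB (1024 bytes) of data actually delivered."""
    cpu_us = (cpu_util_pct / 100.0) * elapsed_s * 1e6
    kb = messages * msg_bytes / 1024.0
    return cpu_us / kb

# Non-VM receiver: 24.26% CPU, 292465 messages of 8192 bytes in 20 s
nonvm = service_demand_us_per_kb(24.26, 20.0, 292465, 8192)
print(round(nonvm, 3))   # about 2.074, matching the reported value

# VM receiver: 20.56% CPU but only 19 messages in 20 s
vm = service_demand_us_per_kb(20.56, 20.0, 19, 8192)
print(round(vm, 1))      # about 27052.6, close to the reported 27053.768
```

Roughly the same CPU burn divided by almost no delivered data is what turns 2.074 us/KB into tens of thousands of us/KB; the sender-side number explodes for the same reason once delivered (rather than sent) bytes enter the accounting.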
    <item>
      <title>Re: netperf - close to 100% packet drop using UDP</title>
      <link>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234834#M81790</link>
      <description>CPU utilization will be higher in a virtual machine guest than in a "bare iron" system because there is the overhead of the hypervisor.  Now, where things become "complicated" is in actually measuring CPU utilization in a guest.  What has been done elsewhere is run netperf in the guest, but look at the overall CPU util in the hypervisor.</description>
      <pubDate>Fri, 05 Dec 2008 01:36:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/netperf-close-to-100-packet-drop-using-udp/m-p/4234834#M81790</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2008-12-05T01:36:57Z</dc:date>
    </item>
  </channel>
</rss>

