<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Network Performance Gigabit ethernet in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942964#M542741</link>
    <description>Thanks for the info Hein.&lt;BR /&gt;&lt;BR /&gt;Here's where I am now. I did the ttcp test against the RP7400 and a Windows box and the test reported way better numbers. Even better when I increased the length (following your suggestion) of the bufs written to the NIC.&lt;BR /&gt;&lt;BR /&gt;So it now seems that there's something that I may need to tune/investigate more on the Sun machine side.&lt;BR /&gt;&lt;BR /&gt;You're correct, this is a minimalistic test intended to isolate basic functionality. The final use of these Gigabit cards will be NFS through a switch. For that purpose I may look into Jumbo Frames as you suggest.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards,&lt;BR /&gt;&lt;BR /&gt;Manuel</description>
    <pubDate>Mon, 12 Feb 2007 16:45:25 GMT</pubDate>
    <dc:creator>Manuel Urena</dc:creator>
    <dc:date>2007-02-12T16:45:25Z</dc:date>
    <item>
      <title>Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942959#M542736</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;I am seeing what I understand is degraded performance from 2 Gigabit Ethernet cards connected via a crossover cable. I am utilizing ttcp to test the cards.&lt;BR /&gt;&lt;BR /&gt;One Gigabit card is connected to a Sun box, the other to an RP7400.&lt;BR /&gt;&lt;BR /&gt;Below is the ttcp output:&lt;BR /&gt;&lt;BR /&gt;SUN SERVER&lt;BR /&gt;==========&lt;BR /&gt;root# ~/UXTools/ttcp.sun4 -r -s -l1500&lt;BR /&gt;ttcp-r: nbuf=1024, buflen=1500, port=2000&lt;BR /&gt;ttcp-r: socket&lt;BR /&gt;ttcp-r: accept&lt;BR /&gt;ttcp-r: 0.1user 1.8sys 0:04real 43% 0i+0d 0maxrss 0+0pf 3050+3124csw&lt;BR /&gt;ttcp-r: 150000000 bytes processed&lt;BR /&gt;ttcp-r:      2.03 CPU sec  =   72159.8 KB/cpu sec,     577278 Kbits/cpu sec&lt;BR /&gt;ttcp-r:   4.66284 real sec =   31415.3 KB/real sec,    251322 Kbits/sec&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;RP7400&lt;BR /&gt;======&lt;BR /&gt;root# ./ttcp -t -s -l1500 -n100000 172.16.1.31&lt;BR /&gt;ttcp-t: nbuf=100000, buflen=1500, port=2000&lt;BR /&gt;ttcp-t: socket&lt;BR /&gt;ttcp-t: connect&lt;BR /&gt;ttcp-t: 0.1user 1.1sys 0:04real 27% 0i+51d 23maxrss 0+0pf 3555+305csw&lt;BR /&gt;ttcp-t: 150000000 bytes processed&lt;BR /&gt;ttcp-t:      1.28 CPU sec  =    114441 KB/cpu sec,     915527 Kbits/cpu sec&lt;BR /&gt;ttcp-t:   4.66284 real sec =   31415.3 KB/real sec,    251322 Kbits/sec&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;As you can see, the throughput is around 250 Mbps / 31 MBps. 
I was expecting it to be at least 800 or 900 Mbps, or at least 100 MBps.&lt;BR /&gt;&lt;BR /&gt;Doing the same test utilizing Fast Ethernet interfaces yields speeds around 11 MBps, which is close to the theoretical limit of 12.5 MBps.&lt;BR /&gt;For gigabit Ethernet the theoretical limit is 125 MBps, or 1000 Mbps.&lt;BR /&gt;&lt;BR /&gt;Below is some more output to provide additional information:&lt;BR /&gt;&lt;BR /&gt;RP7400&lt;BR /&gt;======&lt;BR /&gt;root# lanadmin -x 3&lt;BR /&gt;Speed = 1000 Full-Duplex.&lt;BR /&gt;Autonegotiation = On.&lt;BR /&gt;&lt;BR /&gt;                      LAN INTERFACE STATUS DISPLAY&lt;BR /&gt;                       Mon, Feb 12,2007  11:06:29&lt;BR /&gt;&lt;BR /&gt;PPA Number                      = 3&lt;BR /&gt;Description                     = lan3 HP PCI 1000Base-T Release B.11.11.24&lt;BR /&gt;Type (value)                    = ethernet-csmacd(6)&lt;BR /&gt;MTU Size                        = 1500&lt;BR /&gt;Speed                           = 1000000000&lt;BR /&gt;Station Address                 = 0x1321ea67ec&lt;BR /&gt;Administration Status (value)   = up(1)&lt;BR /&gt;Operation Status (value)        = up(1)&lt;BR /&gt;Last Change                     = 47462254&lt;BR /&gt;Inbound Octets                  = 2757941744&lt;BR /&gt;Inbound Unicast Packets         = 182741651&lt;BR /&gt;Inbound Non-Unicast Packets     = 34105&lt;BR /&gt;Inbound Discards                = 0&lt;BR /&gt;Inbound Errors                  = 0&lt;BR /&gt;Inbound Unknown Protocols       = 11&lt;BR /&gt;Outbound Octets                 = 3155951020&lt;BR /&gt;Outbound Unicast Packets        = 194521462&lt;BR /&gt;Outbound Non-Unicast Packets    = 169&lt;BR /&gt;Outbound Discards               = 0&lt;BR /&gt;Outbound Errors                 = 0&lt;BR /&gt;Outbound Queue Length           = 0&lt;BR /&gt;Specific                        = 655367&lt;BR /&gt;Index                           = 2&lt;BR /&gt;Alignment Errors                = 0&lt;BR /&gt;FCS Errors                      
= 0&lt;BR /&gt;Single Collision Frames         = 0&lt;BR /&gt;Multiple Collision Frames       = 0&lt;BR /&gt;Deferred Transmissions          = 0&lt;BR /&gt;Late Collisions                 = 0&lt;BR /&gt;Excessive Collisions            = 0&lt;BR /&gt;Internal MAC Transmit Errors    = 0&lt;BR /&gt;Carrier Sense Errors            = 0&lt;BR /&gt;Frames Too Long                 = 0&lt;BR /&gt;Internal MAC Receive Errors     = 0&lt;BR /&gt;&lt;BR /&gt;SUN MACHINE&lt;BR /&gt;===========&lt;BR /&gt;root# kstat -p ce:0::'/collisions|framing|crc|code_violations|tx_late_collisions/'&lt;BR /&gt;&lt;BR /&gt;ce:0:ce0:code_violations        0&lt;BR /&gt;ce:0:ce0:collisions     0&lt;BR /&gt;ce:0:ce0:crc_err        0&lt;BR /&gt;ce:0:ce0:excessive_collisions   0&lt;BR /&gt;ce:0:ce0:late_collisions        0&lt;BR /&gt;&lt;BR /&gt;root# ndd /dev/ce adv_autoneg_cap&lt;BR /&gt;1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Any help will be appreciated,&lt;BR /&gt;&lt;BR /&gt;Manuel&lt;BR /&gt;</description>
      <pubDate>Mon, 12 Feb 2007 12:14:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942959#M542736</guid>
      <dc:creator>Manuel Urena</dc:creator>
      <dc:date>2007-02-12T12:14:52Z</dc:date>
    </item>
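As a yardstick for the expectations in the question above, the achievable TCP throughput on gigabit Ethernet at the standard 1500-byte MTU can be worked out from the framing overheads. A minimal sketch, assuming plain IPv4/TCP with no header options:

```python
# What TCP goodput can 1 Gb/s Ethernet deliver at a 1500-byte MTU?
# Standard Ethernet/TCP framing figures; no IP or TCP options assumed.

LINK_BPS = 1_000_000_000        # gigabit line rate
MTU = 1500                      # bytes of IP payload per frame
IP_TCP_HDR = 20 + 20            # IPv4 + TCP headers
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap

payload = MTU - IP_TCP_HDR      # TCP payload bytes carried per frame
wire = MTU + ETH_OVERHEAD       # bytes consumed on the wire per frame
goodput_bps = LINK_BPS * payload / wire

print(f"payload/frame: {payload} B, wire/frame: {wire} B")
print(f"max TCP goodput: {goodput_bps/1e6:.0f} Mbit/s "
      f"({goodput_bps/8/1e6:.1f} MB/s)")
```

So roughly 949 Mbit/s (about 118 MB/s) is the wire-level ceiling; the observed 251322 Kbits/sec is only about a quarter of that, which points at the hosts rather than the link.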
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942960#M542737</link>
      <description>First thing I'd be tempted to do is run a ttcp on the loopback interface on both machines, so you can judge the speed of the TCP/IP stack without actually touching the physical media.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Mon, 12 Feb 2007 12:27:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942960#M542737</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2007-02-12T12:27:46Z</dc:date>
    </item>
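Duncan's loopback check can be approximated without ttcp; the sketch below pushes ttcp-style fixed-size buffers over 127.0.0.1 and reports MB/s. This is a hypothetical Python stand-in, not the real tool; with ttcp itself it would be `ttcp -r -s` in one window and `ttcp -t -s 127.0.0.1` in another.

```python
# Measure TCP throughput over the loopback interface, so the stack is
# exercised without touching the physical media (per Duncan's suggestion).
import socket
import threading
import time

NBUF, BUFLEN = 1024, 8192          # ttcp-style: send NBUF buffers of BUFLEN bytes
buf = b"x" * BUFLEN

srv = socket.socket()
srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
received = 0

def sink():
    """Accept one connection and count every byte until the peer closes."""
    global received
    conn, _ = srv.accept()
    while chunk := conn.recv(65536):
        received += len(chunk)
    conn.close()

t = threading.Thread(target=sink)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
for _ in range(NBUF):
    cli.sendall(buf)
cli.close()
t.join()
srv.close()
elapsed = time.perf_counter() - start

print(f"{received} bytes in {elapsed:.3f}s = "
      f"{received/elapsed/1e6:.0f} MB/s over loopback")
```

If loopback on the Sun box already tops out near the numbers seen over the wire, the bottleneck is the stack or the CPU, not the NIC or cable.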
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942961#M542738</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Once again we are amazed that real life doesn't measure up to marketing hype.&lt;BR /&gt;&lt;BR /&gt;This is to be expected. You may get better results with a Cisco GB switch.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 12 Feb 2007 12:28:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942961#M542738</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-02-12T12:28:46Z</dc:date>
    </item>
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942962#M542739</link>
      <description>Are you sure you know what you are measuring?&lt;BR /&gt;&lt;BR /&gt;- "-l 1500" seems exactly wrong to me. A little too large to be small, and too small to be large. Would it not cause multiple packets per message? You are not using UDP (-u), so there will be some extra TCP overhead. Why not use the default 8192?&lt;BR /&gt;&lt;BR /&gt;- To see the real potential server-server performance of Gigabit, should you not be using JUMBO frames? That reduces the number of packets dramatically and with that the CPU time.&lt;BR /&gt;&lt;BR /&gt;- To some extent you are measuring CPU power, not network bandwidth. The combined systems seem to be more than 50% CPU bound during that test. See the Jumbo remark.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/783/jumbo_final.pdf" target="_blank"&gt;http://docs.hp.com/en/783/jumbo_final.pdf&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://www.cisco.com/warp/public/471/ttcp.html" target="_blank"&gt;http://www.cisco.com/warp/public/471/ttcp.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;Hein van den Heuvel&lt;BR /&gt;HvdH Performance Consulting</description>
      <pubDate>Mon, 12 Feb 2007 13:58:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942962#M542739</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-02-12T13:58:53Z</dc:date>
    </item>
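Hein's jumbo-frame argument can be put in numbers for the 150000000-byte transfer in the ttcp runs above. A sketch assuming a 9000-byte jumbo MTU (actual jumbo sizes vary by NIC) and 40 bytes of IPv4+TCP headers per packet:

```python
# How many frames does the 150 MB ttcp transfer take at the default MTU
# versus a 9000-byte jumbo MTU? Per-packet cost (interrupts, header
# processing) dominates when the CPU is the bottleneck.

TRANSFER = 150_000_000          # bytes moved in the ttcp runs
IP_TCP_HDR = 40                 # IPv4 + TCP headers per packet

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HDR
    frames = -(-TRANSFER // payload)   # ceiling division
    print(f"MTU {mtu}: {payload} B payload -> {frames} frames")
```

Jumbo frames cut the packet count, and with it the per-packet CPU work, by roughly a factor of six, which is exactly the lever Hein is pointing at for a CPU-bound test.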
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942963#M542740</link>
      <description>&amp;gt;&amp;gt; Are you sure you know what you are measuring? &lt;BR /&gt;&lt;BR /&gt;Oops, that came out a little more crude than I intended.&lt;BR /&gt;&lt;BR /&gt;We often take shortcuts in our measurements, to make them easier, or because we _think_ we'll make them faster, and so on. But in doing so we may lose track of what we really should be measuring.&lt;BR /&gt;&lt;BR /&gt;In this case... Is that gigabit link just in place to connect those servers, or is that a simplification of a target configuration?&lt;BR /&gt;Will the real config have a switch/router?&lt;BR /&gt;&lt;BR /&gt;What will be the dominant protocol on the wire? If it is NFS then Jumbo Frames will make a tremendous impact (as per the reference above) and any test without them is a waste of time.&lt;BR /&gt;&lt;BR /&gt;Or maybe the gigabit link will be used to connect two servers in an Oracle RAC setup, with lots of little packets for lock communication. In that case latency is more critical than throughput.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hein.</description>
      <pubDate>Mon, 12 Feb 2007 15:02:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942963#M542740</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-02-12T15:02:55Z</dc:date>
    </item>
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942964#M542741</link>
      <description>Thanks for the info Hein.&lt;BR /&gt;&lt;BR /&gt;Here's where I am now. I did the ttcp test against the RP7400 and a Windows box and the test reported way better numbers. Even better when I increased the length (following your suggestion) of the bufs written to the NIC.&lt;BR /&gt;&lt;BR /&gt;So it now seems that there's something that I may need to tune/investigate more on the Sun machine side.&lt;BR /&gt;&lt;BR /&gt;You're correct, this is a minimalistic test intended to isolate basic functionality. The final use of these Gigabit cards will be NFS through a switch. For that purpose I may look into Jumbo Frames as you suggest.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards,&lt;BR /&gt;&lt;BR /&gt;Manuel</description>
      <pubDate>Mon, 12 Feb 2007 16:45:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942964#M542741</guid>
      <dc:creator>Manuel Urena</dc:creator>
      <dc:date>2007-02-12T16:45:25Z</dc:date>
    </item>
    <item>
      <title>Re: Network Performance Gigabit ethernet</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942965#M542742</link>
      <description>Well, being mr netperf, I feel compelled to suggest netperf rather than ttcp, particularly if the mechanism it is using to get CPU utilization is only based on what the process consumed, which for networking is complete bunk.  But I digress...&lt;BR /&gt;&lt;BR /&gt;In broad handwaving terms, as a basic link technology, gigabit ethernet, like 100Base-T before it, does NOTHING to make data transfer any easier on the host than what came before.  It takes just as many CPU cycles to send a packet over Gigabit as it did over 100BT as it did over 10Base-T, and as it will over 10 Gigabit.  &lt;BR /&gt;&lt;BR /&gt;Just 10Xing the link rate didn't 10X everything else in the system.  If that 100BT test was consuming say 30% of a/the CPU, then you shouldn't expect to get more than 2X to 3X what you got over 100BT before the CPU becomes the bottleneck rather than the link.&lt;BR /&gt;&lt;BR /&gt;Now, specifics of the NIC _implementation_ can make things easier - many Gigabit Ethernet NICs offer ChecKsum Offload (CKO), interrupt coalescing/avoidance and Jumbo Frames, but those are implementation details, not features of the IEEE specs.&lt;BR /&gt;&lt;BR /&gt;It sounds like you have found that the Sun box (what kind?) is the bottleneck here.  I suspect that if you look with either netperf or mpstat you will see that one or more of the CPUs in that box becomes saturated during the test - and that ttcp will not accurately report that.  
There are some caveats in measuring CPU util under Solaris - some of my comments in the netperf manual (latest version at:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.netperf.org/svn/netperf2/trunk/doc" target="_blank"&gt;http://www.netperf.org/svn/netperf2/trunk/doc&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;cover that, as well as comments in the relevant netcpu_mumble.c files - replace "doc" with "src" in the URL above.&lt;BR /&gt;&lt;BR /&gt;One other bit - a single TCP connection cannot really make use of more than one or one and a fraction's worth of CPU time, so even if you happen to have say 4 CPUs in a box, a single TCP connection can still be limited by the performance of a single CPU.  Sun used to say that their systems required one MegaHertz per Megabit, which at times may have been a trifle optimistic - and you cannot simply take the sum of the megahurts of a Sun and apply the rule of thumb to a single connection.&lt;BR /&gt;&lt;BR /&gt;Finally - the IEEE Gigabit Ethernet spec for UTP (copper) specifies that the PHYs (I think it is the Physical layer) must support something I believe is called AutoMDIX - the upshot of this is that while one still _can_ use a cross-over cable to connect GbE back-to-back, one does not _need_ to use a cross-over cable - a straight-through cable can be used and the NICs will figure it out.  The same holds true for switch to switch connections.</description>
      <pubDate>Tue, 13 Feb 2007 12:33:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-performance-gigabit-ethernet/m-p/3942965#M542742</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-02-13T12:33:43Z</dc:date>
    </item>
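The "one MegaHertz per Megabit" rule of thumb rick quotes is easy to apply per connection. A sketch; the rule is approximate at best (optimistic, per the post), and it applies to the clock of one CPU, not the sum across CPUs:

```python
# Sun's old "one MegaHertz per Megabit" rule of thumb, applied to a
# single TCP connection (which cannot spread its work across CPUs).
MHZ_PER_MBIT = 1.0               # the rule of thumb; approximate at best

observed_mbit = 251.322          # 251322 Kbits/sec from the ttcp runs
line_rate_mbit = 1000.0          # gigabit line rate

print(f"~{observed_mbit * MHZ_PER_MBIT:.0f} MHz of one CPU to sustain the observed rate")
print(f"~{line_rate_mbit * MHZ_PER_MBIT:.0f} MHz of one CPU to fill the gigabit link")
```

If the per-CPU clock on the Sun box is well under ~1000 MHz, a ~250 Mbit/s result alongside one saturated CPU (visible in mpstat during the run) is consistent with the rule.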
  </channel>
</rss>

