02-12-2007 04:14 AM
Network Performance Gigabit ethernet
I am seeing what I understand to be degraded performance from two Gigabit Ethernet cards connected via a crossover cable. I am using ttcp to test the cards.
One gigabit card is in a Sun box, the other is in an RP7400.
Below is the ttcp output:
SUN SERVER
==========
root# ~/UXTools/ttcp.sun4 -r -s -l1500
ttcp-r: nbuf=1024, buflen=1500, port=2000
ttcp-r: socket
ttcp-r: accept
ttcp-r: 0.1user 1.8sys 0:04real 43% 0i+0d 0maxrss 0+0pf 3050+3124csw
ttcp-r: 150000000 bytes processed
ttcp-r: 2.03 CPU sec = 72159.8 KB/cpu sec, 577278 Kbits/cpu sec
ttcp-r: 4.66284 real sec = 31415.3 KB/real sec, 251322 Kbits/sec
RP7400
======
root# ./ttcp -t -s -l1500 -n100000 172.16.1.31
ttcp-t: nbuf=100000, buflen=1500, port=2000
ttcp-t: socket
ttcp-t: connect
ttcp-t: 0.1user 1.1sys 0:04real 27% 0i+51d 23maxrss 0+0pf 3555+305csw
ttcp-t: 150000000 bytes processed
ttcp-t: 1.28 CPU sec = 114441 KB/cpu sec, 915527 Kbits/cpu sec
ttcp-t: 4.66284 real sec = 31415.3 KB/real sec, 251322 Kbits/sec
As you can see, the throughput is around 250 Mbit/s (31 MB/s). I was expecting at least 800 or 900 Mbit/s, or at least 100 MB/s.
Doing the same test with Fast Ethernet interfaces yields speeds around 11 MB/s, which is close to the theoretical limit of 12.5 MB/s.
For Gigabit Ethernet the theoretical limit is 125 MB/s, or 1000 Mbit/s.
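Doing the arithmetic on the output above as a rough sanity check: 150,000,000 bytes / 4.66 s is roughly 31.4 MB/s, or about 251 Mbit/s. Even allowing for Ethernet, IP and TCP header overhead on 1500-byte frames (which leaves roughly 940 Mbit/s, about 117 MB/s, of usable TCP payload on a gigabit link), the test is only reaching about a quarter of what the wire can carry.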
Below is some more output to provide additional information:
RP7400
======
root# lanadmin -x 3
Speed = 1000 Full-Duplex.
Autonegotiation = On.
LAN INTERFACE STATUS DISPLAY
Mon, Feb 12,2007 11:06:29
PPA Number = 3
Description = lan3 HP PCI 1000Base-T Release B.11.11.24
Type (value) = ethernet-csmacd(6)
MTU Size = 1500
Speed = 1000000000
Station Address = 0x1321ea67ec
Administration Status (value) = up(1)
Operation Status (value) = up(1)
Last Change = 47462254
Inbound Octets = 2757941744
Inbound Unicast Packets = 182741651
Inbound Non-Unicast Packets = 34105
Inbound Discards = 0
Inbound Errors = 0
Inbound Unknown Protocols = 11
Outbound Octets = 3155951020
Outbound Unicast Packets = 194521462
Outbound Non-Unicast Packets = 169
Outbound Discards = 0
Outbound Errors = 0
Outbound Queue Length = 0
Specific = 655367
Index = 2
Alignment Errors = 0
FCS Errors = 0
Single Collision Frames = 0
Multiple Collision Frames = 0
Deferred Transmissions = 0
Late Collisions = 0
Excessive Collisions = 0
Internal MAC Transmit Errors = 0
Carrier Sense Errors = 0
Frames Too Long = 0
Internal MAC Receive Errors = 0
SUN MACHINE
===========
root# kstat -p ce:0::'/collisions|framing|crc|code_violations|tx_late_collisions/'
ce:0:ce0:code_violations 0
ce:0:ce0:collisions 0
ce:0:ce0:crc_err 0
ce:0:ce0:excessive_collisions 0
ce:0:ce0:late_collisions 0
root# ndd /dev/ce adv_autoneg_cap
1
Any help will be appreciated,
Manuel
02-12-2007 04:27 AM
Re: Network Performance Gigabit ethernet
HTH
Duncan
I am an HPE Employee

02-12-2007 04:28 AM
Re: Network Performance Gigabit ethernet
Once again we are amazed that real life doesn't measure up to marketing hype.
This is to be expected. You may get better results with a Cisco gigabit switch.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
02-12-2007 05:58 AM
Re: Network Performance Gigabit ethernet
- "-l 1500" seems exactly wront to me. A little to large to be small, and too small to be large. Would it not cause multiple packets per message. You are not using UDP (-u) so there will be some extra tcp overhead. Why not use the default 8192?
- To see the real potential server-server performance of Gigabit should you not be using JUMBO frames? That reduces teh number of packets dramatically and with that the CPU time.
- To some extend you are measuring CPU power, not network bandwith. The combined systems seem to be more than 50% cpu bound during that test. See Jumbo remark.
http://docs.hp.com/en/783/jumbo_final.pdf
http://www.cisco.com/warp/public/471/ttcp.html
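For example, something along these lines (just a sketch; the 64 KB write size and the buffer count are illustrative values, and the ttcp binary names and paths are the ones you already used, so adjust as needed):
On the Sun box (receiver, default 8192-byte reads):
root# ~/UXTools/ttcp.sun4 -r -s
On the RP7400 (transmitter, larger writes, roughly 800 MB total):
root# ./ttcp -t -s -l65536 -n12288 172.16.1.31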
Good luck!
Hein van den Heuvel
HvdH Performance Consulting
02-12-2007 07:02 AM
Re: Network Performance Gigabit ethernet
Ooops, that came out a little more crude than I intended.
We often take shortcuts in our measurements, to make them easier, or because we _think_ it will make them faster, and so on. But in doing so we may lose track of what we really should be measuring.
In this case... is that gigabit link just in place to connect those servers, or is that a simplification of a target configuration?
Will the real config have a switch/router?
What will be the dominant protocol on the wire? If it is NFS, then Jumbo Frames will make a tremendous impact (as per the reference above), and any test without them is a waste of time.
Or maybe the gigabit link will be used to connect two servers in an Oracle RAC setup with lots of little packets for lock communication. In that case latency is more critical than throughput.
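If you do go the Jumbo Frame route, the rough shape of the change is below (syntax from memory, so double-check the man pages and the jumbo paper linked above; the PPA and instance numbers are just the ones from your own output, and both ends plus any switch in between must carry the larger MTU):
On the RP7400 (HP-UX), set and then verify the MTU on PPA 3:
root# lanadmin -M 9000 3
root# lanadmin -m 3
On the Sun box, the ce driver, if I remember right, needs jumbo support (accept-jumbo) enabled in ce.conf before the interface MTU can be raised:
root# ifconfig ce0 mtu 9000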
Regards,
Hein.
02-12-2007 08:45 AM
Re: Network Performance Gigabit ethernet
Here's where I am at now. I did the ttcp test between the RP7400 and a Windows box, and the test reported much better numbers. They were better still when I increased the length of the buffers written to the NIC, following your suggestion.
So it now seems there is something I may need to tune or investigate further on the Sun machine side.
You're correct, this is a minimalistic test intended to isolate basic functionality. The final use of these gigabit cards will be NFS through a switch. For that purpose I may look into Jumbo Frames as you suggest.
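For the NFS side I will probably also look at larger transfer sizes when I get there, something along these lines (the 32 KB values and the server:/export path are just placeholders to illustrate, not anything I have tested yet):
root# mount -F nfs -o rsize=32768,wsize=32768 server:/export /mnt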
Thanks and regards,
Manuel
02-13-2007 04:33 AM
Re: Network Performance Gigabit ethernet
In broad handwaving terms, as a basic link technology, Gigabit Ethernet, like 100Base-T before it, does NOTHING to make data transfer any easier on the host than what came before. It takes just as many CPU cycles to send a packet over Gigabit as it did over 100BT, as it did over 10Base-T, and as it will over 10 Gigabit.
Just 10Xing the link rate didn't 10X everything else in the system. If that 100BT test was consuming, say, 30% of a/the CPU, then you shouldn't expect to get more than 2X to 3X what you got over 100BT before the CPU becomes the bottleneck rather than the link.
Now, specifics of the NIC _implementation_ can make things easier - many Gigabit Ethernet NICs offer ChecKsum Offload (CKO), interrupt coalescing/avoidance and Jumbo Frames - but those are implementation details, not features of the IEEE specs.
It sounds like you have found that the Sun box (what kind?) is the bottleneck here. I suspect that if you look with either netperf or mpstat you will see that one or more of the CPUs in that box becomes saturated during the test - and that ttcp will not accurately report that. There are some caveats in measuring CPU util under Solaris - some of my comments in the netperf manual (latest version at
http://www.netperf.org/svn/netperf2/trunk/doc
) cover that, as well as comments in the relevant netcpu_mumble.c files - replace "doc" with "src" in the URL above.
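For example, something like the following shows throughput and the CPU utilization of both ends in one run (the message and socket buffer sizes are only illustrative, and it assumes netserver is already running on the Sun box at that address):
root# netperf -H 172.16.1.31 -t TCP_STREAM -c -C -l 60 -- -m 32768 -s 262144 -S 262144
Watching mpstat 1 on the Sun box during the run will show whether a single CPU is pegged (little or no idle time) even while the box as a whole looks only partly busy.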
One other bit - a single TCP connection cannot really make use of more than one, or one and a fraction's, worth of CPU, so even if you happen to have, say, 4 CPUs in a box, a single TCP connection can still be limited by the performance of a single CPU. Sun used to say that their systems required one MegaHertz per Megabit, which at times may have been a trifle optimistic - and you cannot simply take the sum of the megahertz of a Sun and apply that rule of thumb to a single connection.
Finally - the IEEE Gigabit Ethernet spec for UTP (copper) specifies that the PHYs (I think it is the Physical layer) must support something I believe is called Auto-MDIX - the upshot of this is that while one still _can_ use a cross-over cable to connect GbE back-to-back, one does not _need_ to use a cross-over cable - a straight-through cable can be used and the NICs will figure it out. The same holds true for switch-to-switch connections.