1000BaseT realistic throughput?
08-20-2002 11:19 AM
How do I get more out of my Gb NICs?
I've got two 1Gb ethernet cards (HP A4929A PCI 1000) on two HP-UX 11.0 servers. These NICs/servers are connected directly with a crossover cable. There is no additional networking hardware (hubs, switches, routers) between the boxes.
I'm wondering what a reasonable expected throughput would be. What I'm seeing: 158969-184071 Kbits/sec (from ttcp; the output is attached).
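For reference, a ttcp pair for a test like this looks roughly like the following (exact flags vary by ttcp build; the buffer count, buffer size, and hostname are illustrative).
On the receiver:
# ttcp -r -s
On the transmitter:
# ttcp -t -s -n 8192 -l 65536 serverB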
Any thoughts or suggestions appreciated.
08-20-2002 11:48 AM
Re: 1000BaseT realistic throughput?
There are performance briefs available, including PCI Gigabit Ethernet Performance:
http://docs.hp.com/hpux/onlinedocs/netcom/gbe_perf_final.pdf
Perhaps that will help.
Berlene
08-20-2002 01:23 PM
Re: 1000BaseT realistic throughput?
BTW: I turned on jumbo frames (i.e., MTU=9000), and the throughput reported by ttcp actually dropped!
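For anyone reproducing this: on HP-UX the interface MTU can be set with lanadmin; the PPA number below is illustrative.
# lanadmin -M 9000 4
# lanadmin -m 4
The first command sets a 9000-byte MTU on PPA 4, the second displays the current MTU.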
I found netperf, too. The throughput according to netperf is in the neighborhood of 45 MBytes/s. Attached.
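A plain netperf TCP_STREAM test looks roughly like this (netserver already running on the remote side; the option values and hostname are illustrative):
# netperf -H serverB -t TCP_STREAM -l 60 -- -s 65536 -S 65536 -m 32768
Here -s/-S set the local/remote socket buffer sizes and -m the send size.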
Is this normal?
08-20-2002 01:25 PM
Re: 1000BaseT realistic throughput?
08-20-2002 05:27 PM
Re: 1000BaseT realistic throughput?
If the two servers are in the 500 MHz range or more, then patches should clear things up. If one of the servers is a D-class, then the D-class cannot run fast enough to keep up with an A-, L-, or N-class system.
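If it helps: installed networking patches carry the PHNE prefix, so something like the following will list them (the grep filter is just a convenience):
# swlist -l product | grep PHNE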
Bill Hassell, sysadmin
08-20-2002 05:35 PM
Solution
The 64K window/socket/send buffers in netperf are good, but I think that peak throughput for a single stream comes with socket buffers larger still - say 128 KB.
Also check the CPU util on each of the CPUs. One of them may be pegged if you have lower frequency CPUs.
With JF (jumbo frames) enabled and a large enough socket buffer, you should be able to get link-rate on a netperf TCP_STREAM test on anything with 440s or higher on the CPU frequency (IIRC).
With a 1500 byte MTU the picture becomes a bit more complicated. You might want to try multiple concurrent streams of traffic.
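A sketch of both suggestions (hostname, duration, and buffer sizes are illustrative): first a single stream with 128 KB socket buffers, then four concurrent streams for the 1500-byte MTU case.
# netperf -H serverB -t TCP_STREAM -l 60 -- -s 131072 -S 131072
And for the concurrent streams:
for i in 1 2 3 4
do
netperf -H serverB -t TCP_STREAM -l 60 -- -s 131072 -S 131072 &
done
wait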
08-20-2002 10:54 PM
Re: 1000BaseT realistic throughput?
Check your ndd ip_send_source_quench parameter:
# ndd -get /dev/ip ip_send_source_quench
If it returns 1, change it to 0:
# ndd -set /dev/ip ip_send_source_quench 0
This should speed up the transfer. Don't forget to enter the new value into the ndd configuration file (/etc/rc.config.d/nddconf) so it survives a reboot.
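A minimal nddconf entry for this would look like the following (the [0] index is illustrative; use the next free index in your file):
TRANSPORT_NAME[0]=ip
NDD_NAME[0]=ip_send_source_quench
NDD_VALUE[0]=0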
Peter
08-21-2002 07:02 AM
Re: 1000BaseT realistic throughput?
I've got to mull over some of the facts here, and I'll be assigning points soon.
Some other background:
Bill,
Both machines are 16-processor V-Class machines with 200 MHz processors. Would the comments about CPU and architecture still hold true for these machines? One box (serverA) is quite busy; aggregate load - per uptime or top - is commonly above 1. The other box (serverB) is not so busy.
Rick,
Thanks for pointing out the Nagle algorithm. I'm not a network guru, so I suspect a little more reading in this area is in order.
I'll try differing packet sizes. Can you point me to a good source of information about *how* the transmission packet size is determined? Are there tuning parameters at the OS level?
Peter,
The current parameter is set to 1 for ip_send_source_quench. I'd like to understand exactly what this does before changing the value, though.
Thanks again, everyone!
08-21-2002 09:42 AM
Re: 1000BaseT realistic throughput?
45-50 MByte/s may indeed be the most you will see. If you want to get more out of the NICs, you may have to upgrade the V to something more contemporary. I'm sure we could find a sales rep more than happy to sell you Superdomes or rp8400s to replace them :)
Second, the PCI buses in a 200 MHz V-Class are only 32-bit, 33 MHz (aka PCI-1X), and my understanding is that the likelihood of achieving link-rate on such a bus is quite small, especially if there is anything else going on on the bus at the time (V-Class systems have shared-bus PCI slots).
Even if you upgraded the V to faster CPUs, you would still have at best a shared PCI-2X bus on the higher-end V-Class systems, whereas other systems have single-slot buses.
MSS selection by TCP is a function of the link-local MTU, whether the destination is local or remote, what the remote states for its MSS, and to an extent the socket buffer size. If the destination is on the local net and has requested an MSS >= what the link-local MTU supports, we will use the link-local MTU minus 40 bytes as the MSS (i.e., TCP over Ethernet uses a 1460-byte MSS). If the destination is remote and we are using Path MTU discovery, we still use 1460. If Path MTU discovery is disabled and the destination is remote, we use 536 bytes.
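For reference, whether Path MTU discovery is in use on HP-UX can be checked with ndd (ip_pmtu_strategy; 0 disables it):
# ndd -get /dev/ip ip_pmtu_strategy
And the MSS arithmetic is simply MTU minus 40 bytes of IP+TCP headers:
1500-byte MTU -> 1500 - 40 = 1460-byte MSS
9000-byte MTU -> 9000 - 40 = 8960-byte MSS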