HPE 9000 and HPE e3000 Servers

AD385A and AB287A 10GBe on an rp8420 running 11.23

 
IT Response
Esteemed Contributor

AD385A and AB287A 10GBe on an rp8420 running 11.23

Hi

Can I mix and install 2 x AD385A and 2 x AB287A on the same rp8420 running 11.23?

Any recommendations, best practices for setting up or configuring the cards (HW or OS) to be able to achieve higher speed than 2Gbps?

thanks in advance
2 REPLIES
Zygmunt Krawczyk
Honored Contributor

Re: AD385A and AB287A 10GBe on an rp8420 running 11.23

It looks like the answer is YES - see the HP-UX Ethernet card support matrix:

http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02066997/c02066997.pdf
rick jones
Honored Contributor

Re: AD385A and AB287A 10GBe on an rp8420 running 11.23

With what are you measuring throughput? 2 Gbps sounds like a single-stream throughput test.

The default socket buffer size, and thus TCP window, of 32768 bytes is insufficient for 10GbE. If you are running netperf tests, use the test-specific -s and -S options to increase the socket buffer sizes:

netperf -H <remote_host> -- -m 16K -s 128K -S 128K
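If you would rather raise the system-wide defaults than pass per-test options (<remote_host> above is just a placeholder for your netperf server), HP-UX exposes the TCP buffer defaults through ndd. A minimal sketch, with illustrative values:

# show the current defaults (32768 bytes on a stock 11.23 box)
ndd -get /dev/tcp tcp_xmit_hiwater_def
ndd -get /dev/tcp tcp_recv_hiwater_def

# raise them for the running system; add matching entries to
# /etc/rc.config.d/nddconf if the change should survive a reboot
ndd -set /dev/tcp tcp_xmit_hiwater_def 262144
ndd -set /dev/tcp tcp_recv_hiwater_def 262144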

However, I suspect you will still be limited in throughput - likely as not you will find that one of the cores on your rp8420 is saturated. By and large it takes just as many CPU cycles to send or receive a packet over 10GbE as it does over 1GbE, or 100BT or 10BT for that matter. The Ethernet specification in and of itself has done (virtually?) nothing over the years to make data transfer easier on the end systems. What has happened is that interface designers have added other features - like TSO/VMTU, Jumbo Frames, Checksum Offload and such.

So, while your test is running, you should look at the CPU utilization of each of the CPUs in your system.
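One way to do that, assuming sar on your release supports the per-processor -M option (GlancePlus or top will show you much the same thing), is:

# per-CPU utilization, sampled every 5 seconds for a minute;
# one CPU pegged near 100% while the rest sit idle is the classic
# single-stream 10GbE bottleneck
sar -Mu 5 12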

Other things you might enable include:

*) Jumbo Frames - that will drop the number of trips up and down the protocol stack required to transfer a given quantity of (bulk) data. N.B. - *all* stations in a broadcast domain must have the same MTU. (There is a lanadmin sketch after this list.)

*) VMTU - IIRC this is what HP-UX calls TSO, or TCP Segmentation Offload - sometimes called "Poor Man's Jumbo Frames" because it gives the appearance of Jumbo Frames to the stack (a bit of handwaving) but does not require upping the MTU. However, the receiver still receives smaller packets and may become the bottleneck.
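For the Jumbo Frames item above, a sketch assuming the driver for these cards accepts a runtime MTU change through lanadmin (the PPA of 1 is just an example; use lanscan to find yours, and remember the N.B. about every station in the broadcast domain):

# list the interfaces and note the PPA (card instance) numbers
lanscan

# set a 9000-byte MTU on the card at PPA 1 for the running system
lanadmin -M 9000 1

# verify the new MTU
lanadmin -m 1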

You might try multiple, concurrent streams through the interface(s) - those NICs should spread their interrupts around and bring more than one CPU into play (even without TOPS being enabled), though that requires multiple "flows" going through each NIC - and it has been long enough that I can no longer recall how the NICs define a flow.
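A sketch of what multiple, concurrent streams might look like with netperf (again, <remote_host> is a placeholder; each instance opens its own data connection on its own ephemeral port, which should look like a distinct flow to the NIC):

# run four 60-second TCP_STREAM tests in parallel, then wait for all
# of them to finish; sum the reported throughputs by hand
for i in 1 2 3 4
do
    netperf -H <remote_host> -l 60 -- -m 16K -s 128K -S 128K &
done
wait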

If the VMTU and Jumbo Frames options aren't present in 11.23, they should be in 11.31.
there is no rest for the wicked yet the virtuous have no pillows