

I am trying to find if anybody has seen or heard of a 10gb card that will work on the RP7410.
Johnson Punniyalingam
Honored Contributor

Problems are common to all, but attitude makes the difference
Acclaimed Contributor


Have a look:

IMHO there is none, but consider bundling several 1000 Mbit links with APA.

Hope this helps!

There are only 10 types of people in the world -
those who understand binary, and those who don't.

No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Bill Hassell
Honored Contributor


The 10Gbit card mentioned above won't work in the rp7410 (or many older systems). Even on servers where it does work, you need new cables and new switches, with a number of new limitations on distance and compatibility. Here's a link:

If you just need a faster link, APA (Auto Port Aggregation) is much cheaper than a 10Gbit setup. You can easily get 4Gbit wire speeds with 4 GigE connections. APA increases throughput only when there are multiple streams of data (such as 10-20 simultaneous ftp sessions). A single file transfer will only run on a 1 Gbit path. You may also run into HP-UX overhead limitations between the APA driver, the transfer software and the number of LAN ports in the aggregate. And naturally, the other end of the transfer needs to keep up with the data flow.

Bill Hassell, sysadmin
Honored Contributor


According to the rp7410 User Guide, the fastest slots on a rp7410 are the Twin Turbo PCI slots (66 MHz * 64 bits).

64 bits is 8 bytes, so the theoretical maximum speed of a slot works out to 66M * 8 = 528 Mbytes/s, or about 4.2 Gbps. So the PCI bus of a rp7410 would be a significant bottleneck for any 10 Gbps NIC.

As the plain PCI bus clearly has insufficient bandwidth for a 10 Gbps NIC, it doesn't make much sense to design a 10 Gbps NIC for plain PCI.

The faster versions of PCI-X (PCI-X 266 and 533) have the bandwidth to handle sustained traffic at 10 Gbps, so they would be sensible bus choices for a 10 Gbps NIC.

PCIe slots of x8 width or greater would also provide plenty of bus bandwidth for such a NIC; with PCIe 2.0, even an x4 slot could be sufficient.
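The bus arithmetic above can be sketched quickly. The figures below are theoretical peak rates computed from clock and width; real-world throughput is lower due to arbitration and protocol overhead:

```python
# Theoretical peak bandwidth of a parallel PCI-family bus:
# clock (MHz) * width (bits) / 8 = MB/s.
def bus_peak_mbytes(clock_mhz, width_bits):
    return clock_mhz * width_bits / 8

buses = {
    "PCI 66 MHz x 64-bit (rp7410 Twin Turbo)": (66, 64),
    "PCI-X 133":                               (133, 64),
    "PCI-X 266":                               (266, 64),
    "PCI-X 533":                               (533, 64),
}

for name, (clock, width) in buses.items():
    mb_s = bus_peak_mbytes(clock, width)
    gbit_s = mb_s * 8 / 1000
    print(f"{name:42s} {mb_s:6.0f} MB/s  ~{gbit_s:5.1f} Gbit/s")
```

This reproduces the 528 MB/s (~4.2 Gbit/s) figure for the rp7410's fastest slots, and shows why only PCI-X 266 and above clear the 10 Gbit/s bar.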

rick jones
Honored Contributor


As already mentioned, a "plain" PCI slot makes absolutely no sense for a 10 Gigabit Ethernet NIC.

There have been three 10 Gbit/s Ethernet NICs for HP Integrity servers (not sure whether they were ever supported on the later HP 9000 models):

AB287A - PCI-X 1.0 133 MHz - bus/slot limited to about 7 Gbit/s - based on Neterion XFrame I

AD385A - PCI-X 2.0 266 MHz - should not be bus/slot limited at least in one direction - based on Neterion XFrame II

AD386A - PCIe 1.1 x8 - roughly speaking the same as a PCI-X 2.0 266 MHz slot - based on Chelsio T3C chip

The AD385A and AD386A are still on the HP CPL. I should note that while they are based on Neterion/Chelsio chips, any old "off the street" Neterion/Chelsio card will *not* be claimed by the HP-UX drivers - the HP-UX drivers look for not just the PCI vendor and product IDs (which will match Neterion/Chelsio) but also the PCI subvendor and subproduct IDs - which will be HP (103c) and then NIC-specific.
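The four-way ID match described above can be illustrated with a sketch. HP's PCI subvendor ID 0x103c is real; every other ID value below is a placeholder, not taken from an actual driver table:

```python
# Sketch of how a driver might decide whether to claim a PCI device.
# HP's subvendor ID 0x103c is real; the chip and subdevice ID values
# here are placeholders for illustration only.
HP_SUBVENDOR = 0x103C

def driver_claims(dev, supported):
    # Match on all four IDs: vendor/device identify the chip,
    # subvendor/subdevice identify the specific board build.
    key = (dev["vendor"], dev["device"], dev["subvendor"], dev["subdevice"])
    return key in supported

# Hypothetical driver table: chip IDs plus an HP subsystem ID.
supported = {
    (0x1234, 0x5678, HP_SUBVENDOR, 0x0001),   # placeholder IDs
}

hp_card     = {"vendor": 0x1234, "device": 0x5678,
               "subvendor": HP_SUBVENDOR, "subdevice": 0x0001}
retail_card = {"vendor": 0x1234, "device": 0x5678,   # same chip...
               "subvendor": 0x1234, "subdevice": 0x0001}  # ...non-HP board

print(driver_claims(hp_card, supported))      # True
print(driver_claims(retail_card, supported))  # False
```

The retail card carries the same chip (same vendor/device IDs) but a different subsystem ID, so the driver ignores it, which is exactly why an "off the street" Neterion/Chelsio card won't be claimed by HP-UX.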

APA is certainly one consideration if the HP 9000 does not have 10 Gigabit Ethernet support - keep in mind though it is *aggregate* bandwidth which is increased - the speed of a single "flow" (the definition of which depends on the packet scheduler one selects, and switch behaviour) will not exceed that of a single link. So, an aggregate of N 1 Gig links will not give you a single TCP connection that goes at N Gbit/s.
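The per-flow limitation can be sketched: an aggregate typically hashes each flow onto one member link, so a single connection never spans links. The hash below is a simplified stand-in for whatever inputs and algorithm APA's packet scheduler (or the switch) actually uses:

```python
# Simplified illustration of why link aggregation caps a single flow
# at one link's speed: every packet of a flow hashes to the same link.
NUM_LINKS = 4  # e.g. four 1 Gbit/s links aggregated

def link_for_flow(src_ip, dst_ip, src_port, dst_port):
    # Stand-in hash; real schedulers use their own inputs/algorithms.
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_LINKS

# One TCP connection: every packet picks the same link -> max 1 Gbit/s.
flow = ("10.0.0.1", "10.0.0.2", 40000, 21)
links_used = {link_for_flow(*flow) for _ in range(1000)}
print(len(links_used))  # 1 -- a single flow always rides one link

# Many parallel flows (e.g. 20 ftp sessions) spread across the links,
# so *aggregate* throughput can approach NUM_LINKS x 1 Gbit/s.
many = {link_for_flow("10.0.0.1", "10.0.0.2", 40000 + p, 21)
        for p in range(20)}
print(len(many) > 1)  # True with overwhelming probability
```

This is also why Bill's 10-20 simultaneous ftp sessions benefit from APA while a single file transfer does not.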

Even if there is a 10 Gigabit Ethernet card that will work in the RP7410, the CPUs in that system really aren't up to the task of getting anything worthwhile out of it. It would be better (IMO) to upgrade to a current-generation Integrity system and perhaps rely on Aries for any legacy PA-RISC binaries for which you do not have source.

While I'm up on the soap box, I should point out that in broad handwaving terms, 10 Gigabit Ethernet (as specified by the IEEE) does *nothing* to make sending/receiving data any easier on the host than 1 GbE (or 100BT or 10BT) - it takes just as many CPU cycles to transfer a KB of data through a 10 GbE interface as through a 1 GbE interface. That means if you have a system running at, oh, 50% CPU utilization, and you "upgrade" from 1 GbE to 10 GbE, you should not expect to see more than 2 Gbit/s of actual network throughput.
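That back-of-the-envelope reasoning, sketched out: if the CPU cost per KB transferred is constant across link speeds, throughput after an upgrade is capped by the remaining CPU headroom, not by the new link rate:

```python
# Back-of-envelope: if CPU cost per Gbit/s is the same on 1 GbE and
# 10 GbE, the most a faster NIC can deliver is whatever rate saturates
# the CPU.
def cpu_limited_throughput(current_gbps, cpu_util):
    # cpu_util: fraction of CPU consumed driving current_gbps today.
    cost_per_gbps = cpu_util / current_gbps  # CPU fraction per Gbit/s
    return 1.0 / cost_per_gbps               # Gbit/s at 100% CPU

# 1 Gbit/s at 50% CPU -> a 10 GbE "upgrade" tops out around 2 Gbit/s.
print(cpu_limited_throughput(1.0, 0.50))  # 2.0
```

Which is exactly the 50%-utilized system above: the link gets ten times faster, the host does not.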

Now, having waved my hands thusly, I will point out there are NIC implementation specific features that may allow an easier time - for example, 1GbE NICs introduced ChecKsum Offload (CKO) and TCP Segmentation Offload (TSO - what HP-UX calls VMTU). 100BT NICs didn't offer those features.

10GbE NICs offer the features of the 1GbE's, and add (speaking broadly, HP-UX 10 G NICs may or may not have them enabled) multiple interrupt queue support, and large receive offload (LRO).

*Still* pontificating :) If, though, your workload is shoving around mostly sub-MTU packets (e.g. < 1500 bytes), then TSO and LRO may not mean much. And if your traffic is really small, even CKO may not mean much. At that point, the performance one gets through a NIC will depend rather more on its programming model, the efficiency of the driver, and perhaps multiqueue support than on the actual bitrate supported by the NIC.
there is no rest for the wicked yet the virtuous have no pillows