BladeSystem - General

Alex Xela
Occasional Contributor

MSI-X on NIC on Blades

Hello all!

I can't find any information about MSI-X support in the NICs of the HP ProLiant BL460c G7 server. I'm also interested in whether those NICs support multiple RX/TX vectors.

As an example, the ProLiant DL360 G7 can be fitted with the NC365T NIC, which supports MSI-X; I have not found MSI-X mentioned for any other NICs. The Intel spec for the 82580 chipset used in the NC365T also says there are "16 traffic causes: 8 Tx, 8 Rx".
This is very useful because I can set CPU affinity for those TX/RX vectors to different CPUs to increase throughput. If I don't, too many interrupts end up handled by a single CPU, which degrades the network performance of my system.
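For example, a rough sketch of the kind of pinning I mean (the IRQ numbers and masks below are only placeholders; the real numbers come from /proc/interrupts):

# Pin each RX vector to its own CPU via the IRQ affinity bitmask
# (IRQ numbers here are illustrative placeholders):
echo 1 > /proc/irq/61/smp_affinity   # eth0-rx-0 -> CPU0 (mask 0x1)
echo 2 > /proc/irq/62/smp_affinity   # eth0-rx-1 -> CPU1 (mask 0x2)
echo 4 > /proc/irq/63/smp_affinity   # eth0-rx-2 -> CPU2 (mask 0x4)
echo 8 > /proc/irq/64/smp_affinity   # eth0-rx-3 -> CPU3 (mask 0x8)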
Thanks!
7 REPLIES
rick jones
Honored Contributor
Solution

Re: MSI-X on NIC on Blades

(At least) Three things need to come together to have MSI-X support active:

1) The underlying system hardware must support it

2) The underlying OS software must support it

3) The NIC must support it.

I am quite confident that the answer to 1) in the context of the BL460c G7 is "Yes."

The answer to 2) depends on the OS and its revision, which you have not specified.

The answer to 3) depends on the NIC one wishes to use, and while we might assume you are referring to the LOM on the BL460c G7, it would be good to state that explicitly.


Further, even when all three requirements are met, traffic may still not spread: depending on the nature of the traffic flow(s) through the NIC, the NIC may or may not spread them across its queues. So it would also be good to describe the nature of your traffic more completely: how many distinct IP addresses and port numbers are involved, and such.
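One quick way to see whether flows are spreading - a sketch, assuming the driver exposes per-queue statistics (the exact counter names vary by driver):

# Per-queue packet counters, if the driver exports them:
ethtool -S eth0 | grep -i queue
# A single bulk flow will usually land on one RX queue; it takes many
# distinct IP/port tuples for the NIC's hash to spread load across queues.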
there is no rest for the wicked yet the virtuous have no pillows
Alex Xela
Occasional Contributor

Re: MSI-X on NIC on Blades

Thank you very much, Rick, for your answer. That clears up one part of my question.

The OS is RHEL 5 or 6. I know they support MSI-X pretty well; I have tested that with Intel NICs.

And yes, I'm interested in the integrated NIC on the BL460c G7.

The second part of my question was about RX/TX vectors (or "causes" in the Intel specs). I can't find any information about them in the HP NIC specs.
I want to know whether the NIC integrated in the BL460c G7 has any RX/TX traffic causes. If so, what type of causes and how many of them are supported?
rick jones
Honored Contributor

Re: MSI-X on NIC on Blades

Given the generation of the blade, I would be truly astonished if there were not MSI-X support in the LOM. However, being one of the cobbler's children, I've not had a chance to play with a G7 blade. Still, since the LOMs in the G6s I have played with do have MSI-X support, I'm quite confident the LOM in the G7 does too.

MSI-X is rather rapidly becoming part of the "background noise" and not the competitive differentiator it once was, which may explain a lack of verbiage touting its presence.

If you have ifconfig'd up one of the LOM ports, I would then grep for it in /proc/interrupts after running some traffic through it. There should be one entry per active MSI-X vector/tx/rx queue.
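Something along these lines (assuming eth0 is the port in question):

# One line should show up per active MSI-X vector / queue:
grep eth0 /proc/interrupts
# or watch which counters are actually moving under load:
watch -d 'grep eth0 /proc/interrupts'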
there is no rest for the wicked yet the virtuous have no pillows
Alex Xela
Occasional Contributor

Re: MSI-X on NIC on Blades

All good!


root@bladetest:~# grep eth /proc/interrupts
 60:    26     0     0   389   PCI-MSI-edge   eth0
 61:   230     0  1262  2746   PCI-MSI-edge   eth0-rx-0
 62:    21    14   444     0   PCI-MSI-edge   eth0-rx-1
 63:    40   366   167     0   PCI-MSI-edge   eth0-rx-2
 64:   992     0   717     0   PCI-MSI-edge   eth0-rx-3
 65:     5     0     0     0   PCI-MSI-edge   eth0-tx-0
 66:    14     0     0     0   PCI-MSI-edge   eth0-tx-1
 67:     3     0     0     0   PCI-MSI-edge   eth0-tx-2
 68:   433     0     0   492   PCI-MSI-edge   eth0-tx-3

There are the queues! 4 RX and 4 TX on a BL460c G6 with the NC532i NIC.
I guess it's all just as good with the NC553i in the G7.

I tested IRQ affinity too, and it also works well: the vectors move from one CPU to another when I change smp_affinity for the IRQ of an RX/TX vector.
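For example, the kind of check I did (IRQ 62 is taken from the listing above; the mask is just an example):

# Move eth0-rx-1 (IRQ 62 above) to CPU2 (mask 0x4) and confirm that
# its counter now grows in CPU2's column:
echo 4 > /proc/irq/62/smp_affinity
grep eth0-rx-1 /proc/interrupts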

That's all I need.
Thank you!
rick jones
Honored Contributor

Re: MSI-X on NIC on Blades

RHEL 6 will likely make better use of multiple TX queues than RHEL 5. Also, keep in mind that as long as irqbalance lives on the system, it *will* eventually undo your changes to smp_affinity. At least that has been my experience when running netperf - I have thus gotten into the habit of shooting irqbalance in the head before I start. That said, my goals with netperf may differ slightly from those of a system in production.
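Roughly what I do before a netperf run (the exact service name may differ between distributions):

# Stop irqbalance so it cannot rewrite the smp_affinity masks mid-test:
/etc/init.d/irqbalance stop 2>/dev/null || killall irqbalance
# ...then set the per-IRQ smp_affinity masks and start the test run.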
there is no rest for the wicked yet the virtuous have no pillows
Alex Xela
Occasional Contributor

Re: MSI-X on NIC on Blades

Actually, I tested it on Ubuntu 10.04.1 LTS Server.
When I tested IPsec performance on the same hardware across different Linux distributions, Debian and Ubuntu were the fastest; RHEL 5 had ~30% lower IPsec performance.

I will check RHEL 6 performance too; maybe something has changed.

There is also some regression in Ubuntu 10.04.
I described it on ubuntuforums: http://ubuntuforums.org/showthread.php?p=10025971
Ubuntu 9.10 works well.

Also, turning off irqbalance gives almost 40-50% more IPsec and network throughput, so it's a "must do" option.
It is also better to put the affinity on the last cores, leaving the first one or two for everything else, by changing the default /proc/irq/default_smp_affinity to a value of 1 or 3.
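A sketch of that setup on a 4-CPU box (the masks are just examples):

# Keep newly registered IRQs on CPU0-CPU1 by default (mask 0x3)...
echo 3 > /proc/irq/default_smp_affinity
# ...and pin the NIC vectors to the last cores by hand (IRQ numbers
# are placeholders; take the real ones from /proc/interrupts):
echo 4 > /proc/irq/61/smp_affinity   # eth0-rx-0 -> CPU2 (mask 0x4)
echo 8 > /proc/irq/62/smp_affinity   # eth0-rx-1 -> CPU3 (mask 0x8)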
Harryy04
Occasional Visitor

Re: MSI-X on NIC on Blades

Alex, I noticed the interrupts are not evenly distributed across all the queues. I am seeing a similar problem on my setup. Any idea what could cause that?
Thanks
Harry