ProLiant Servers (ML,DL,SL)


Patrick Neuner
Regular Advisor

10GBe weird Performance Problems ... (DL380G9)

Hey there,

We just installed a new DL380G9 (H5) in our rack and noticed slow 10GbE network performance while migrating VMs to the new server. Ping times to the new server also went up, slowing down (network-wise) all the machines already migrated.

Firmware and drivers are all up to date (SPP 06.2018). On all servers there is a NIC team: 10GbE NIC = active, 1GbE NIC = standby, in case of 10GbE network trouble.

Now I did some performance tests with NTttcp - results:
    H3 = Windows 2012R2
    H4 = Windows 2012R2
    H5 = Windows 2016
    Z240 = Windows 10

H3 (DL380G8) --> H5 (DL380G9) --> 229MB/s
H4 (DL380G9) --> H5 (DL380G9) --> 287MB/s
H3 (DL380G8) --> H4 (DL380G9) --> 399MB/s

H3 (DL380G8) --> Z240 Workstation --> 730MB/s
H4 (DL380G9) --> Z240 Workstation --> 620MB/s
H5 (DL380G9) --> Z240 Workstation --> 773MB/s

Z240 Workstation --> H5 (DL380G9) --> 299MB/s
Z240 Workstation --> H4 (DL380G9) --> 422MB/s
Z240 Workstation --> H3 (DL380G8) --> 226MB/s
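For anyone who wants to reproduce these numbers, the runs above were plain NTttcp tests between pairs of hosts; a minimal invocation looks roughly like this (the IP address, thread count, and duration below are placeholders, not the exact values used):

```powershell
# Receiver side (e.g. H5): listen on the server's 10GbE address.
# -m 8,*,<ip> = 8 threads on any CPU core, -t 30 = 30-second run.
ntttcp.exe -r -m 8,*,10.0.0.5 -t 30

# Sender side (e.g. H3): push traffic to the same address.
ntttcp.exe -s -m 8,*,10.0.0.5 -t 30
```

NTttcp prints per-thread and total throughput at the end of the run, which is where figures like the MB/s numbers above come from.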

Looks pretty weird to me - why would throughput to a workstation be more than double the throughput to the other servers? Servers and workstation are on the same 10GbE switch.
It seems like only receive is slow on all the servers?

Not quite sure how to debug this issue further - hoping someone here can point me in the right direction?

thx, bye from rainy Austria
Andreas Schnederle-Wagner

Patrick Neuner
Regular Advisor

Re: 10GBe weird Performance Problems ... (DL380G9)

It seems like Windows NIC Teaming is the root cause of this massive slowdown ... ?!?

I just disabled NIC teaming on H3 - and got this result:

Z240 Workstation --> H3 (DL380G8) with NIC teaming    -->   226 MB/s
Z240 Workstation --> H3 (DL380G8) without NIC teaming --> 1,100 MB/s

H3 (DL380G8) with NIC teaming    --> Z240 Workstation --> 730 MB/s
H3 (DL380G8) without NIC teaming --> Z240 Workstation --> 916 MB/s
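In case anyone wants to repeat this comparison on their own hosts: with the in-box Windows (LBFO) teaming we're using, the team can be inspected and temporarily removed like this (the team name "Team1" is a placeholder; note that removing the team drops its network connectivity until the NICs reconfigure):

```powershell
# Show existing teams and the active/standby state of each member NIC.
Get-NetLbfoTeam
Get-NetLbfoTeamMember

# Remove the team for testing - "Team1" is a placeholder name;
# the host falls back to the individual physical NICs afterwards.
Remove-NetLbfoTeam -Name "Team1"
```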

Has anyone else experienced such problems? Any idea how to solve it?
I don't want to lose the backup connection in case of a 10GbE switch failure ... ?!


Thomas Martin
Trusted Contributor

Re: 10GBe weird Performance Problems ... (DL380G9)

You can try NIC teaming where you have one port active and the other standby. Then you have a failover scenario and can use two independent ports on the switch. If you use NIC teaming with both ports active, you need some configuration on the switch side: for Cisco you need an EtherChannel, and on the server side LACP. Be aware: Cisco LACP is not Windows LACP.
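Assuming the in-box Windows (LBFO) teaming this thread is about, an active/standby team like that can be sketched as follows (the team and adapter names are placeholders):

```powershell
# Switch-independent team: no switch-side config (EtherChannel/LACP) needed,
# so the two members may even sit on two different switches.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC-10GbE","NIC-1GbE" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Demote the 1GbE port to standby so it only carries traffic on failover.
Set-NetLbfoTeamMember -Name "NIC-1GbE" -AdministrativeMode Standby
```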


Patrick Neuner
Regular Advisor

Re: 10GBe weird Performance Problems ... (DL380G9)

It's already active/passive and spread over two switches. So it doesn't make much sense that it works at full speed without the team - and is slow in the team, where only one NIC (the same one as standalone) is active ... :-/

Honored Contributor

Re: 10GBe weird Performance Problems ... (DL380G9)

If the NIC device drivers permit (and on server-class hardware and software like this they generally do), why not simply go with LACP (IEEE 802.3ad) on both ends (server and switch)? Maybe you can't deploy a LACP vNIC on the VMware vSphere side, since to deploy LACP you must have a vDS (Distributed Switch) and not just a vSS (Standard Switch). Is this the case?
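On the Windows side, an LACP team like that would look roughly like this (team and adapter names are placeholders; the switch ports must be configured as a matching LACP channel group, which is why both NICs have to land on the same switch or a stacked/MLAG pair):

```powershell
# Both members active; requires an LACP port-channel on the switch side.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```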

I'm not an HPE Employee
Patrick Neuner
Regular Advisor

Re: 10GBe weird Performance Problems ... (DL380G9)

LACP can't be used because the two NICs are attached to two different switches, as we need switch redundancy ...