HPE Aruba Networking & ProVision-based

MonsterMaxx
Occasional Advisor

New Procurve 2920, configure for GBe teaming and 10GBe teaming

Just received my new ProCurve 2920-48G.  Plugged in the two J9731A modules and 4x J9283B SFP+ cables.

 

2x cables go to a Mellanox NIC in my HP DL380 G9 Windows Server 2012 R2 file server, and 2x cables go to the same card in a Win7 CAD workstation (home-built, not HP).

 

Another daily-driver workstation (home-built, not HP) has an Intel Pro/1000 PT quad-port.

 

Teaming is enabled on all three: the server via Windows Server 2012 R2's built-in teaming, the CAD station through Mellanox's teaming, and the daily driver through Intel's.

 

I am looking to maximize performance with these adapters and the switch.

Yes, I know, it's overkill, but that's me. :)

 

I'm getting some pretty fast read rates server -> CAD station, but the writes aren't as great (about 1/3 the reads).  Disabling the 4x built-in GbE NICs on the server helped quite a bit, but I'm still seeing pretty low utilization on the NIC, and a simple copy/read on the server itself is 2-3x the speed.

 

Disabling the GbE NICs is not a big deal, as I could take 2x of them out of the team and dedicate them to Hyper-V (though that's yet another question).

 

So my first question is: how do I configure the switch for performance on these ports?  I've tried poking buttons and can't seem to find a combo that works.  Googling has led me nowhere.

 

 

I'm pretty much a button poker.  Poke the button, find out what it does, learn from that and poke the next button.

 

Any assistance would be most appreciated.  Thanks in advance.

Vince-Whirlwind
Honored Contributor

Re: New Procurve 2920, configure for GBe teaming and 10GBe teaming

 

You don't mention what rate the interfaces are passing traffic at.

 

MonsterMaxx
Occasional Advisor

Re: New Procurve 2920, configure for GBe teaming and 10GBe teaming

Well, when I create RAM disks on both machines I'm getting 980 MB/s on reads and around 300 MB/s on writes.  The discrepancy occurs in both directions, with either machine reading from and writing to the other, so I'm pretty sure it's network related.

 

What I think is going on is that I don't understand how LACP should be configured on the switch; once I learn that, I'll be good to go.
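For reference, on ProVision-based switches like the 2920, ports going to one teamed host are usually bound into an LACP trunk group on the switch side so they match the host's team. A minimal sketch, assuming the 10G module ports show up as A1-A2 and B1-B2 (placeholder names; substitute whatever `show interfaces brief` reports for your modules):

```
; LACP trunk for the two ports cabled to the server
trunk A1-A2 trk1 lacp

; LACP trunk for the two ports cabled to the CAD workstation
trunk B1-B2 trk2 lacp

; Verify trunk membership and LACP status
show trunks
show lacp
```

The host-side team would also need to be in LACP (802.3ad / "dynamic") mode, rather than a switch-independent mode, for the trunk to come up.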

 

For example, with another switch (a ProCurve 2910; long story, but the 2920 is the third in a line of hardware in this upgrade) I had problems getting my 'old' server to team and connect to the internet.  I finally gave up on load balancing and just set it to fault tolerance.  Now, with the 2920 and a reboot of the new server, the same problem is showing up.

 

I deleted the team, recreated it, tinkered around, and was finally able to get it back up.  Performance on a single file is stellar, but opening a file in my CAD application is dismal.  I'm not really sure what's going on, but I suspect it's my lack of understanding of how the 2920 should be set up for LACP with teamed adapters.  Maybe I even need to understand VLANs, though I don't think that's necessary for my purposes.  But what do I know?

 

Bottom line is that I have:

A ProCurve 2920-48G switch, 2x 10GbE modules, 4x SFP+ cables

A DL380 G9 file server with the HP Mellanox 10GbE NIC (teamed).  Onboard NICs are disabled at this time; they will be used for Hyper-V once I figure out how to configure that correctly.

A Win7 workstation with the HP Mellanox 10GbE NIC (teamed).  Onboard NIC disabled.

A Win7 workstation with an Intel Pro/1000 PT quad-port GbE NIC (teamed).  Onboard NIC disabled.

 

Various other Android/iOS devices, printers, the gateway/router, WiFi access points, and Win 7/8.1 and Mac computers are also connected to the switch.  All of those seem fine; it's the teaming stuff that I'm struggling with.

 

Thanks in advance for any assistance you can offer.

 

Vince-Whirlwind
Honored Contributor

Re: New Procurve 2920, configure for GBe teaming and 10GBe teaming

It appears the switch happily passes traffic at (almost) 10Gb/s.

 

The question I'm asking is, why would the switch treat frames any differently if they are related to what you call "reads" or "writes"?

What's different from a Layer2 perspective about "reads" and "writes"?

A "read" means the server is sending frames at 10Gb/s to the switch, which switches them out to the workstation.

A "write" means the workstation is sending frames at 3Gb/s to the switch which switches them out to the server. 

 

All the switch sees are frames, all it does is read the frame header and send the frame on.

The switch doesn't decide how much data gets sent; the endpoints do, unless either the switch is configured with flow control/rate limiting, or you are seeing a great many drops in the switch interface stats.

 

If in doubt, join the workstation to the server with a crossover cable and see how you go.
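In the same spirit as the crossover-cable suggestion, which takes the switch out of the path, a memory-to-memory TCP test takes the disks out of it. A rough sketch with plain Python sockets over loopback (the loopback number won't match the real link; the point is only that the endpoints, not the switch, set the pace):

```python
import socket
import threading
import time

def measure_throughput(total_bytes=50_000_000, chunk=65536):
    """Push `total_bytes` through a local TCP connection and report MB/s.
    Memory-to-memory, so no disks are involved, same idea as the
    RAM-disk copies mentioned earlier in the thread."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    received = 0

    def sink():
        nonlocal received
        conn, _ = srv.accept()
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            received += len(data)
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    cli = socket.create_connection(srv.getsockname())
    payload = b"x" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return sent, received, sent / elapsed / 1e6  # bytes, bytes, MB/s
```

If this pair of endpoints shows the same read/write asymmetry over loopback, the switch is off the hook entirely.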

 

I'm not sure where LACP comes into this - what kind of performance do you get using a single 10Gb interface?

Best to start with single links, then when everything is working as it should, configure LACP.
Once you enable LACP, check performance again and ask: is there a difference? And if there isn't, should there be?

 

Don't make the (common) mistake of thinking that LACP will do any load-balancing for you. If you're talking about one server and one workstation, the frames being sent from the switch will always take the same physical path, no matter how many additional physical links you trunk together.
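The single-path behaviour described above comes from the hash a switch uses to pick a trunk member for each frame. The actual inputs and hash function are vendor-specific; the CRC32 below is only a stand-in to show why one server-to-workstation conversation always rides a single link:

```python
import zlib

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Toy stand-in for a switch's trunk hash: map a frame's address
    pair onto one member link.  Real switches hash MAC/IP/port fields
    with vendor-specific functions, but the principle is the same."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % num_links

# One server talking to one workstation: every frame hashes identically,
# so only one of the two trunked links ever carries that transfer.
server, workstation = "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"
first = pick_link(server, workstation, 2)
assert all(pick_link(server, workstation, 2) == first for _ in range(100))
```

This is also why adding links doesn't speed up a single flow: a trunk adds lanes, not speed per lane. It only helps when many different address pairs spread across the members.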