BladeSystem - General

Flex 10 transfer speeds

Izula
Advisor

Flex 10 transfer speeds

Hey Guys,

We have a brand new setup with a BladeSystem and I am having some problems that I think are with Virtual Connect.

Our Setup:
5 x BL490C G6
2 x Virtual Connect Flex 10
2 x ProCurve zl CX4 Modules
2 x Equallogic PS6000VX iSCSI SAN

We have 5 blades set up with ESX 4 and they connect to the Virtual Connect module with the 2 10GB NICs on board. I have one CX4 cable connected to the Virtual Connect module going to the CX4 module in our core 5400 switch.
In ESX all of the NICs are showing the correct speeds for the FlexNICs.
I have a NIC set up for iSCSI that I have given 3GB of bandwidth. When I test the disk speed of the VM I can only get 100MB/s, and if I test 2 VMs from the same host it only gives half the speed, about 50MB/s.

What have I done wrong?

Thanks,

Jesse
18 REPLIES
David Claypool
Honored Contributor

Re: Flex 10 transfer speeds

What's the speed of the ESX virtual switch the VMs connect to? Also, remember that there is a lot of CPU overhead to this virtual switch.
HEM_2
Honored Contributor

Re: Flex 10 transfer speeds

Does the Equallogic appliance connect at 10Gb directly to the same Procurve?

This sounds like there is a 1Gb link in the path between the 3Gb FlexNIC and the Equallogic SAN.
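
Quick back-of-the-envelope in Python on why a flat 100MB/s smells like a single saturated GigE hop (rough numbers only, treat it as a sketch):

line_rate_MBps = 1_000_000_000 / 8 / 1_000_000     # 125 MB/s of raw 1GbE line rate
overhead = 0.08                                     # rough guess for Ethernet/IP/TCP/iSCSI headers
print(line_rate_MBps * (1 - overhead))              # ~115 MB/s - right in the ballpark of the observed 100MB/s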

Izula
Advisor

Re: Flex 10 transfer speeds

We have 2 Equallogic SANs, so we end up with 16 x 1Gb ports plugged into the same ProCurve switch, and 8 of the 1Gb ports are active.
Izula
Advisor

Re: Flex 10 transfer speeds

I have not seen any place in VMware to set the switch speed. I see both adapters that I have assigned to the iSCSI switch are showing 3000 MB Full.
HEM_2
Honored Contributor

Re: Flex 10 transfer speeds

So that sounds like the Equallogic SANs are the bottleneck. Even though you have 8 x 1Gb links active, only one of those will be used for each TCP flow...
Izula
Advisor

Re: Flex 10 transfer speeds

I thought that as well, but when I tested data transfer from a VM on a blade with ESX I see traffic on all 8 of the NICs, not just one. Has anyone tested from a 10Gb host to a multi 1Gb port iSCSI SAN target?
rick jones
Honored Contributor

Re: Flex 10 transfer speeds

When you wrote 3GB did you actually mean gigabits rather than bytes?

Almost no trunking/bonding/aggregation solution will spread the segments of a single TCP connection across multiple links. Linux bonding has support for such "round robin" packet scheduling, but it introduces packet reordering, and that can create a two steps forward, three steps back situation - out of order traffic causes immediate ACKs from TCP, and if enough segments arrive out of order, it will trigger a spurious fast retransmit, which consumes link bandwidth and suppresses the TCP congestion window.
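
If you want to see the effect in the abstract, here is a toy Python sketch (pure simulation with made-up delays - nothing to do with the actual bonding driver) of how spraying one connection's segments round-robin over two links with different delays piles up duplicate ACKs; three dup ACKs for the same segment is the fast-retransmit trigger:

import heapq

# Toy model: segments of ONE TCP connection sprayed round-robin over two
# links with different (hypothetical) one-way delays, in milliseconds.
delays = [1.0, 6.0]
arrivals = []
for seq in range(10):                     # ten segments, one sent every 0.5 ms
    link = seq % 2
    heapq.heappush(arrivals, (seq * 0.5 + delays[link], seq))

expected, buffered, dup_acks = 0, set(), 0
while arrivals:
    _, seq = heapq.heappop(arrivals)
    if seq == expected:
        expected += 1
        while expected in buffered:       # drain segments that arrived early
            buffered.discard(expected)
            expected += 1
    else:
        buffered.add(seq)
        dup_acks += 1                     # out-of-order arrival -> immediate duplicate ACK
        print("dup ACK asking again for segment", expected)

print("total duplicate ACKs:", dup_acks)
# In this run segment 1 draws four duplicate ACKs before it shows up - past
# the three-dup-ACK threshold - so a real sender would fast-retransmit a
# segment that was never lost and cut its congestion window for nothing.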

These 8x 1Gig links connected from the Equallogic to the ProCurve switch - exactly which model of ProCurve switch and which rev of firmware? Any particular trunking settings used?

Also, in your previous testing, was it all from one "client" IP address to the "server" IP address of the Equallogic or were multiple IP address pairs involved?
there is no rest for the wicked yet the virtuous have no pillows
Izula
Advisor

Re: Flex 10 transfer speeds

Yes sorry that should have been 3 Gb not GB.

We are using an HP ProCurve 5412 switch with firmware K14.47. As per the Equallogic document, no special trunking was done. Each 1Gb interface on the array gets an IP, and then you define a pool IP that you point your client at, so your client only knows the group IP.

So the basic testing so far has been:
1.) Physical Windows server outside the blade environment with 2 x 1Gb NICs in it and 2 IPs: was able to get about 200MB/s on the array.
2.) Physical Windows server on a blade set up with 2 FlexNICs with 3Gb bandwidth each and 2 IPs: I was only able to get 100MB/s on the array.
3.) ESX server set up with 2 FlexNICs with 3Gb bandwidth and 2 IPs (set up as per the MPIO for VMware document): I was only able to get 100MB/s when running 1 VM on 1 host and about 50MB/s with 2 VMs on 1 host.
rick jones
Honored Contributor

Re: Flex 10 transfer speeds

Thinking about it strictly from an experimental design standpoint, I'd wonder what performance you'd get if you didn't actually define FlexNICs and just let the port(s) run at the full 10Gbit through the VC module and the switch.
there is no rest for the wicked yet the virtuous have no pillows
Izula
Advisor

Re: Flex 10 transfer speeds

I thought about that today and I am trying it now, but it looks to give about the same results.
rick jones
Honored Contributor

Re: Flex 10 transfer speeds

Is that bare iron talking to the Equallogic or a guest?

Some really pedantic questions, but the topology goes 10G from the chassis to the 5400, and 8 x 1G links directly from the same 5400 to the Equallogic device, right? And there is not even the physical possibility that some other uplink besides the port with the CX4 running at 10G is in use to the 5400?
there is no rest for the wicked yet the virtuous have no pillows
Izula
Advisor

Re: Flex 10 transfer speeds

There should be nothing else on the CX4 link; we just installed them for this project and have no other CX4 devices.

I have tested both a bare-iron computer running Windows and a blade with ESX running Windows, and both gave about the same results.

rick jones
Honored Contributor

Re: Flex 10 transfer speeds

Are there no other uplinks from the Flex-10 module at all?

Do the port stats on the 5400 for its CX4 port show traffic when you are running the tests?
there is no rest for the wicked yet the virtuous have no pillows
Izula
Advisor

Re: Flex 10 transfer speeds

Correct on both counts: the only connection out from the Virtual Connect is one CX4 cable per Virtual Connect module.

Yes, the CX4 ports show traffic when doing the tests.

rick jones
Honored Contributor

Re: Flex 10 transfer speeds

And how about the 1G ports to the Equallogics (four active per unit, yes?) - how many of them are showing traffic when you test from the blade(s)? Check both inbound and outbound if you can.

Other pseudo-random questions - what are the specific IP and MAC addresses involved? I don't know the details of the packet scheduling algorithm(s) in the 5400, but someone else watching might, and knowing the addressing involved would be helpful.
there is no rest for the wicked yet the virtuous have no pillows
HEM_2
Honored Contributor

Re: Flex 10 transfer speeds

Page 12-37 here:

http://cdn.procurve.com/training/Manuals/3500-5400-6200-6600-8200-MCG-Sept09-12-PortTrunk.pdf

talks about the traffic distribution algorithm for trunks on the ProCurve 5400.

Basically, for IP traffic it uses the last 5 bits of the source and destination IP address to determine which link to use.
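
Rough Python sketch of what that implies - the manual only says the last 5 bits of each address are used, so how the two values get combined (XOR, then modulo the number of links) is my guess, and the addresses below are made up:

def pick_link(src_ip, dst_ip, num_links):
    # low 5 bits of the last octet = last 5 bits of the 32-bit address
    last5 = lambda ip: int(ip.split(".")[-1]) & 0x1F
    return (last5(src_ip) ^ last5(dst_ip)) % num_links

# Every packet of a single iSCSI session carries the same address pair,
# so the whole flow stays on one 1Gb member no matter how many links exist:
print(pick_link("10.0.10.21", "10.0.10.101", 8))   # same link every time

# Traffic only spreads when the address pairs differ, e.g. several
# initiator IPs talking to several array port IPs:
for host in ("10.0.10.21", "10.0.10.22"):
    for target in ("10.0.10.101", "10.0.10.102"):
        print(host, "->", target, "uses link", pick_link(host, target, 8))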
Izula
Advisor

Re: Flex 10 transfer speeds

Yes, the Equallogic has 4 active ports per unit, and we have 2 units, so we have 8 active ports plugged into 1Gb ports on the switch.

When I run tests on the storage I see traffic on all of them.
DaveNCPA
Occasional Visitor

Re: Flex 10 transfer speeds

Hey guys, please keep in mind that on the 5400 series and 8200 series switches, even though they have cards with 4 x 10GbE ports in them, you are only sharing 28.8Gb per slot.

Ports 1 and 4 on a module share a 14.4Gb channel and ports 2 and 3 share another, so in practice your 10GbE connections in ports 1 and 4 are only 7.2Gb apiece at max. If you want the full 10GbE uplink you have to leave port 4 empty.

http://www.hp.com/rnd/support/faqs/8212zl.htm#questionACC4

scroll down to about 2/3rds of the page
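
Quick sanity check on those figures in Python (just replaying the arithmetic from the FAQ above, nothing measured):

slot_fabric_gbps = 28.8                   # per-slot fabric bandwidth quoted in the FAQ
channel_gbps = slot_fabric_gbps / 2       # 14.4Gb channel shared by ports 1 & 4, another by 2 & 3
print(channel_gbps / 2)                   # 7.2Gb per port if both ports of a pair are populated
# Leave the other port of the pair empty and a single 10GbE uplink
# can run at its full 10Gb (the 14.4Gb channel no longer splits).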