BladeSystem - General

Flex 10 transfer speeds

 
Advisor

Flex 10 transfer speeds

Hey Guys,

We have a brand new setup with a BladeSystem and I am having some problems I think with Virtual Connect.

Our Setup:
5 x BL490C G6
2 x Virtual Connect Flex 10
2 x ProCurve zl CX4 Modules
2 x Equallogic PS6000VX iSCSI SANs

We have 5 blades set up with ESX 4, and they connect to the Virtual Connect module with the two onboard 10Gb NICs. I have one CX4 cable connecting the Virtual Connect module to the CX4 module in our core 5400 switch.
In ESX all of the NICs show the correct speeds for the FlexNICs.
I have a NIC set up for iSCSI that I have given 3GB of bandwidth. When I test the disk speed of a VM I can only get 100MB/s, and if I test 2 VMs on the same host each only gets about half that, around 50MB/s.

What have I done wrong?

Thanks,

Jesse
18 REPLIES
Honored Contributor

Re: Flex 10 transfer speeds

What's the speed of the ESX virtual switch the VMs connect to? Also, remember that there is a lot of CPU overhead to this virtual switch.
Honored Contributor

Re: Flex 10 transfer speeds

Does the Equallogic appliance connect at 10Gb directly to the same ProCurve?

This sounds like there is a 1Gb link in the path between the 3Gb FlexNIC and the Equallogic SAN.
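For what it's worth, the raw numbers fit that theory: ~100 MB/s is almost exactly what a single saturated 1GbE hop delivers once header overhead is subtracted. A rough back-of-the-envelope check (the ~7% overhead figure is an assumption, not measured):

```python
# Rough usable payload throughput of a link, assuming standard-MTU iSCSI
# traffic with ~7% Ethernet/IP/TCP/iSCSI header overhead (an assumption).
def usable_mbytes_per_sec(link_gbits: float, overhead: float = 0.07) -> float:
    bits_per_sec = link_gbits * 1e9 * (1 - overhead)
    return bits_per_sec / 8 / 1e6

one_gig = usable_mbytes_per_sec(1.0)    # ~116 MB/s, close to the observed 100 MB/s
three_gig = usable_mbytes_per_sec(3.0)  # ~349 MB/s, what a 3Gb FlexNIC could carry

print(f"1 Gb link:    ~{one_gig:.0f} MB/s usable")
print(f"3 Gb FlexNIC: ~{three_gig:.0f} MB/s usable")
```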

Advisor

Re: Flex 10 transfer speeds

We have 2 Equallogic SANs, so we end up with 16 x 1Gb ports plugged into the same ProCurve switch, and 8 of the 1Gb ports are active.
Advisor

Re: Flex 10 transfer speeds

I have not seen anywhere in VMware to set the switch speed. Both adapters that I have assigned to the iSCSI vSwitch show 3000 Full.
Honored Contributor

Re: Flex 10 transfer speeds

So that sounds like the Equallogic SANs are the bottleneck. Even though you have 8 x 1Gb links active, only one of those will be used for each TCP flow...
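To illustrate the per-flow behaviour (this is a generic sketch, not ProCurve's actual hash algorithm): trunk/LACP implementations typically hash the flow tuple to pick the egress member link, so a single TCP connection is pinned to one 1Gb link no matter how many links are in the trunk.

```python
# Generic sketch of hash-based egress link selection (illustrative only;
# real switches use their own hash over MAC/IP/port fields).
import hashlib

LINKS = 8  # eight 1Gb member ports toward the Equallogic

def egress_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % LINKS

# The same iSCSI session hashes to the same link every time,
# so one TCP flow is capped at ~1 Gb/s.
a = egress_link("10.0.0.5", "10.0.0.50", 51000, 3260)
b = egress_link("10.0.0.5", "10.0.0.50", 51000, 3260)
assert a == b

# Distinct sessions (different source ports or IPs) can spread across links.
links = {egress_link("10.0.0.5", "10.0.0.50", p, 3260) for p in range(51000, 51032)}
print(f"32 distinct sessions used {len(links)} of {LINKS} links")
```

That is why seeing traffic on several NICs during a multi-session test doesn't contradict a single session being limited to one link's worth of bandwidth.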
Advisor

Re: Flex 10 transfer speeds

I thought so as well, but when I tested a data transfer from a VM on an ESX blade I saw traffic on all 8 of the NICs, not just one. Has anyone tested from a 10Gb host to a multi-port 1Gb iSCSI SAN target?
Honored Contributor

Re: Flex 10 transfer speeds

When you wrote 3GB did you actually mean gigabits rather than bytes?

Almost no trunking/bonding/aggregation solution will spread the segments of a single TCP connection across multiple links. Linux bonding does support such "round robin" packet scheduling, but it introduces packet reordering, and that can create a two-steps-forward, three-steps-back situation: out-of-order traffic causes immediate ACKs from TCP, and if enough segments arrive out of order it will trigger a spurious fast retransmit, which consumes link bandwidth and shrinks the TCP congestion window.
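A toy simulation of that reordering effect (the latency and pacing numbers are made up purely for illustration): alternating segments over two links with slightly different latencies lets later segments overtake earlier ones at the receiver.

```python
# Toy model: segments sent round-robin over two links with different
# one-way latencies; the receiver sees them in arrival-time order.
# All timing values here are invented for illustration.
def arrival_order(n_segments, latencies_ms=(0.5, 0.8)):
    arrivals = []
    for seq in range(n_segments):
        link = seq % len(latencies_ms)   # round-robin link choice
        send_time_ms = seq * 0.1         # one segment every 0.1 ms
        arrivals.append((send_time_ms + latencies_ms[link], seq))
    arrivals.sort()                      # order seen at the receiver
    return [seq for _, seq in arrivals]

order = arrival_order(10)
reorderings = sum(1 for a, b in zip(order, order[1:]) if b < a)
print("receive order:", order)
print(f"{reorderings} segments were overtaken in flight")
```

Every one of those inversions generates duplicate ACKs at the receiver, which is what eventually trips TCP's fast-retransmit logic.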

These 8x 1Gig links connected from the Equallogic to the ProCurve switch - exactly which model of ProCurve switch and which rev of firmware? Any particular trunking settings used?

Also, in your previous testing, was it all from one "client" IP address to the "server" IP address of the Equallogic or were multiple IP address pairs involved?
there is no rest for the wicked yet the virtuous have no pillows
Advisor

Re: Flex 10 transfer speeds

Yes sorry that should have been 3 Gb not GB.

We are using an HP ProCurve 5412 switch with firmware K14.47, and per the Equallogic documentation no special trunking was done. Each 1Gb interface on the array gets its own IP, and then you define a group IP that you point your clients at, so the client only ever knows the group IP.

So the basic testing so far has been:
1.) Physical Windows server outside the blade environment, with two 1Gb NICs and 2 IPs: I was able to get about 200MB/s from the array.
2.) Physical Windows server on a blade, with two FlexNICs at 3Gb of bandwidth each and 2 IPs: I was only able to get 100MB/s from the array.
3.) ESX server with two FlexNICs at 3Gb of bandwidth and 2 IPs (set up per the MPIO for VMware document): I was only able to get 100MB/s with 1 VM on 1 host, and about 50MB/s each with 2 VMs on 1 host.
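Reading those three results against per-path line rates (assuming roughly 117 MB/s usable per saturated 1GbE-class path; the ~6% overhead figure is an assumption): the physical server's 200 MB/s looks like two active paths, while both blade tests sit at about one path's worth, consistent with only one effective path being driven from the blades.

```python
# Compare each observed result to multiples of a saturated 1GbE-class path.
# The ~6% protocol-overhead figure is assumed, not measured.
GBE_MBPS = 1e9 * 0.94 / 8 / 1e6   # ~117 MB/s usable per 1Gb path

results = {
    "physical server, 2 x 1Gb NICs": 200,
    "blade, 2 x 3Gb FlexNICs":       100,
    "ESX blade, 1 VM":               100,
    "ESX blade, 2 VMs (combined)":   100,  # 2 x ~50 MB/s
}

for test, mbps in results.items():
    print(f"{test}: {mbps} MB/s ~= {mbps / GBE_MBPS:.1f} saturated 1GbE path(s)")
```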
Honored Contributor

Re: Flex 10 transfer speeds

Thinking about it strictly from an experimental-design standpoint, I'd wonder what performance you'd get if you didn't define FlexNICs at all and let the port(s) run at the full 10Gb through the VC module and the switch.
there is no rest for the wicked yet the virtuous have no pillows