MSA Storage

P2000 MPIO Poor Read Performance but Good Write Performance

 
Travis Langhals
Occasional Visitor

P2000 MPIO Poor Read Performance but Good Write Performance

I am testing a P2000 using two 1GbE links to the active controller, but I'm only seeing increased write performance with MPIO. I would expect similar read and write scaling, since the Ethernet links are the limiting factor. I have tested with IOMeter, ATTO, and SQLIO using at least 4 threads, with the same results. My setup and results are below. Any ideas why I'm not seeing the same increase in read performance as I do with write performance?

SAN Storage: 6 x 300GB 10K in RAID 10
Switch: 5406zl (tested all combinations of jumbo frames and flow control, with matching settings on the SAN and NICs)
Server NIC: NC382T (tried with and without iSCSI acceleration enabled)
Server OS: Windows Server 2008 R2
MPIO: Round Robin with Subset (plain Round Robin is not available)

Single 1GbE:
Max Read: ~130MBps
Max Write: ~125MBps

Dual 1GbE:
Max Read: ~130MBps
Max Write: ~240MBps
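In case it helps anyone reproduce this: on Server 2008 R2 the policy Windows actually applied can be double-checked with the built-in mpclaim tool. A quick sketch (the disk number 1 below is a placeholder; check the output of the first command for the real number):

```shell
# List all MPIO-claimed disks and their current load-balance policies
mpclaim -s -d

# Show per-path detail for MPIO disk 1 (example disk number)
mpclaim -s -d 1

# Set MPIO disk 1 to Round Robin With Subset (policy code 3),
# which is the policy the P2000 exposes here
mpclaim -l -d 1 3
```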

5 REPLIES
ITK-Andreas
Regular Visitor

Re: P2000 MPIO Poor Read Performance but Good Write Performance

Hi, I'll second this. I'm seeing similar behavior in VMware ESXi 4.1 with dual active paths. I haven't measured writes, but reads seem limited to around 100-120MB/s. I am using an 8-disk RAID 10 vDisk.


I am also using NC382T cards (dual cards with dual ports), with the active paths on different cards. The servers are DL380 G6s and DL385 G5s.


Any solution?

 

 

Bart_Heungens
Honored Contributor

Re: P2000 MPIO Poor Read Performance but Good Write Performance

Hi,

 

This is how TCP connections work; it has nothing to do with IOmeter or the P2000...

 

Once you set up a connection, from that moment on all traffic for that session will pass through 1 NIC...

 

If you want to test the maximum throughput, start multiple IOmeter sessions on multiple servers at the same time... Those are multiple sessions, which will be spread across the multiple interfaces of the P2000...

 

 

Kr,

Bart

--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
ITK-Andreas
Regular Visitor

Re: P2000 MPIO Poor Read Performance but Good Write Performance

Hmm, I was told that at least ESXi is supposed to be able to spread the I/O across the interfaces. Whenever I watch the ESXi host's NIC traffic rates, they do spread across both interfaces (around 440Mb/s each, versus 880Mb/s when only a single interface is used).

 

Round robin is set to switch to the other NIC after 1 I/O, testing with 64k reads/writes.

Bart_Heungens
Honored Contributor

Re: P2000 MPIO Poor Read Performance but Good Write Performance

This is correct. ESXi spreads the traffic across multiple NICs...

However: if you have only 1 VM with 1 interface running a ping, that is 1 session, and it will pass over 1 NIC... It is only with multiple VMs and multiple sessions to several clients that all the sessions will be spread by means of round robin...

 

Round Robin is not the same as Link Aggregation...

 

For round robin you don't have to configure anything on the switch; for link aggregation you do...

 

These are 2 different things...
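To illustrate the difference: round robin needs nothing on the switch, whereas link aggregation on a 5406zl would need an explicit trunk. A sketch of the ProCurve CLI (the port IDs A1-A2 are placeholders; adjust to your cabling):

```shell
# ProCurve 5406zl: aggregate example ports A1-A2 into LACP trunk trk1
trunk A1-A2 trk1 lacp

# Verify the trunk was created
show trunks
```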

--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
ITK-Andreas
Regular Visitor

Re: P2000 MPIO Poor Read Performance but Good Write Performance

Yes, I understand that, but this is benchmarking against an iSCSI datastore.


According to the best practices for the P2000 and ESXi, we are supposed to get increased performance when path switching on every I/O.

 

Best practice for changing the default PSP option
As a best practice, change the default PSP option for VMW_SATP_ALUA to VMW_PSP_RR for P2000 G3 SAN environments. Secondly, for optimal default system performance with the P2000 G3, configure the round robin load balancing selection to IOPS with a value of 1. This has to be done for every LUN using the command:

esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device naa.xxxxxxxxx
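Doing this per LUN gets tedious, so on ESX 4.x a loop over the device list can apply it in one go. A sketch (the `naa.600` prefix is an example pattern; it should match your P2000 LUN identifiers):

```shell
# Apply IOPS=1 round robin to every device whose ID starts with naa.600
# (adjust the grep pattern to match your P2000 LUN identifiers)
for dev in $(esxcli nmp device list | grep -o '^naa\.600[0-9a-f]*'); do
  esxcli nmp roundrobin setconfig --type "iops" --iops 1 --device $dev
done
```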

 

Also, the DSS V6 has a how-to for "fixing the iSCSI 1G limit":

http://www.open-e.com/solutions/open-e-dss-v6-mpio-vmware-esx-40/

 

There is also a thread on the VMware forum about this, and the solution there was the 1-IOPS round robin setting:

http://communities.vmware.com/message/1281730

 

I am mostly curious whether this is somehow a speed/performance limit of the P2000 G3 box itself. I feel kind of fooled to have bought 4 x iSCSI SPs if this is the performance we should expect.