Array Setup and Networking
Trusted Contributor

Importance of Path Change Settings in VMware

On 1G networks it is important to tune the per-path settings for maximum throughput. The default of 1,000 IOs per path can cause micro-bursts of saturation and limit throughput. I've done a fair amount of testing and found that the best approach is to switch paths based on the number of bytes sent per path rather than the number of IOs. The reasoning is that switching paths too often can be detrimental for small-block IO; a byte-based trigger optimizes for both small and large blocks.

Here is a real-world example from a demo I recently conducted, using four 1G interfaces on both the Nimble array and the ESX host, connected through a Cisco 3750X stack.

SQLIO prior to path optimization:

Server | Tool  | Test Description                                           | IO/s  | MB/s | Avg. Latency
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec      | 12375 | 97   | 4ms
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec       | 14456 | 113  | 3ms
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 2130  | 133  | 29ms
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec  | 2147  | 134  | 29ms

SQLIO after path optimization:

Server | Tool  | Test Description                                           | IO/s  | MB/s | Avg. Latency
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec      | 26882 | 210  | 1ms
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec       | 28964 | 226  | 1ms
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 7524  | 470  | 8ms
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec  | 7474  | 467  | 8ms

Notice the large improvement not only in throughput but also in latency. The high latency in the first run was caused by saturation of the 1G links.

The optimization is done with the following command from the ESX 5.x console:

for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 262144;
done

This can also be done with a PowerShell script that is posted here:

Set VMware RoundRobin PSP through PowerCLI
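To confirm the change took effect, the current round-robin settings can be read back with `deviceconfig get` using the same Nimble device-matching loop as above (a sketch; off an ESX host the loop simply matches nothing):

```shell
# Read back the round-robin path-change settings for every Nimble volume
# to verify the configured bytes/IOPS limits after the change.
for i in `esxcli storage nmp device list 2>/dev/null | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done
```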

29 Replies
Valued Contributor

Re: Importance of path change settings in VMware

Why did you choose "-B 262144" as the bytes size, what other options/results did you test with?  I assume this was the best overall balance between IOPS and Throughput?

Trusted Contributor

Re: Importance of path change settings in VMware

I did try several other options. Setting it to 256K per path seemed to be the sweet spot.

Valued Contributor

Re: Importance of path change settings in VMware

This is great, Adam Herbert, greatly appreciated. I have seen this in a couple of installs now, and will put this in my toolbox of changes for 1G installs.

New Member

Re: Importance of path change settings in VMware

Really terrific and important post.

Thanks for this.

Not applicable

Re: Importance of path change settings in VMware

Can this command be run on an environment that already has VMs on the iSCSI LUNs without an interruption of service?

Valued Contributor

Re: Importance of path change settings in VMware

Yes, it has an immediate effect and the way the script is written it only affects Nimble Volumes.

It is also relatively simple to modify the script to only affect an individual volume if you so wished.
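As a sketch of that single-volume variant: the `eui.` identifier below is a hypothetical placeholder (substitute your own from `esxcli storage nmp device list`), so the command is echoed for review rather than executed:

```shell
# Hypothetical example device ID -- replace with your volume's eui/naa
# identifier as shown by `esxcli storage nmp device list`.
DEVICE="eui.0123456789abcdef"
# Build the per-device command; run it directly on the ESX console.
CMD="esxcli storage nmp psp roundrobin deviceconfig set -d $DEVICE -t bytes -B 262144"
echo "$CMD"
```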

Phil

Trusted Contributor

Re: Importance of path change settings in VMware

Yes, it is safe to run. No downtime needed.

Trusted Contributor

Re: Importance of path change settings in VMware

Have you guys tried iops=0? This essentially ignores the number of IOPS per path before switching and relies on queue depth. Essentially a poor man's LQD on ESX! We are trying to do some testing in the tech-marketing lab to get some results.
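For anyone who wants to try it, the iops=0 variant would look like this (an untested sketch, using the same Nimble device-matching loop as the original post; off an ESX host the loop matches nothing):

```shell
# Switch paths after every IO (iops=0) instead of after 256K of data.
for i in `esxcli storage nmp device list 2>/dev/null | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0;
done
```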

New Member

Re: Importance of path change settings in VMware

Yes, setting policy=iops with both iops=0 and bytes=0 may give better performance, since MPIO doesn't need to wait for 256K of data before switching paths.