Array Setup and Networking

aherbert23
Trusted Contributor

Importance of Path Change Settings in VMware

With 1 Gb networks it is important to tune the per-path settings for maximum throughput. The default of 1000 IOs per path can cause micro-bursts of saturation and limit throughput. I've done a fair amount of testing and found that the best setting is to change paths based on the number of bytes sent per path rather than the number of IOs. The reasoning is that changing paths too often can be detrimental for small-block IO, while waiting 1000 IOs between changes lets large-block IO saturate a single 1 Gb link; setting the path change trigger to bytes optimizes for both cases.
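Note that this per-path tuning only applies to devices already using the Round Robin path selection policy (VMW_PSP_RR). As a quick sanity check, you can list a device's current policy and switch it to Round Robin if needed (a minimal sketch; eui.xxxx is a placeholder device ID):

esxcli storage nmp device list -d eui.xxxx
esxcli storage nmp device set -d eui.xxxx --psp=VMW_PSP_RR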

Here is a real world example from a demo I recently conducted. This was done with 4 1G interfaces on both the Nimble array and the ESX host connected through a Cisco 3750X stack.

SQLIO prior to path optimization:

Server | Tool  | Test Description                                           | IO/s  | MB/s | Avg. Latency
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec      | 12375 |   97 | 4ms
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec       | 14456 |  113 | 3ms
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec |  2130 |  133 | 29ms
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec  |  2147 |  134 | 29ms

SQLIO after path optimization:

Server | Tool  | Test Description                                           | IO/s  | MB/s | Avg. Latency
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec      | 26882 |  210 | 1ms
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec       | 28964 |  226 | 1ms
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec |  7524 |  470 | 8ms
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec  |  7474 |  467 | 8ms

Notice the large improvement not only in throughput but also in latency. The high latency in the first run was caused by saturation of the 1G links.

The optimization is done with the following command from the ESX 5.x console:

for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -B 262144 -t bytes;
done
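To confirm the new per-path limit took effect, the same device-selection pipeline can be fed to the get subcommand (a sketch, not part of the original change):

for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done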

This can also be done with a PowerShell script that is posted here:

Set VMware RoundRobin PSP through PowerCLI

pdavies34
Valued Contributor

Re: Importance of path change settings in VMware

Why did you choose "-B 262144" as the byte size? What other options/results did you test with? I assume this was the best overall balance between IOPS and throughput?

aherbert23
Trusted Contributor

Re: Importance of path change settings in VMware

I did try several other options. Setting it to 256K per path seemed to be the sweet spot.

mandersen81
Valued Contributor

Re: Importance of path change settings in VMware

This is great, Adam, greatly appreciated. I have seen this in a couple of installs now and will put it in my toolbox of changes for 1G installs.

dhamilton113
New Member

Re: Importance of path change settings in VMware

Really terrific and important post.

Thanks for this.

Not applicable

Re: Importance of path change settings in VMware

Can this command be run on an environment that already has VMs on the iSCSI LUNs without an interruption of service?

pdavies34
Valued Contributor

Re: Importance of path change settings in VMware

Yes, it takes effect immediately, and the way the script is written it only affects Nimble volumes.

It is also relatively simple to modify the script to affect only an individual volume if you wish.
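For example, a minimal sketch for a single volume (eui.xxxx is a placeholder for that volume's device ID; 262144 matches the byte limit used in the original script):

esxcli storage nmp psp roundrobin deviceconfig set -d eui.xxxx -B 262144 -t bytes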

Phil

aherbert23
Trusted Contributor

Re: Importance of path change settings in VMware

Yes, it is safe to run. No downtime needed.

wen35
Trusted Contributor

Re: Importance of path change settings in VMware

Have you tried iops=0? That essentially ignores the number of IOs per path before switching and relies on queue depth instead; essentially a poor man's least-queue-depth (LQD) on ESX! We are trying to do some testing in the tech-marketing lab to get some results.

jwang131
New Member

Re: Importance of path change settings in VMware

Yes, setting policy=iops with both iops=0 and bytes=0 may give better performance, since MPIO doesn't need to wait for 256K to be sent before switching paths.
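A minimal sketch of what that looks like for a single device (eui.xxxx is a placeholder; the second command also switches the active limit type to iops):

esxcli storage nmp psp roundrobin deviceconfig set -d eui.xxxx -t bytes -B 0;
esxcli storage nmp psp roundrobin deviceconfig set -d eui.xxxx -t iops -I 0;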

aherbert23
Trusted Contributor

Re: Importance of path change settings in VMware

I would be interested in seeing the results of the tests. When I tried using low IOPS per path numbers I saw small block random performance degrade. I did not try setting IOPS per path to 0. I didn't even know that would be a valid input!

Not applicable

Re: Importance of path change settings in VMware

Do these settings apply to and benefit 10Gb as well?

Thanks

-Craig

wen35
Trusted Contributor

Re: Importance of path change settings in VMware

Yes, definitely. Assuming you have dual 10G links bound to the iSCSI initiator, PSP_RR will leverage both paths without having to wait for a given path to reach X IOPS or X bytes before switching.

epedersen22
Occasional Advisor

Re: Importance of path change settings in VMware

Is this (or iops=0/bytes=0) going to end up in the Nimble VMware installation guide?

wen35
Trusted Contributor

Re: Importance of path change settings in VMware

It is making its way into the next edition of the vSphere on Nimble best practices guide; the same will go into the next edition of the VMware integration guide as well.

epedersen22
Occasional Advisor

Re: Importance of path change settings in VMware

Cool - thanks!

Not applicable

Re: Importance of path change settings in VMware

Made the change. My SQLIO read throughput on 10G went from ~470 MB/s to ~715 MB/s. We have a CS240G. NICE!

-Craig

wen35
Trusted Contributor

Re: Importance of path change settings in VMware

wow nice to hear Craig!

epedersen22
Occasional Advisor

Re: Importance of path change settings in VMware

I may be wrong, but when making the change to --iops=0 and --bytes=0, it looks like you have to set '--type' to 'iops'. I tried it using '--type=bytes' as written in the script above, but the IOPS limit didn't change.

Result when run with --type=bytes:

   Device: eui.xxx
   IOOperation Limit: 1000
   Limit Type: Bytes
   Use Active Unoptimized Paths: false
   Byte Limit: 0

After, when run with --type=iops:

   Device: eui.xxx
   IOOperation Limit: 0
   Limit Type: Bytes
   Use Active Unoptimized Paths: false
   Byte Limit: 0

From the help text:

  -t|--type=<str>
     Set the type of the Round Robin path switching that should be enabled for this device.
     Valid values for type are:
          bytes:   Set the trigger for path switching based on the number of bytes sent down a path.
          default: Set the trigger for path switching back to default values.
          iops:    Set the trigger for path switching based on the number of I/O operations on a path.

Cheers,

Eric

mallocarray12
Occasional Advisor

Re: Importance of path change settings in VMware

I just ran the command twice, once to set bytes and then once to set IOPS. Since there is a Limit Type, I'm not sure whether changing bytes to 0 matters if the Limit Type is set to Iops.

My notes from another post:

In the SSH console on ESXi 5.1, this command will loop through each device, set Bytes to 0 and IOPS to 0, and then display the current settings. For some reason, when listing disks, they show up twice: once with their regular ID and a second time with the ID ending in :1 (apparently the partition entry), and the settings can't be applied to those.

for i in `ls /vmfs/devices/disks/ | grep eui.` ; do echo $i ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0 ;esxcli storage nmp psp roundrobin deviceconfig get -d $i; done
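If the duplicate :1 entries get in the way, one possible tweak is to filter them out of the loop (a sketch, assuming only the partition entries contain a colon):

for i in `ls /vmfs/devices/disks/ | grep eui. | grep -v ':'`; do
  echo $i;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0;
  esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done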

If you want to set it back to the out-of-the-box defaults, this will reset Bytes to 10485760, IOPS to 1000, and Type to default:

for i in `ls /vmfs/devices/disks/ | grep eui.` ; do echo $i ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 10485760; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 1000 ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t default; esxcli storage nmp psp roundrobin deviceconfig get -d $i; done

mallocarray12
Occasional Advisor

Re: Importance of path change settings in VMware

I set both Bytes and IOPS to 0, with IOPS as the active Limit Type.

64 KB 100% reads with 5 workers and a 2 GB test file in IOmeter show an increase from 1649 IOPS and 102 MB/s up to 2644 IOPS and 171 MB/s on a single VMDK over 4x1 Gb links. Writes did not seem to improve in my case.

mallocarray12
Occasional Advisor

Re: Importance of path change settings in VMware

Adam's method of selecting the device ID is prettier than what I had. Thanks.

Set:

for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  echo $i;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0;
  esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done

If you want to set it back to the out-of-the-box defaults, this will reset Bytes to 10485760, IOPS to 1000, and Type to default:

for i in `ls /vmfs/devices/disks/ | grep eui.`; do
  echo $i;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 10485760;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 1000;
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t default;
  esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done

eysfilm136
Advisor

Re: Importance of path change settings in VMware

I just made the 0-byte changes on our CS220 over 1Gb and yikes...

Below are test results with SQLIO using 4 KB random I/O:

[attached image: nimble.jpg]

aherbert23
Trusted Contributor

Re: Importance of path change settings in VMware

Shawn, that's great to see those results!

I would suggest that you try a before-and-after with a large-block (64k) sequential workload. You will probably see an even more dramatic difference.

sam_marshall
New Member

Re: Importance of path change settings in VMware

We are looking to implement the IOPS change, and during our research we found the following thread:

Dave's Tech Resources: ESX iSCSI Round Robin MPIO (Multipath IO) and IOPS (IO per second)

It suggests using the BYTES setting instead of the IOPS setting, so that a path change does not happen until the amount of data sent down the path is closer to the Ethernet frame size.

 

We ran various tests (same setup as Adam's) and found that on hosts using a standard frame size our optimal settings were IOPS=0, BYTES=512; this gave the best overall read and write numbers. IOPS=0, BYTES=1400 also gave good numbers (slightly better write times than 512).

We also ran the same SQLIO tests using jumbo frames and could not get any performance increase with any combination of settings (IOPS=0/1000, BYTES=0/512/1400/8800). The default (IOPS=1000, BYTES=10485760) gave the best overall performance. The jumbo frame issue might be related to network congestion or our need to upgrade to a CS440 controller (hopefully ordering soon).

Is there a preferred BYTES setting or are we on the right track with either of those options?