03-12-2013 08:23 AM
Importance of Path Change Settings in VMware
With 1 Gb networks it is important to tune the per-path settings for maximum throughput. The default of 1000 IOs per path can cause micro-bursts of saturation and limit throughput. I've done a fair amount of testing and found that the best approach is to change paths based on the number of bytes sent per path rather than the number of IOs. The reasoning is that changing paths too often can be detrimental for small-block IO; setting the path change trigger to bytes optimizes for both small and large blocks.
Here is a real-world example from a demo I recently conducted. This was done with four 1G interfaces on both the Nimble array and the ESX host, connected through a Cisco 3750X stack.
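To see why a byte-based trigger adapts to both small and large blocks, here is a quick arithmetic sketch. The 262144-byte (256K) limit is the value discussed in this thread; the IO sizes match the SQLIO tests below. This is illustrative arithmetic only, not a measurement:

```shell
# Path-switch cadence: default 1000-IO trigger vs. a 262144-byte trigger.
bytes_limit=262144
for io in 8192 65536; do
  per_path=$((1000 * io / 1048576))   # MiB sent down one path at the default of 1000 IOs
  switches=$((bytes_limit / io))      # IOs sent per path with the byte trigger
  echo "${io}-byte IO: default sends ~${per_path} MiB per path; byte trigger switches every ${switches} IOs"
done
```

With 64K IO the default trigger pushes ~62 MiB down a single 1G link before switching, which is exactly the micro-burst saturation described above; the byte trigger switches every 4 IOs instead, while still keeping small 8K IOs on one path for 32 IOs at a time.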
SQLIO prior to path optimization:
Server | Tool | Test Description | IO/s | MB/s | Avg. Latency |
---|---|---|---|---|---|
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec | 12375 | 97 | 4ms |
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec | 14456 | 113 | 3ms |
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 2130 | 133 | 29ms |
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec | 2147 | 134 | 29ms |
SQLIO after path optimization:
Server | Tool | Test Description | IO/s | MB/s | Avg. Latency |
---|---|---|---|---|---|
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec | 26882 | 210 | 1ms |
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec | 28964 | 226 | 1ms |
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 7524 | 470 | 8ms |
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec | 7474 | 467 | 8ms |
Notice not only the large improvement in throughput but also the reduction in latency. The high latency in the first test was due to saturation of the 1G links.
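As a sanity check on that claim, here is the back-of-the-envelope line rate of the links (decimal MB/s, ignoring TCP and iSCSI overhead):

```shell
# Raw line rate of 1GbE, before protocol overhead.
one_link=$((1000000000 / 8 / 1000000))   # MB/s for a single 1Gb link
four_links=$((4 * one_link))             # aggregate for the four interfaces in this demo
echo "one link: ${one_link} MB/s; four links: ${four_links} MB/s"
```

The ~133 MB/s sequential results before tuning sit right at a single link's line rate, while the ~470 MB/s after tuning approaches the four-link aggregate, consistent with the saturation explanation.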
The optimization is done with the following command from the ESX 5.x console:
for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
esxcli storage nmp psp roundrobin deviceconfig set -d $i -B 262144 -t bytes;
done
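For reference, the awk/sed pipeline in that loop is just pulling the parenthesized device ID out of the device list. A minimal sketch against a sample line; the line format mirrors `esxcli storage nmp device list` output, but the eui value here is a placeholder, not copied from a real array:

```shell
# Illustrative "Device Display Name" line; the eui is a placeholder.
line='   Device Display Name: Nimble iSCSI Disk (eui.0123456789abcdef)'
# Field 7 is the parenthesized device ID; sed strips the parentheses.
echo "$line" | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'
```

Matching on "Nimble iSCSI Disk" is what restricts the change to Nimble volumes and leaves any other storage untouched.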
This can also be done with a PowerShell script that is posted here:
03-12-2013 10:44 AM
Re: Importance of path change settings in VMware
Why did you choose "-B 262144" as the bytes size, what other options/results did you test with? I assume this was the best overall balance between IOPS and Throughput?
03-12-2013 03:45 PM
Re: Importance of path change settings in VMware
I did try several other options. Setting it to 256K per path seemed to be the sweet spot.
03-12-2013 10:00 PM
Re: Importance of path change settings in VMware
This is great, Adam Herbert, greatly appreciated. I have seen this in a couple of installs now, and will put this in my toolbox of changes for 1G installs.
03-13-2013 08:02 PM
Re: Importance of path change settings in VMware
Really terrific and important post. Thanks for this, Adam Herbert.

03-14-2013 09:45 AM
Re: Importance of path change settings in VMware
Can this command be run on an environment that already has VMs on the iSCSI LUNs without an interruption of service?
03-14-2013 09:50 AM
Re: Importance of path change settings in VMware
Yes, it takes effect immediately, and the way the script is written it only affects Nimble volumes.
It is also relatively simple to modify the script to affect only an individual volume if you wish.
Phil
03-14-2013 09:50 AM
Re: Importance of path change settings in VMware
Yes, it is safe to run. No downtime needed.
03-14-2013 11:02 AM
Re: Importance of path change settings in VMware
Have you guys tried iops=0? This essentially ignores the number of IOPS per path before switching and relies on queue depth instead. Essentially a poor man's least-queue-depth (LQD) on ESX! We are trying to do some testing in the tech-marketing lab to get some results.
03-14-2013 01:40 PM
Re: Importance of path change settings in VMware
Yes, setting the policy to iops with both iops=0 and bytes=0 may give better performance, since MPIO doesn't need to wait for 256K to be sent before switching paths.
03-14-2013 07:18 PM
Re: Importance of path change settings in VMware
I would be interested in seeing the results of those tests. When I tried low IOPS-per-path values I saw small-block random performance degrade. I did not try setting IOPS per path to 0; I didn't even know that was a valid input!

04-10-2013 09:21 AM
Re: Importance of path change settings in VMware
Do these settings also apply to, and benefit, 10Gb?
Thanks
-Craig
04-10-2013 10:04 AM
Re: Importance of path change settings in VMware
Yes, definitely. Assuming you have dual 10G NICs bound to the iSCSI initiator, PSP_RR will leverage both paths without having to wait for a given path to reach X IOPS or X bytes before switching.
04-18-2013 10:06 AM
Re: Importance of path change settings in VMware
Is this (or iops=0/bytes=0) going to end up in the Nimble VMware installation guide?
04-18-2013 10:55 AM
Re: Importance of path change settings in VMware
It is making its way into the next edition of the vSphere on Nimble best practices guide; the same will go into the next edition of the VMware integration guide as well.
04-18-2013 03:09 PM
Re: Importance of path change settings in VMware
Cool - thanks!

04-20-2013 02:32 PM
Re: Importance of path change settings in VMware
Made the change. My SQLIO read test throughput on 10G went from ~470 MB/s to ~715 MB/s on a CS240G. NICE!
-Craig
04-22-2013 10:13 AM
Re: Importance of path change settings in VMware
Wow, nice to hear, Craig!
04-25-2013 04:40 PM
Re: Importance of path change settings in VMware
I may be wrong, but when making the change to --iops=0 & --bytes=0, it looks like you have to set '--type' to 'iops'. I tried it using '--type=bytes' as written in the script above, but the iops limit didn't change.
Result when run with --type=bytes:
Device: eui.xxx
IOOperation Limit: 1000
Limit Type: Bytes
Use Active Unoptimized Paths: false
Byte Limit: 0
After, when run with --type=iops:
Device: eui.xxx
IOOperation Limit: 0
Limit Type: Bytes
Use Active Unoptimized Paths: false
Byte Limit: 0
From the help text:
-t|--type=<str>
Set the type of the Round Robin path switching that should be enabled for this device.
Valid values for type are:
bytes: Set the trigger for path switching based on the number of bytes sent down a path.
default: Set the trigger for path switching back to default values.
iops: Set the trigger for path switching based on the number of I/O operations on a path.
Cheers,
Eric
05-03-2013 09:25 AM
Re: Importance of path change settings in VMware
I just ran the command twice, once to set bytes and then once to set IOPS. Since there is a Limit Type, I'm not sure it matters whether you change bytes to 0 if the Limit Type is set to Iops.
My notes from another post:
In the SSH console on ESXi 5.1, this command will loop through each datastore, set Bytes to 0 and IOPS to 0, and then display the current settings. For some reason, when listing disks they show up twice, once with their regular ID and a second time with the ID ending in :1; the settings can't be applied to that second entry.
for i in `ls /vmfs/devices/disks/ | grep eui.` ; do
echo $i ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0 ;
esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done
If you want to set it back to the out-of-box defaults, this will reset Bytes to 10485760, IOPS to 1000, and Type to default:
for i in `ls /vmfs/devices/disks/ | grep eui.` ; do
echo $i ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 10485760;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 1000 ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t default;
esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done
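The duplicate ":1" entries are the partition nodes that appear alongside the whole-disk devices under /vmfs/devices/disks/, so they can simply be skipped before calling esxcli. A small sketch with placeholder device names (not taken from a real host):

```shell
# Placeholder list shaped like /vmfs/devices/disks/ entries on ESXi.
devs="eui.1111 eui.1111:1 eui.2222 eui.2222:1"
for d in $devs; do
  case $d in
    *:*) continue ;;    # skip partition entries such as eui.xxx:1
  esac
  echo "$d"             # only whole-disk IDs reach this point
done
```

Filtering this way avoids the "settings can't be applied" errors on the :1 entries while still touching every real device.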
05-03-2013 09:31 AM
Re: Importance of path change settings in VMware
I set both Bytes and IOPS to 0, with IOPS as the active Limit Type.
64-byte 100% reads with 5 workers and a 2 GB test file in IOmeter show an increase from 1649 IOPS and 102 MB/s up to 2644 IOPS and 171 MB/s on a single VMDK over 1x4 GB links. Writes did not seem to improve in my case.
05-03-2013 09:42 AM
Re: Importance of path change settings in VMware
Adam's method of selecting the device ID is prettier than what I had. Thanks.
Set:
for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
echo $i ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0 ;
esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done
If you want to set it back to the out-of-box defaults, this will reset Bytes to 10485760, IOPS to 1000, and Type to default:
for i in `ls /vmfs/devices/disks/ | grep eui.` ; do
echo $i ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 10485760;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 1000 ;
esxcli storage nmp psp roundrobin deviceconfig set -d $i -t default;
esxcli storage nmp psp roundrobin deviceconfig get -d $i;
done
05-03-2013 12:34 PM
Re: Importance of path change settings in VMware
I just made the 0-byte changes on our CS220 over 1Gb and yikes....
Below is testing with SQLIO using 4 KB random IO.
05-03-2013 04:17 PM
Re: Importance of path change settings in VMware
Shawn, that's great to see those results!
I would suggest that you try a before-and-after with a large-block (64k) sequential workload. You will probably see an even more dramatic difference.
08-21-2013 09:08 AM
Re: Importance of path change settings in VMware
We are looking to implement the IOPS change and during research we found the following thread:
Dave's Tech Resources: ESX iSCSI Round Robin MPIO (Multipath IO) and IOPS (IO per second)
It suggests using the BYTES setting instead of the IOPS setting, so that a path change does not happen until the amount sent is closer to the Ethernet frame size.
We ran various tests (same setup as Adam's) and found that on hosts using a standard frame size our optimal settings were IOPS=0 BYTES=512; this gave the best overall read and write numbers. IOPS=0 BYTES=1400 also gave good numbers (slightly better write times than 512).
We also ran the same SQLIO tests using jumbo frames and could not get any performance increase with any combination of settings (IOPS=0/1000, BYTES=0/512/1400/8800). The default (IOPS=1000, BYTES=10485760) gave the best overall performance. The jumbo frame issue might be related to network congestion, or to our need to upgrade to a CS440 controller (hopefully ordering soon).
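To put those BYTES values in frame terms, here is a rough sketch. It assumes a standard 1500-byte MTU and approximately 40 bytes of IP/TCP headers per frame, and ignores iSCSI framing, so the numbers are order-of-magnitude only:

```shell
mtu=1500
payload=$((mtu - 40))                 # approximate TCP payload per standard frame
for bytes in 512 1400 262144; do
  echo "BYTES=${bytes}: ~$((bytes / payload)) full frames per path switch"
done
```

Both 512 and 1400 switch paths within a single standard frame, while the 256K value discussed earlier in the thread spans on the order of 179 frames per switch, which may explain why such small BYTES values behave so differently.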
Is there a preferred BYTES setting or are we on the right track with either of those options?