03-12-2013 08:23 AM
Importance of Path Change Settings in VMware
With 1G networks it is important to tune the per-path settings for maximum throughput. The Round Robin default of 1,000 IOs per path can cause micro-bursts of saturation that limit throughput. I've done a fair amount of testing and found that the best approach is to change paths based on the number of bytes sent per path. The reasoning is that changing paths too often can be detrimental for small-block IO, while an IOPS-based trigger lets large-block IO saturate a single link before switching; setting the path change to bytes optimizes for both.
Here is a real-world example from a demo I recently conducted. This was done with four 1G interfaces on both the Nimble array and the ESX host, connected through a Cisco 3750X stack.
SQLIO prior to path optimization:
Server | Tool | Test Description | IO/s | MB/s | Avg. Latency |
---|---|---|---|---|---|
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec | 12375 | 97 | 4ms |
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec | 14456 | 113 | 3ms |
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 2130 | 133 | 29ms |
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec | 2147 | 134 | 29ms |
SQLIO after path optimization:
Server | Tool | Test Description | IO/s | MB/s | Avg. Latency |
---|---|---|---|---|---|
SQL-06 | SQLIO | Random 8k Writes, 8 threads with 8 qdepth for 120 sec | 26882 | 210 | 1ms |
SQL-06 | SQLIO | Random 8k Reads, 8 threads with 8 qdepth for 120 sec | 28964 | 226 | 1ms |
SQL-06 | SQLIO | Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec | 7524 | 470 | 8ms |
SQL-06 | SQLIO | Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec | 7474 | 467 | 8ms |
Notice the large improvement not only in throughput but also in latency. The high latency in the first run was due to saturation of the 1G links.
The optimization is done with the following command from the ESX 5.x console:
# Find each Nimble volume and set Round Robin to switch paths every 256 KB (262144 bytes)
for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do
  esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 262144;
done
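To confirm the new setting on any one device you can read back its Round Robin configuration, for example (the device ID below is just an illustration):
esxcli storage nmp psp roundrobin deviceconfig get -d eui.0123456789abcdef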
This can also be done with a PowerShell script that is posted here:
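A minimal PowerCLI sketch of the same idea, in case it is useful (the host name below is a placeholder, and it assumes an existing Connect-VIServer session; PowerCLI counts the switch threshold in 512-byte blocks, so 262144 bytes = 512 blocks):
# Sketch only, not the posted script: apply a 256 KB Round Robin
# switch threshold to every Nimble LUN on one host.
Get-VMHost "esx01.example.com" |
    Get-ScsiLun -LunType disk |
    Where-Object { $_.Vendor -eq "Nimble" } |
    Set-ScsiLun -MultipathPolicy RoundRobin -BlocksToSwitchPath 512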
03-12-2013 10:44 AM
Re: Importance of path change settings in VMware
Why did you choose "-B 262144" as the byte size, and what other options/results did you test with? I assume this was the best overall balance between IOPS and throughput?
03-12-2013 03:45 PM
Re: Importance of path change settings in VMware
I did try several other options. Setting it to 256K per path seemed to be the sweet spot.
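In case anyone wants to repeat that tuning exercise, a rough sketch of the kind of sweep involved (the device ID is an example; re-run the benchmark after each change and compare IO/s, MB/s and latency):
for b in 65536 131072 262144 524288; do
  # try 64K, 128K, 256K and 512K switch thresholds on one test device
  esxcli storage nmp psp roundrobin deviceconfig set -d eui.0123456789abcdef -t bytes -B $b
  # ... run SQLIO against the volume and record the results here ...
done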
03-12-2013 10:00 PM
Re: Importance of path change settings in VMware
This is great, Adam Herbert, greatly appreciated. I have seen this on a couple of installs now, and will put it in my toolbox of changes for 1G installs.
03-13-2013 08:02 PM
Re: Importance of path change settings in VMware
Really terrific and important post, Adam Herbert. Thanks for this.
03-14-2013 09:45 AM
Re: Importance of path change settings in VMware
Can this command be run in an environment that already has VMs on the iSCSI LUNs without an interruption of service?
03-14-2013 09:50 AM
Re: Importance of path change settings in VMware
Yes, it takes effect immediately, and the way the script is written it only affects Nimble volumes.
It is also relatively simple to modify the script to affect only an individual volume if you wish.
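For example, a single-volume variant could skip the discovery loop and target one device directly (the eui ID below is illustrative; take it from esxcli storage nmp device list):
esxcli storage nmp psp roundrobin deviceconfig set -d eui.0123456789abcdef -t bytes -B 262144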
Phil
03-14-2013 09:50 AM
Re: Importance of path change settings in VMware
Yes, it is safe to run. No downtime needed.
03-14-2013 11:02 AM
Re: Importance of path change settings in VMware
Have you guys tried iops=0? This essentially ignores the number of IOPS per path before switching and relies on queue depth instead. Essentially a poor man's Least Queue Depth (LQD) on ESX! We are trying to do some testing in the tech-marketing lab to get some results.
03-14-2013 01:40 PM
Re: Importance of path change settings in VMware
Yes, setting policy=iops with both iops=0 and bytes=0 may give better performance, since MPIO doesn't need to wait for 256K to be sent before switching paths.
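For reference, the iops=0 variant being discussed looks like this for a single device (example device ID; with -t iops the bytes threshold is not consulted):
esxcli storage nmp psp roundrobin deviceconfig set -d eui.0123456789abcdef -t iops -I 0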