<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Importance of path change settings in VMware in Array Setup and Networking</title>
    <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983869#M1016</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I may be wrong, but when making the change to --iops=0 &amp;amp; --bytes=0, it looks like you have to set '--type' to 'iops'.&amp;nbsp; I tried it using '--type=bytes' as written in the script above, but the iops limit didn't change.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Result when run with --type=bytes:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: eui.xxx&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;IOOperation Limit: 1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Limit Type: Bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Use Active Unoptimized Paths: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Byte Limit: 0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;After, when run with --type=iops:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: eui.xxx&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;IOOperation Limit: 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Limit Type: Bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Use Active Unoptimized Paths: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Byte Limit: 0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;From the help text:&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; -t|--type=&amp;lt;str&amp;gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the type of the Round Robin path switching that should be enabled for this device.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Valid values for type are:&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; bytes:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching based on the number of bytes sent down a 
path.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; default:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching back to default values.&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; iops:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching based on the number of I/O operations on a path.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 25 Apr 2013 23:40:59 GMT</pubDate>
    <dc:creator>epedersen22</dc:creator>
    <dc:date>2013-04-25T23:40:59Z</dc:date>
    <item>
      <title>Importance of Path Change Settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983851#M998</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;With 1 gig networks it is important to tune per path settings for maximum throughput. The default of 1000 IOs per path can cause micro bursts of saturation and limit throughput. I've done a fair amount of testing and found that the best setting is to actually change paths based on the number of bytes sent per path. The reasoning is that it can be detrimental to change paths too often for small block IO. Setting the path change to bytes optimizes for both.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here is a real world example from a demo I recently conducted. This was done with 4 1G interfaces on both the Nimble array and the ESX host connected through a Cisco 3750X stack.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;SQLIO prior to path optimization :&lt;/P&gt;&lt;TABLE border="0" cellpadding="0" cellspacing="0" height="149" jive-data-cell="{&amp;quot;color&amp;quot;:&amp;quot;#575757&amp;quot;,&amp;quot;textAlign&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;padding&amp;quot;:&amp;quot;NaN&amp;quot;,&amp;quot;backgroundColor&amp;quot;:&amp;quot;transparent&amp;quot;,&amp;quot;fontFamily&amp;quot;:&amp;quot;arial,helvetica,sans-serif&amp;quot;,&amp;quot;verticalAlign&amp;quot;:&amp;quot;baseline&amp;quot;}" jive-data-header="{&amp;quot;color&amp;quot;:&amp;quot;#FFFFFF&amp;quot;,&amp;quot;backgroundColor&amp;quot;:&amp;quot;#6690BC&amp;quot;,&amp;quot;textAlign&amp;quot;:&amp;quot;center&amp;quot;,&amp;quot;padding&amp;quot;:&amp;quot;NaN&amp;quot;,&amp;quot;fontFamily&amp;quot;:&amp;quot;arial,helvetica,sans-serif&amp;quot;,&amp;quot;verticalAlign&amp;quot;:&amp;quot;baseline&amp;quot;}" style="width: 835px; height: 152px;"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline; width: 48px;"&gt;Server&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: 
center;"&gt;Tool&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;Test Description&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;IO/s&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;MB/s&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;Avg. Latency&lt;/TH&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Random 8k Writes, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;12375&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;97&lt;/TD&gt;&lt;TD class="xl66" width="65"&gt;4ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Random 8k Reads, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;14456&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;113&lt;/TD&gt;&lt;TD class="xl66" width="65"&gt;3ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;2130&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;133&lt;/TD&gt;&lt;TD class="xl66" width="65"&gt;29ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH 
style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;2147&lt;/TD&gt;&lt;TD align="right" class="xl67" width="65"&gt;134&lt;/TD&gt;&lt;TD class="xl66" width="65"&gt;29ms&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;SQLIO after path optimization :&lt;/P&gt;&lt;TABLE border="0" cellpadding="0" cellspacing="0" height="140" jive-data-cell="{&amp;quot;color&amp;quot;:&amp;quot;#575757&amp;quot;,&amp;quot;textAlign&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;padding&amp;quot;:&amp;quot;NaN&amp;quot;,&amp;quot;backgroundColor&amp;quot;:&amp;quot;transparent&amp;quot;,&amp;quot;fontFamily&amp;quot;:&amp;quot;arial,helvetica,sans-serif&amp;quot;,&amp;quot;verticalAlign&amp;quot;:&amp;quot;baseline&amp;quot;}" jive-data-header="{&amp;quot;color&amp;quot;:&amp;quot;#FFFFFF&amp;quot;,&amp;quot;backgroundColor&amp;quot;:&amp;quot;#6690BC&amp;quot;,&amp;quot;textAlign&amp;quot;:&amp;quot;center&amp;quot;,&amp;quot;padding&amp;quot;:&amp;quot;NaN&amp;quot;,&amp;quot;fontFamily&amp;quot;:&amp;quot;arial,helvetica,sans-serif&amp;quot;,&amp;quot;verticalAlign&amp;quot;:&amp;quot;baseline&amp;quot;}" style="width: 838px; height: 142px;" width="836"&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline; width: 49px;"&gt;Server&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;Tool&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;Test Description&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: 
center;"&gt;IO/s&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;MB/s&lt;/TH&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center;"&gt;Avg. Latency&lt;/TH&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Random 8k Writes, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;26882&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;210&lt;/TD&gt;&lt;TD class="xl67" style="background-color: #ffff00;" width="65"&gt;1ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Random 8k Reads, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;28964&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;226&lt;/TD&gt;&lt;TD class="xl67" style="background-color: #ffff00;" width="65"&gt;1ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Sequential 64k Writes, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;7524&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" 
width="65"&gt;470&lt;/TD&gt;&lt;TD class="xl67" style="background-color: #ffff00;" width="65"&gt;8ms&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TH style="color: #ffffff; background-color: #6690bc; text-align: center; font-family: arial, helvetica, sans-serif; vertical-align: baseline;"&gt;SQL-06&lt;/TH&gt;&lt;TD class="xl65" width="65"&gt;SQLIO&lt;/TD&gt;&lt;TD class="xl65" width="385"&gt;Sequential 64k Reads, 8 threads with 8 qdepth for 120 sec&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;7474&lt;/TD&gt;&lt;TD align="right" class="xl68" style="background-color: #ffff00;" width="65"&gt;467&lt;/TD&gt;&lt;TD class="xl67" style="background-color: #ffff00;" width="65"&gt;8ms&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Notice the large improvement in not only throughput but also the reduction in latency. The latency in the first test was due to the saturation of the 1G links.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The optimization is done with the following command from the ESX 5.x console:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: 'andale mono', times;"&gt;for i in `esxcli storage nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'`; do&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: 'andale mono', times;"&gt;&amp;nbsp; esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 0 -t iops;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: 'andale mono', times;"&gt;done&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This can also be done with a PowerShell script that is posted here :&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.hpe.com/docs/DOC-1112"&gt;Set VMware RoundRobin PSP through PowerCLI&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 12 Mar 2013 15:23:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983851#M998</guid>
      <dc:creator>aherbert23</dc:creator>
      <dc:date>2013-03-12T15:23:54Z</dc:date>
    </item>
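    <!-- The awk/sed device-ID extraction in the post above can be exercised as a dry run without an ESXi host. This sketch is illustrative only: the sample line mimics one line of `esxcli storage nmp device list` output (the real listing comes from the host), and the actual set command is left as a comment because esxcli exists only on ESXi.

```shell
#!/bin/sh
# Dry run of the extraction used in the post's loop: pull the parenthesized
# eui identifier out of a "Nimble iSCSI Disk" line. The sample line is
# invented for illustration; on a host it would come from
# `esxcli storage nmp device list`.
sample='Device Display Name: Nimble iSCSI Disk (eui.da19d33a8741f4386c9ce900f81efe95)'
ids=$(printf '%s\n' "$sample" | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//')
for i in $ids; do
  # On a real ESXi 5.x host this echo would instead be:
  #   esxcli storage nmp psp roundrobin deviceconfig set -d "$i" -I 0 -t iops
  echo "would configure: $i"
done
```

Running the extraction offline first is a cheap way to confirm the awk field number matches the device-list format before touching live paths.
    -->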
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983852#M999</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Why did you choose "&lt;SPAN style="font-family: 'andale mono', times;"&gt;-B 262144&lt;/SPAN&gt;" as the byte size? What other options/results did you test with?&amp;nbsp; I assume this was the best overall balance between IOPS and throughput?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 12 Mar 2013 17:44:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983852#M999</guid>
      <dc:creator>pdavies34</dc:creator>
      <dc:date>2013-03-12T17:44:55Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983853#M1000</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I did try several other options. Setting it to 256K per path seemed to be the sweet spot.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 12 Mar 2013 22:45:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983853#M1000</guid>
      <dc:creator>aherbert23</dc:creator>
      <dc:date>2013-03-12T22:45:47Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983854#M1001</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;This is great, &lt;A href="https://community.hpe.com/u1/2036"&gt;Adam Herbert&lt;/A&gt;, greatly appreciated.&amp;nbsp; I have seen this in a couple of installs now, and will put it in my toolbox of changes for 1G installs.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 13 Mar 2013 05:00:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983854#M1001</guid>
      <dc:creator>mandersen81</dc:creator>
      <dc:date>2013-03-13T05:00:36Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983855#M1002</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Really terrific and important post.&lt;/P&gt;&lt;P&gt;&lt;SPAN class="j-post-author"&gt;&lt;STRONG&gt;&lt;A _jive_internal="true" data-avatarid="-1" data-userid="2036" data-username="aherbert" href="https://community.hpe.com/people/aherbert"&gt;Adam Herbert&lt;/A&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;, thanks for this.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 03:02:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983855#M1002</guid>
      <dc:creator>dhamilton113</dc:creator>
      <dc:date>2013-03-14T03:02:21Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983856#M1003</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Can this command be run on an environment that already has VMs on the iSCSI LUNs without an interruption of service?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 16:45:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983856#M1003</guid>
      <dc:creator />
      <dc:date>2013-03-14T16:45:08Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983857#M1004</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, it has an immediate effect, and the way the script is written, it only affects Nimble volumes.&lt;/P&gt;&lt;P&gt;It is also relatively simple to modify the script to affect only an individual volume if you so wished.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Phil&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 16:50:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983857#M1004</guid>
      <dc:creator>pdavies34</dc:creator>
      <dc:date>2013-03-14T16:50:31Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983858#M1005</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, it is safe to run. No downtime needed.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 16:50:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983858#M1005</guid>
      <dc:creator>aherbert23</dc:creator>
      <dc:date>2013-03-14T16:50:59Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983859#M1006</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Have you guys tried iops=0? This essentially ignores the number of IOPS per path before switching and relies on queue depth instead.&amp;nbsp; Essentially a poor man's LQD on ESX!&amp;nbsp; We are trying to do some testing in the tech-marketing lab to get some results.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 18:02:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983859#M1006</guid>
      <dc:creator>wen35</dc:creator>
      <dc:date>2013-03-14T18:02:07Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983860#M1007</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, setting policy=iops with both iops=0 and bytes=0 may give better performance, since MPIO doesn't need to wait for 256K before switching paths.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 14 Mar 2013 20:40:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983860#M1007</guid>
      <dc:creator>jwang131</dc:creator>
      <dc:date>2013-03-14T20:40:15Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983861#M1008</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I would be interested in seeing the results of the tests. When I tried using low IOPS per path numbers I saw small block random performance degrade. I did not try setting IOPS per path to 0. I didn't even know that would be a valid input! &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 15 Mar 2013 02:18:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983861#M1008</guid>
      <dc:creator>aherbert23</dc:creator>
      <dc:date>2013-03-15T02:18:21Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983862#M1009</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Do these settings apply to/benefit 10Gb as well?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Craig&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Apr 2013 16:21:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983862#M1009</guid>
      <dc:creator />
      <dc:date>2013-04-10T16:21:15Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983863#M1010</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, definitely.&amp;nbsp; Assuming you have dual 10G ports bound to the iSCSI initiator, PSP_RR will leverage both paths without having to wait for a given path to reach X IOPS or X bytes before switching.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 10 Apr 2013 17:04:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983863#M1010</guid>
      <dc:creator>wen35</dc:creator>
      <dc:date>2013-04-10T17:04:46Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983864#M1011</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Is this (or iops=0/bytes=0) going to end up in the Nimble VMware installation guide?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 18 Apr 2013 17:06:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983864#M1011</guid>
      <dc:creator>epedersen22</dc:creator>
      <dc:date>2013-04-18T17:06:29Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983865#M1012</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;It is making its way into the next edition of the vSphere on Nimble best practices guide; the same will go into the next edition of the VMware integration guide as well.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 18 Apr 2013 17:55:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983865#M1012</guid>
      <dc:creator>wen35</dc:creator>
      <dc:date>2013-04-18T17:55:17Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983866#M1013</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Cool - thanks!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 18 Apr 2013 22:09:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983866#M1013</guid>
      <dc:creator>epedersen22</dc:creator>
      <dc:date>2013-04-18T22:09:44Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983867#M1014</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Made the change.&amp;nbsp; My SQLIO read-test throughput on 10G went from ~470 MB/s to ~715 MB/s.&amp;nbsp; Have a CS240G.&amp;nbsp; NICE!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Craig&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 20 Apr 2013 21:32:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983867#M1014</guid>
      <dc:creator />
      <dc:date>2013-04-20T21:32:30Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983868#M1015</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Wow, nice to hear, Craig!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 22 Apr 2013 17:13:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983868#M1015</guid>
      <dc:creator>wen35</dc:creator>
      <dc:date>2013-04-22T17:13:31Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983869#M1016</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I may be wrong, but when making the change to --iops=0 &amp;amp; --bytes=0, it looks like you have to set '--type' to 'iops'.&amp;nbsp; I tried it using '--type=bytes' as written in the script above, but the iops limit didn't change.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Result when run with --type=bytes:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: eui.xxx&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;IOOperation Limit: 1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Limit Type: Bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Use Active Unoptimized Paths: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Byte Limit: 0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;After, when run with --type=iops:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: eui.xxx&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;SPAN style="text-decoration: underline;"&gt;IOOperation Limit: 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Limit Type: Bytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Use Active Unoptimized Paths: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Byte Limit: 0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;From the help text:&lt;/STRONG&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; -t|--type=&amp;lt;str&amp;gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the type of the Round Robin path switching that should be enabled for this device.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Valid values for type are:&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; bytes:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching based on the number of bytes sent down a 
path.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; default:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching back to default values.&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; iops:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Set the trigger for path switching based on the number of I/O operations on a path.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Apr 2013 23:40:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983869#M1016</guid>
      <dc:creator>epedersen22</dc:creator>
      <dc:date>2013-04-25T23:40:59Z</dc:date>
    </item>
    <item>
      <title>Re: Importance of path change settings in VMware</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983870#M1017</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I just ran the command twice, once to set bytes and once to set IOPS.&amp;nbsp; Since there is a Limit Type, I'm not sure whether changing bytes to 0 matters if the Limit Type is set to IOPS.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My notes from another post:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In the SSH console on ESXi 5.1, this command will loop through each datastore, set Bytes to 0 and IOPS to 0, and then display the current settings. For some reason, when listing disks, they show up twice: once with their regular ID and a second time with the ID ending in :1, to which the settings can't be applied.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;for i in `ls /vmfs/devices/disks/ | grep eui.` ; do echo $i ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 0; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 0 ; esxcli storage nmp psp roundrobin deviceconfig get -d $i; done&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you want to set it back to the out-of-the-box defaults, this will reset Bytes to 10485760, IOPS to 1000, and Type to default:&lt;/P&gt;&lt;P&gt;for i in `ls /vmfs/devices/disks/ | grep eui.` ; do echo $i ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t bytes -B 10485760; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t iops -I 1000 ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -t default; esxcli storage nmp psp roundrobin deviceconfig get -d $i; done&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 03 May 2013 16:25:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/importance-of-path-change-settings-in-vmware/m-p/6983870#M1017</guid>
      <dc:creator>mallocarray12</dc:creator>
      <dc:date>2013-05-03T16:25:03Z</dc:date>
    </item>
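    <!-- The duplicate ":1" entries mentioned in the post above are partition nodes under /vmfs/devices/disks, and the loop wastes a failing esxcli call on each. A hedged sketch of filtering them out first; the listing below is a canned example with made-up IDs, not real output, and the esxcli call stays commented since it only exists on ESXi.

```shell
#!/bin/sh
# Filter a disk listing the way the post's `ls /vmfs/devices/disks/ | grep eui.`
# loop sees it, but skip the partition entries ending in ":<n>", which reject
# PSP settings. The listing is a canned example for illustration.
listing='eui.aaa111
eui.aaa111:1
eui.bbb222
eui.bbb222:1'
devices=$(printf '%s\n' "$listing" | grep '^eui\.' | grep -v ':[0-9][0-9]*$')
for d in $devices; do
  # On a real ESXi host, each $d would then get, e.g.:
  #   esxcli storage nmp psp roundrobin deviceconfig set -d "$d" -t bytes -B 0
  #   esxcli storage nmp psp roundrobin deviceconfig set -d "$d" -t iops -I 0
  echo "$d"
done
```
    -->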
  </channel>
</rss>

