<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MPIO paths in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800541#M5705</link>
    <description>&lt;P&gt;As Jay mentioned, you're seeing the paths to each gateway connection.&amp;nbsp;You can obtain some additional performance by changing the path selection policy to IOPS and changing the number of consecutive IOPS per path to something lower than the default, which is 1000.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To get a list of devices use:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;ls /vmfs/devices/disks/naa.6000eb*&lt;/EM&gt; (I think most HP volumes will start with naa.6000eb)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To get the path selection policy settings:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;esxcli storage nmp psp roundrobin deviceconfig get --device=&amp;lt;X&amp;gt;&lt;/EM&gt; (where X is a device name obtained from the listing above)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To set the path selection policy to IOPS and specify the number of consecutive IOPS per path use:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=&amp;lt;N&amp;gt; --device=&amp;lt;X&amp;gt;&lt;/EM&gt; (where N is the consecutive IOPS count)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The best number to use for consecutive IOPS is open for debate, but I settled on 3. I never noticed much of a difference for any value between 1 and 64.&lt;/P&gt;</description>
    <pubDate>Tue, 11 Sep 2012 14:13:50 GMT</pubDate>
    <dc:creator>5y53ng</dc:creator>
    <dc:date>2012-09-11T14:13:50Z</dc:date>
    <item>
      <title>MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5799695#M5691</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My test setup:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2x ESXi 5 hosts&lt;/P&gt;&lt;P&gt;2x P4500 nodes&lt;/P&gt;&lt;P&gt;2x dedicated iSCSI switches&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Each node has one NIC connected to each switch, and each host has one NIC connected to each switch, for a total of two iSCSI pNICs per host.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I created the vSwitch with two VMkernel ports for the two iSCSI connections, and added the two NICs under the software iSCSI adapter settings.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I changed the datastore pathing to round robin, and all is well.&amp;nbsp;I tested some traffic and the ESX host is perfectly balancing the load between the two NICs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now for the problem.&amp;nbsp;The screen where you select round robin shows two paths to that particular LUN, but both paths go to the same physical SAN node.&amp;nbsp;This results in only 1 Gbps of throughput for the entire LUN, because it is only using the one NIC on that one node.&amp;nbsp;I created a second volume on the SAN, and that datastore's paths in ESX show the same thing, except now they go to the other SAN node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does the volume really only reside on one node?&amp;nbsp;Or am I doing something wrong with the ESX setup?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Dan.&lt;/P&gt;</description>
      <pubDate>Mon, 10 Sep 2012 21:05:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5799695#M5691</guid>
      <dc:creator>danletkeman</dc:creator>
      <dc:date>2012-09-10T21:05:08Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800259#M5698</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you tell us the type of the volume you created (Network RAID-10 or not)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;</description>
      <pubDate>Tue, 11 Sep 2012 08:21:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800259#M5698</guid>
      <dc:creator>gerance</dc:creator>
      <dc:date>2012-09-11T08:21:28Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800453#M5703</link>
      <description>&lt;P&gt;That is how it works.&amp;nbsp;VMware does not have the ability to connect to every node for every LUN the way the DSM for Windows does.&amp;nbsp;It doesn't matter whether you use MPIO, Network RAID-10, or Network RAID-0; it still only talks through a single gateway node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When I set up my LH SAN, I created one datastore for each LH node and manually load-balanced the system.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you are set up for Network RAID-10 and you have two P4500s, then the complete volume exists on both.&amp;nbsp;Even though VMware is only talking to a single gateway node, if that node fails, the other P4500 will take over.&amp;nbsp;It takes about 10 to 15 seconds for the failover to complete.&amp;nbsp;Once the original P4500 is back online and restriped, it will resume the duties of the gateway (on 9.0 and higher).&lt;/P&gt;</description>
      <pubDate>Tue, 11 Sep 2012 13:09:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800453#M5703</guid>
      <dc:creator>Jay Cardin</dc:creator>
      <dc:date>2012-09-11T13:09:39Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800541#M5705</link>
      <description>&lt;P&gt;As Jay mentioned, you're seeing the paths to each gateway connection.&amp;nbsp;You can obtain some additional performance by changing the path selection policy to IOPS and changing the number of consecutive IOPS per path to something lower than the default, which is 1000.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To get a list of devices use:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;ls /vmfs/devices/disks/naa.6000eb*&lt;/EM&gt; (I think most HP volumes will start with naa.6000eb)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To get the path selection policy settings:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;esxcli storage nmp psp roundrobin deviceconfig get --device=&amp;lt;X&amp;gt;&lt;/EM&gt; (where X is a device name obtained from the listing above)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To set the path selection policy to IOPS and specify the number of consecutive IOPS per path use:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops=&amp;lt;N&amp;gt; --device=&amp;lt;X&amp;gt;&lt;/EM&gt; (where N is the consecutive IOPS count)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The best number to use for consecutive IOPS is open for debate, but I settled on 3. I never noticed much of a difference for any value between 1 and 64.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Sep 2012 14:13:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800541#M5705</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-09-11T14:13:50Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800929#M5706</link>
      <description>&lt;P&gt;Network RAID-10 volume.&lt;/P&gt;</description>
      <pubDate>Tue, 11 Sep 2012 21:46:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800929#M5706</guid>
      <dc:creator>danletkeman</dc:creator>
      <dc:date>2012-09-11T21:46:21Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800933#M5707</link>
      <description>&lt;P&gt;I'll have to try changing the IOPS policy.&amp;nbsp;With the default IOPS policy it looks like I can only get about 1 Gbps to the node from a host with two VMkernel ports.&amp;nbsp;Will changing the IOPS policy increase this speed?&amp;nbsp;It seems as if all of the traffic is going from the ESX host to one NIC on one node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also have an odd problem: traffic going from the ESX host to the SAN travels through the trunk port on the switches instead of going directly to the SAN.&amp;nbsp;I called HP support and they were stumped.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;E.g.:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Each node is connected to each switch, but this is how the traffic flows:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ESX host ---vmk1 -----switch1-----node1 (gateway node for lun1)&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; | &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; -------vmk2------switch2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When it should flow like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ESX host ---vmk1 -----switch1-----node1 (gateway node for lun1)&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; | &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; -------vmk2------switch2--------&lt;/P&gt;</description>
      <pubDate>Tue, 11 Sep 2012 21:53:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5800933#M5707</guid>
      <dc:creator>danletkeman</dc:creator>
      <dc:date>2012-09-11T21:53:01Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801657#M5716</link>
      <description>&lt;P&gt;Could you just disallow the iSCSI VLAN on the trunk between the switches?&lt;/P&gt;</description>
      <pubDate>Wed, 12 Sep 2012 17:23:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801657#M5716</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-09-12T17:23:57Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801673#M5717</link>
      <description>&lt;P&gt;The described behaviour is exactly how TCP over Ethernet works, and nothing can be done about it; it comes down to how Ethernet operates. An IP address is mapped to an Ethernet MAC address, and this is a 1-to-1 mapping. So when a P4500 node trunks two NICs into a logical ALB interface, it still announces itself to the network via one interface, and all incoming traffic reaches the node via that interface. The second interface in the ALB trunk can be (and is) used for outgoing traffic only.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Gediminas&lt;/P&gt;</description>
      <pubDate>Wed, 12 Sep 2012 17:52:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801673#M5717</guid>
      <dc:creator>Gediminas Vilutis</dc:creator>
      <dc:date>2012-09-12T17:52:06Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801793#M5718</link>
      <description>No, you cannot disallow the iSCSI VLAN on the trunk port. You lose quorum.</description>
      <pubDate>Wed, 12 Sep 2012 21:18:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5801793#M5718</guid>
      <dc:creator>danletkeman</dc:creator>
      <dc:date>2012-09-12T21:18:30Z</dc:date>
    </item>
    <item>
      <title>Re: MPIO paths</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5803013#M5724</link>
      <description>&lt;P&gt;Ya know... I really missed the obvious there. Sorry about that.&lt;/P&gt;</description>
      <pubDate>Thu, 13 Sep 2012 21:16:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/mpio-paths/m-p/5803013#M5724</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-09-13T21:16:36Z</dc:date>
    </item>
  </channel>
</rss>

