<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Quad port MPIO performance with vSphere in Array Performance and Data Protection</title>
    <link>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985027#M805</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I couldn't seem to find much around this, so hopefully someone out there can help me.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am trying to find out whether I will get higher aggregated throughput between my ESX hosts and Nimble storage if I use four 1GbE NIC ports on each and configure all four ports in ESX for MPIO (instead of two).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I understand that when using iSCSI within vSphere a single iSCSI connection can't use more than one path at a time, but with round robin, etc., has anyone seen overall higher performance in this configuration?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It seems to be a configuration that isn't very common, as a lot of people just move to 2 x 10GbE, but for this particular use case I would struggle to justify the extra cost.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any help would be much appreciated!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Ben&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 13 Mar 2014 00:51:44 GMT</pubDate>
    <dc:creator>BenLoveday</dc:creator>
    <dc:date>2014-03-13T00:51:44Z</dc:date>
    <item>
      <title>Quad port MPIO performance with vSphere</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985027#M805</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I couldn't seem to find much around this, so hopefully someone out there can help me.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am trying to find out whether I will get higher aggregated throughput between my ESX hosts and Nimble storage if I use four 1GbE NIC ports on each and configure all four ports in ESX for MPIO (instead of two).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I understand that when using iSCSI within vSphere a single iSCSI connection can't use more than one path at a time, but with round robin, etc., has anyone seen overall higher performance in this configuration?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It seems to be a configuration that isn't very common, as a lot of people just move to 2 x 10GbE, but for this particular use case I would struggle to justify the extra cost.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any help would be much appreciated!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Ben&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 13 Mar 2014 00:51:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985027#M805</guid>
      <dc:creator>BenLoveday</dc:creator>
      <dc:date>2014-03-13T00:51:44Z</dc:date>
    </item>
    <item>
      <title>Re: Quad port MPIO performance with vSphere</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985028#M806</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ben,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You will see higher aggregated throughput between ESX hosts and Nimble should you dedicate additional NICs for iSCSI &lt;STRONG&gt;AND &lt;/STRONG&gt;if you are &lt;SPAN style="text-decoration: underline;"&gt;throughput bound&lt;/SPAN&gt; by your existing 2 x 1GbE connections &lt;STRONG&gt;AND&lt;/STRONG&gt; assuming you have at least 4 x 1Gbps iSCSI ports on Nimble.&amp;nbsp; You can check in vCenter whether the VMNICs for iSCSI are saturated.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Throughput = IOPS x block size.&amp;nbsp; E.g. 10,000 IOPS x 8KB block = ~80MB/s.&lt;/P&gt;&lt;P&gt;Unless you have either high-IOPS or large sequential workloads in the VMs on the host, you may not be saturating the 2 x 1GbE links, in which case adding additional host NIC ports for iSCSI will provide no additional performance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hope this helps.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Eddie&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 13 Mar 2014 01:36:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985028#M806</guid>
      <dc:creator>etang40</dc:creator>
      <dc:date>2014-03-13T01:36:18Z</dc:date>
    </item>
    <item>
      <title>Re: Quad port MPIO performance with vSphere</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985029#M807</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks Eddie,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;That does help; I just wanted to check that from an iSCSI perspective within vSphere I can leverage those additional NICs. It's mostly to do with backup and restore throughput, which would be largely sequential.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks again!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Ben&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 13 Mar 2014 03:06:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/quad-port-mpio-performance-with-vsphere/m-p/6985029#M807</guid>
      <dc:creator>BenLoveday</dc:creator>
      <dc:date>2014-03-13T03:06:24Z</dc:date>
    </item>
  </channel>
</rss>