<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Performance issues when using VSA on ESX with VMXNET3 driver in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/6700898#M8968</link>
    <description>&lt;P&gt;I feel compelled to contribute since we have experienced the same latency issues using VSA 11.5 / ESXi 5.5 U2. Similar to other contributors' experiences in this discussion, the latency seems to occur whenever the VSA cluster is accessed via a local gateway VSA node, thus requiring iSCSI traffic to pass through the local ESXi vSwitch network stack. Accessing the cluster via a remote VSA gateway on another host shows good performance in contrast. The issue would seem to be that having your VSA node share the same local vSwitch as your iSCSI vmk ports introduces the latency if you are accessing a VSA-presented datastore that the VSA cluster has determined should be presented by that same local VSA node on the same vSwitch.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This suggests that it is as likely to be a hypervisor network stack performance issue as a VSA cluster issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Our set-up:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2 x HP DL380 Gen8; local 15K SAS HDD storage array; vSphere ESXi 5.5 U2 (HP build)&lt;/P&gt;&lt;P&gt;2 x HP VSA 10TB v11.5; Software iSCSI Adapter; standard twin-path iSCSI initiator configuration.&lt;/P&gt;&lt;P&gt;Network is 10GbE with jumbo frames (MTU 9000). Throughput to the non-local VSA node is around 300-400 MB/s at &amp;lt;20ms latency. Throughput to the local VSA node is around 100-200 MB/s with &amp;gt;1000ms latency spikes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The VSA paths tend to settle on a pattern where one particular volume / datastore presented by the cluster VIP is always mapped to a local VSA on a particular host. This is desirable since it offers load balancing between VSAs. However, often this will mean that VSA Datastore 1 is being accessed by ESXi Host 1 via its local VSA, and VSA Datastore 2 is being accessed by ESXi Host 2 via its local VSA. Storage degradation is then experienced by ESXi Host 1 on VSA Datastore 1 (local) but not on VSA Datastore 2 (remotely accessed via pNIC / switch), and vice versa.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Running various storage performance tools, it seems that the throughput / latency to the local VSA node begins acceptably, but as you ramp up the test data it suddenly seems to become saturated, whereby latency goes through the roof. Using the Round Robin path policy at an IOPS limit of 1 or the default 1000 gives very good storage performance on the non-local VSA, but abysmal performance on the local VSA. Defaulting to the Most Recently Used path policy gives poorer but acceptable performance on the non-local VSA, and poor performance on the local VSA; however, latency seems to remain just within acceptable tolerances - still spiking occasionally to several hundred ms, but averaging between 20-30ms. The inference perhaps is that the lower throughput / path switching reduces how often the local hypervisor network stack is saturated by iSCSI traffic passing between a local target and initiator.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As suggested already in this discussion, the solution would seem to be to separate out the VSA and the iSCSI Software Initiator vmk's; however, we have no more pNICs to offer each ESXi node at the moment, and 10GbE cards and switch modules are expensive!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope all this helps someone.&lt;/P&gt;</description>
    <pubDate>Wed, 28 Jan 2015 10:11:34 GMT</pubDate>
    <dc:creator>Princes</dc:creator>
    <dc:date>2015-01-28T10:11:34Z</dc:date>
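    <!--
      The post above compares the Round Robin (IOPS limit 1 vs. the default 1000) and Most Recently Used
      path policies. A minimal sketch of how that tuning is typically applied from the ESXi 5.x shell,
      assuming esxcli short-option syntax and a placeholder device ID; verify the device ID and policy
      names on your own host before relying on it.

      # list devices and their current path selection policy
      esxcli storage nmp device list

      # switch one device to Round Robin (placeholder device ID)
      esxcli storage nmp device set -d naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxx -P VMW_PSP_RR

      # lower the Round Robin IOPS limit from the default 1000 to 1
      esxcli storage nmp psp roundrobin deviceconfig set -d naa.6000eb3xxxxxxxxxxxxxxxxxxxxxxxx -t iops -I 1
    -->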
    <item>
      <title>Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5498835#M4405</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want to share a big performance issue with you.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Currently there is a big problem when using HP P4000 VSAs on VMware with the VMXNET3 driver.&lt;/P&gt;&lt;P&gt;When the VSA is colocated on an ESX server with other VMs and the gateway node of a SAN volume is the locally hosted VSA node, there is a huge performance problem whenever the ESX server itself uses the volume (for example when deleting a snapshot).&lt;/P&gt;&lt;P&gt;Latency of the volume goes sky high (300+ms) and IOs are very slow.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;VMware also acknowledges this problem. There seems to be a problem with the TSO of the VMXNET3 driver being bypassed by the ESX server, which causes severe performance degradation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When you change the VMXNET3 driver of the VSA to E1000 the problem is solved; however, I'm still waiting on a reply from HP as to whether using E1000 is supported.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'll keep you updated.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 09:51:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5498835#M4405</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-17T09:51:26Z</dc:date>
    </item>
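    <!--
      The opening post above describes swapping the VSA's virtual NIC from VMXNET3 to E1000 as a
      workaround. A minimal sketch of what that change looks like in the VM's .vmx file, assuming the
      adapter is ethernet0 and the VSA is powered off; the datastore and VM names are placeholders,
      and the same change can be made by removing and re-adding the adapter in the vSphere Client.

      # on the ESXi shell, with the VSA powered off
      vi /vmfs/volumes/[local datastore]/[vsa name]/[vsa name].vmx

      # change the adapter type line from
      ethernet0.virtualDev = "vmxnet3"
      # to
      ethernet0.virtualDev = "e1000"

      # then reload the VM configuration (VM ID from: vim-cmd vmsvc/getallvms)
      vim-cmd vmsvc/reload [vmid]
    -->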
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499399#M4406</link>
      <description>&lt;P&gt;Please keep us posted on this issue.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;We are experiencing something similar.&lt;/P&gt;&lt;P&gt;Config is two VSA 9.5 with flexible NICs on DL380 G7s with ESXi 4.1 U1.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The cluster can perform normally for some time, but then suddenly one of the ESX hosts experiences heavy write latency (150ms+) to its local disk. Due to the Network RAID 10 this affects the whole P4000 cluster.&lt;/P&gt;&lt;P&gt;The only way to restore performance is to shut down the badly performing VSA node and reboot the ESXi server.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What is strange in our case is that the local disk performance is affected even after shutting down the node.&lt;/P&gt;&lt;P&gt;Adding a local disk on the VSA datastore to a virtual machine still shows bad write latency.&lt;/P&gt;&lt;P&gt;This leads me to believe that the write cache got disabled for some reason, but the hardware status makes no mention of this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It sounds like a hardware issue, but we have seen the local write latency happen on both servers.&lt;/P&gt;&lt;P&gt;Firmware level is up to date with the latest firmware DVD 9.30.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your story makes me reconsider the vmxnet3 driver as a suspect.&lt;/P&gt;&lt;P&gt;The NIC is configured as flexible, but in the kernel.log of the VSA the vmxnet3 driver is mentioned:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Jan 16 09:19:09 vsa2 kernel: VMware vmxnet virtual NIC driver&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: GSI 18 sharing vector 0xB9 and IRQ 18&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: ACPI: PCI Interrupt 0000:00:12.0[A] -&amp;gt; GSI 19 (level, low) -&amp;gt; IRQ 185&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: Found vmxnet/PCI at 0x14a4, irq 185.&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: features: ipCsum zeroCopy partialHeaderCopy&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: numRxBuffers = 100, numRxBuffers2 = 1&lt;BR /&gt;Jan 16 09:19:09 vsa2 kernel: VMware vmxnet3 virtual NIC driver - version 1.0.11.1-NAPI&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 14:11:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499399#M4406</guid>
      <dc:creator>Wvd</dc:creator>
      <dc:date>2012-01-17T14:11:10Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499461#M4407</link>
      <description>&lt;P&gt;Just got off the phone with HP support.&lt;/P&gt;&lt;P&gt;The E1000 driver is officially not supported by HP. But HP support advises me to use it since it performs better in our case ?!?!?!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;They won't investigate the problem further because in their opinion it's a VMware problem and VMware should fix it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm awaiting further information from VMware.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the meantime, my opinion is that the LeftHand VSA is crippled for the moment and should not be used on ESX servers with VMs locally hosted on the same server until this problem is fixed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Using flexible interfaces doesn't show the extreme behaviour of VMXNET3, but it also shows weird latencies at times.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tested this on several different hardware configurations, all with the same problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;VMware also mentions the following possible workaround: create a separate vSwitch to which you connect only the VSA, but this option needs additional hardware NICs. This way the TSO of the VMXNET3 driver won't be bypassed.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 14:56:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499461#M4407</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-17T14:56:18Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499475#M4408</link>
      <description>&lt;P&gt;"This way the TSO of the VMXNET3 driver wont be bypassed."&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What is the "TSO"?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also - best practice already dictates that a seperate vswitch be used for the VSA/iSCSI traffic - that should be no burden! If you are planning a virtual host, you need to incorporate enough interfaces for the host storage access and guest communication, but ...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am not certain that I understand why/how having a seperate vswitch for the VSAs prevents the VMXnet3 "TSO bypass" - could you help me understand what's going on?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 15:01:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499475#M4408</guid>
      <dc:creator>Tedh256</dc:creator>
      <dc:date>2012-01-17T15:01:02Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499503#M4409</link>
      <description>&lt;P&gt;TSO = TCP Segmentation Offload. Large TCP segments are split into packets by the NIC (hardware) instead of by the CPU.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;iSCSI traffic should always be on a separate vSwitch indeed. But VMware meant a separate vSwitch for the VSA and a separate vSwitch for the VMkernel iSCSI network. So when you also want redundancy you need 4 physical NICs this way: 2 for each vSwitch.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When using two vSwitches, VMware uses a different path internally to communicate, and this way TSO of the VMXNET3 driver could function properly. (I haven't tested this possible workaround, however!)&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 15:12:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499503#M4409</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-17T15:12:39Z</dc:date>
    </item>
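    <!--
      A minimal sketch of the workaround discussed above (one vSwitch for the VSA port group, a second
      vSwitch for the VMkernel iSCSI initiator port), using ESXi 5.x esxcli short options; vSwitch,
      port group, uplink and IP values are placeholders.

      # vSwitch dedicated to the VSA's virtual NIC
      esxcli network vswitch standard add -v vSwitch-VSA
      esxcli network vswitch standard uplink add -v vSwitch-VSA -u vmnic2
      esxcli network vswitch standard portgroup add -v vSwitch-VSA -p VSA-iSCSI

      # second vSwitch carrying only the VMkernel iSCSI port
      esxcli network vswitch standard add -v vSwitch-vmk-iSCSI
      esxcli network vswitch standard uplink add -v vSwitch-vmk-iSCSI -u vmnic3
      esxcli network vswitch standard portgroup add -v vSwitch-vmk-iSCSI -p vmk-iSCSI
      esxcli network ip interface add -i vmk2 -p vmk-iSCSI
      esxcli network ip interface ipv4 set -i vmk2 -I 10.0.0.11 -N 255.255.255.0 -t static
    -->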
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499509#M4410</link>
      <description>&lt;P&gt;huh&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;but this problem only applies to situations where you are running VMs (other than the VSA VMs themselves, I presume?) on local storage?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Why would you want to do that - if these hosts are running VSAs wouldn't you simply use up all local storage so that it can be presented as shared storage?&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 15:17:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499509#M4410</guid>
      <dc:creator>Tedh256</dc:creator>
      <dc:date>2012-01-17T15:17:55Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499549#M4412</link>
      <description>&lt;P&gt;No, all local storage is used by the HP VSA and is provided as an iSCSI volume/datastore to the ESX servers (used in small enterprises).&lt;/P&gt;&lt;P&gt;When you have VMs hosted on the same ESX node as the VSA (which is used as the gateway node for the volume), then you have this problem whenever the ESX node itself communicates with the datastore, for example when deleting a VMware snapshot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Traffic from within VMs to the datastore doesn't suffer this problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So local storage of the server is only being used by the HP VSA.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Just test it for yourself:&lt;/P&gt;&lt;P&gt;Deploy a VSA (with VMXNET3 NIC) on a single ESXi 4.1 server.&lt;/P&gt;&lt;P&gt;Create a volume on the VSA and create a VMware datastore on it.&lt;/P&gt;&lt;P&gt;Now let the ESXi server generate some traffic on the datastore by committing a snapshot, or a much easier way:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Execute the following commands from an SSH shell on the ESXi node:&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;# cd /vmfs/volumes/[datastorename goes here]&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;# time dd if=/dev/zero of=testfile count=102400 bs=1024&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This command creates a 100MB test file on the datastore. Creating a 100MB file should be a matter of 1-2 seconds!!! Times can go up to minutes.&lt;/P&gt;&lt;P&gt;Also check the datastore read and write latency from the vSphere Client / vCenter (200+ ms as soon as you start creating the file!).&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2012 15:35:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5499549#M4412</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-17T15:35:05Z</dc:date>
    </item>
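    <!--
      To watch the latency spike that the dd test above provokes, the usual tool on the host itself
      is esxtop; a short note on how it is commonly read (the key strokes are interactive, nothing
      here is specific to this thread's setup):

      # from the ESXi shell
      esxtop
      # press 'u' for the disk device view; DAVG/cmd is device latency, KAVG/cmd is time spent
      # in the VMkernel, and GAVG/cmd is the total latency the guest sees
    -->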
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5501145#M4427</link>
      <description>&lt;P&gt;On my 9.0 VSAs the NICs are set to flexible. Why are you using VMXNET3 anyway? Does it come standard on newer OVFs?&lt;/P&gt;</description>
      <pubDate>Wed, 18 Jan 2012 20:15:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5501145#M4427</guid>
      <dc:creator>RonsDavis</dc:creator>
      <dc:date>2012-01-18T20:15:12Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5502295#M4439</link>
      <description>&lt;P&gt;FWIW --&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We saw similar symptoms a year or so ago, but it was reproducible with any virtual NIC device and on both 10 GigE and 1 GigE networks. With that said, perhaps it could be more prevalent with vmxnet3, or perhaps it was just a different issue altogether.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do you see this problem with vmxnet2?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In our case, the cause was thought to be vmkernel race and locking issues across the multiple VMDK layers. It was most easily triggered by operations such as cloning, snapshots, and zeroing... but it wasn't reproducible on demand. We changed all of our VSAs to use RDMs to the local storage instead of VMDKs-on-VMFS and the problems immediately disappeared.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;That was back with ESXi 4.x and VSAs at SAN/iQ 8.x. We're now running ESXi 5.0 and SAN/iQ 9.5. Some VSAs are using vmxnet2 -- no issues. We haven't tried vmxnet3.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Using RDMs removes an "unnecessary" layer, since the only thing on the datastore is the data VMDKs for the VSA anyway. It may be quicker and easier for new administrators to set up a VSA by just putting VMDKs on a VMFS, but it sounds like you're quite comfortable getting around the ESXi shell. To create the RDMs, we used vmkfstools.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;HTH&lt;/P&gt;</description>
      <pubDate>Thu, 19 Jan 2012 18:34:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5502295#M4439</guid>
      <dc:creator>virtualmatrix</dc:creator>
      <dc:date>2012-01-19T18:34:43Z</dc:date>
    </item>
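    <!--
      The post above mentions using vmkfstools to map the local storage into the VSA as RDMs instead
      of VMDKs on VMFS. A minimal sketch, assuming a placeholder local device ID and datastore path;
      the resulting .vmdk pointer file is then attached to the VSA like any other virtual disk.

      # identify the local device
      esxcli storage core device list

      # virtual-compatibility RDM pointer stored on an existing VMFS datastore
      vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/local-ds/vsa1/vsa1-rdm.vmdk

      # or a physical-compatibility (pass-through) RDM
      vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/local-ds/vsa1/vsa1-rdm-p.vmdk
    -->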
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503157#M4442</link>
      <description>&lt;P&gt;This is very interesting. I experienced this behavior as well, but I was unaware of the root cause. I witnessed extremely high latency numbers and poor throughput when I used the VMXNET3 adapter on my VSA. I could only clear the symptoms by rebooting the host. Since I was unable to explain the cause of the latency problems I abandoned further testing with the VMXNET3. When the VMXNET3 was working properly I did not see a significant performance increase over the flexible adapter anyway.&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2012 14:15:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503157#M4442</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-01-20T14:15:28Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503165#M4443</link>
      <description>&lt;P&gt;When you deploy a VSA from the OVF it has two NICs: one flexible and one VMXNET3.&lt;/P&gt;&lt;P&gt;I was told the flexible interface is for cases where management traffic is not possible over the same network as storage, so you can delete one. VMXNET3 should offer better performance than the flexible adapter, so I always remove the flexible adapter.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The case with HP support didn't work out. HP's statement is that it's a VMware problem and they should solve it ?!?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The case with VMware is making progress. They are actively investigating it. This morning I performed several test scenarios and collected log files for VMware to analyse.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'll keep this thread updated with the progress.&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2012 14:23:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503165#M4443</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-20T14:23:50Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503179#M4444</link>
      <description>HP is very obscure about why there suddenly are two NICs in the VSA 9.5 OVF. This is very confusing, and no clear statement exists about which adapter should be used for iSCSI traffic. HP should make a definite statement and release a revised OVF with one adapter, or at least one TYPE of adapter...&lt;BR /&gt;&lt;BR /&gt;Regarding the performance issues, I am performing tests in my lab and seeing interesting results. Will post back soon...</description>
      <pubDate>Fri, 20 Jan 2012 14:30:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503179#M4444</guid>
      <dc:creator>Wvd</dc:creator>
      <dc:date>2012-01-20T14:30:03Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503183#M4445</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;M.Braak wrote:&amp;nbsp;&lt;P&gt;When using two vSwitches, VMware uses a different path internally to communicate, and this way TSO of the VMXNET3 driver could function properly. (I haven't tested this possible workaround, however!)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Using two vSwitches as described above, the iSCSI traffic must traverse the physical network to reach the gateway VSA. This is strange, since we would expect to benefit from using the VMXNET3 with traffic that remains on the vSwitch and does not cross the physical network. I believed this to be where the VMXNET3 adapter would provide some benefit, but I guess it's time to refresh my memory and read up on the different types of virtual network adapters...&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2012 14:30:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503183#M4445</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-01-20T14:30:58Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503197#M4446</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1146568"&gt;@M.Braak&lt;/a&gt; wrote:&lt;BR /&gt;&lt;P&gt;When you deploy a VSA from the OVF it has two nics. One flexible and one VMXNET3.&lt;/P&gt;&lt;P&gt;I was told the flexible interface is for cases where management traffic is not possible over the same network as storage so you can delete one. VMXNET3 should offer better performance as the flexible so i always remove the flexible adapter.&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;I was never able to get the CMC to connect to the second NIC on any of my VSAs. The CMC would only connect to the NIC that was set as the SANiQ interface. When I would change the SANiQ interface, I was unable to reach the VIP on my iSCSI network. Is there something special you have to do in order to use the second NIC for management?&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2012 14:35:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503197#M4446</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-01-20T14:35:10Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503215#M4447</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1214179"&gt;@5y53ng&lt;/a&gt; wrote:&lt;BR /&gt;&lt;BLOCKQUOTE&gt;I was never able to get the CMC to connect to the second NIC on any of my VSAs. The CMC would only connect to the NIC that was set as the SANiQ interface. When I would change the SANiQ interface, I was unable to reach the VIP on my iSCSI network. Is there something special you have to do in order to use the second NIC for management?&lt;/BLOCKQUOTE&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;I have never used two interfaces so I can't tell you, but this is from the help function:&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;When configuring a management interface on a P4000 storage system, you must designate the storage interface as the SAN/iQ interface for that storage system in the CMC. This is done on the Communications tab in the TCP/IP configuration category for that storage system.&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 20 Jan 2012 14:40:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503215#M4447</guid>
      <dc:creator>M.Braak</dc:creator>
      <dc:date>2012-01-20T14:40:09Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503777#M4448</link>
      <description>&lt;P&gt;I have finished my testing and have come to the following conclusion:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Any P4000 VSA 9.5 with a virtual machine hardware version above 4 is performance impaired.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have come to this conclusion by testing a lot of different configurations.&lt;/P&gt;&lt;P&gt;My test setup:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;DL380 G7 with 12 450GB 10K SAS disks in RAID 10&lt;/P&gt;&lt;P&gt;HP 2910al switches&lt;/P&gt;&lt;P&gt;Dedicated iSCSI network and adapters&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried a lot of different 9.5 VSA configurations, but these are the most common:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;VSA 9.5 with flexible adapter&lt;/LI&gt;&lt;LI&gt;VSA 9.5 with VMXNET3&lt;/LI&gt;&lt;LI&gt;VSA 9.0 with flexible adapter, upgraded to 9.5&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I created a separate management group for each of them and created a volume that the ESXi server would connect to.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I created datastores on the volumes and deployed a clean VSA 9.5 OVF on each datastore. These would not be offering storage; they are just a quick test of virtual machine boot performance and latency to the datastores.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The results&lt;/STRONG&gt;:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Booting the virtual machine resulted in latency spikes of 200-300ms on all datastores except for the 9.0-to-9.5 upgraded VSA with flexible adapter. There, latency never went above 5ms.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I also timed the boot to verify that performance was indeed impacted.&lt;/P&gt;&lt;P&gt;The upgraded 9.0-to-9.5 VSA booted the virtual machine in 1min 10sec.&lt;/P&gt;&lt;P&gt;All others booted in 1min 30sec.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The difference between an upgraded 9.0-to-9.5 VSA and a newly deployed VSA 9.5 lies mainly in the fact that the upgraded VSA stays on VM hardware version 4.&lt;/P&gt;&lt;P&gt;My suspicion was confirmed when I upgraded the hardware version: the boot time of the test VM immediately went to 1min 30sec and the high latency appeared.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a final test I also tried using a raw device mapping to local storage on a new VSA 9.5, as mentioned in this thread. This improved performance - boot time went to 1min 15sec - but latency was still too high and spiky.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;These tests were performed on ESXi 4.1 U1 and 5.0. It made no difference.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am definitely keeping my VSAs on HW version 4.&lt;/P&gt;</description>
      <pubDate>Sat, 21 Jan 2012 16:29:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5503777#M4448</guid>
      <dc:creator>Wvd</dc:creator>
      <dc:date>2012-01-21T16:29:21Z</dc:date>
    </item>
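    <!--
      A quick way to confirm which virtual hardware version a VSA is on, as discussed in the test
      results above, assuming a placeholder datastore/VM path; per the thread, a freshly deployed 9.5
      OVF shows a higher version, while a 9.0 VSA upgraded in place stays at 4.

      # from the ESXi shell
      grep virtualHW.version "/vmfs/volumes/[datastore]/[vsa name]/[vsa name].vmx"
      # e.g.  virtualHW.version = "4"
    -->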
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5508919#M4496</link>
      <description>&lt;P&gt;How could you keep VM hardware version 4 if you deploy the 9.5 VSA OVF to ESXi 5?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since I am testing a 6-node P4000 VSA cluster with ESXi 5, the VM hardware version is 7.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am also experiencing a crappy performance issue. I have 6 nodes in the cluster, each node has 5 disks, with VMXNET3 and one 10G port for the vSwitch uplink. I only get about 30MB/s throughput.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jan 2012 04:15:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5508919#M4496</guid>
      <dc:creator>yaodongxian</dc:creator>
      <dc:date>2012-01-26T04:15:35Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509117#M4497</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1278451"&gt;@yaodongxian&lt;/a&gt; wrote:&lt;BR /&gt;&lt;P&gt;How could you keep the VM hardware version 4 if you deploy the 9.5 vsa ovf to ESXi 5?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;The only way is to deploy an older 9.0 VSA OVF and then use the CMC to upgrade it to 9.5.&lt;/P&gt;&lt;P&gt;Keep us posted on the results...&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jan 2012 10:12:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509117#M4497</guid>
      <dc:creator>Wvd</dc:creator>
      <dc:date>2012-01-26T10:12:11Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509415#M4501</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1274995"&gt;@Wvd&lt;/a&gt; wrote:&lt;BR /&gt;&lt;BR /&gt;&lt;P&gt;Created a separate management group for all of them and created a volume that the ESXi server would connect to.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I created datastores on the volumes and deployed a clean VSA 9.5 OVF on each datastore.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;BLOCKQUOTE&gt;Your results are interesting, but could you clarify the above quote for me? Did your test consist of a single ESXi host with three VSAs, each in its own cluster, serving up a single volume? I would like to try to duplicate your test as closely as possible to see if I experience the same results.&lt;/BLOCKQUOTE&gt;&lt;BLOCKQUOTE&gt;Thanks.&lt;/BLOCKQUOTE&gt;</description>
      <pubDate>Thu, 26 Jan 2012 15:25:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509415#M4501</guid>
      <dc:creator>5y53ng</dc:creator>
      <dc:date>2012-01-26T15:25:09Z</dc:date>
    </item>
    <item>
      <title>Re: Performance issues when using VSA on ESX with VMXNET3 driver</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509735#M4502</link>
      <description>Correct - a single ESXi host with three VSAs, each in its own cluster, serving up a single volume.</description>
      <pubDate>Thu, 26 Jan 2012 20:34:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/performance-issues-when-using-vsa-on-esx-with-vmxnet3-driver/m-p/5509735#M4502</guid>
      <dc:creator>Wvd</dc:creator>
      <dc:date>2012-01-26T20:34:26Z</dc:date>
    </item>
  </channel>
</rss>

