StoreVirtual Storage

Performance issues when using VSA on ESX with VMXNET3 driver

 
M.Braak
Frequent Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

When you deploy a VSA from the OVF it has two NICs: one Flexible and one VMXNET3.

I was told the Flexible interface is for cases where management traffic cannot run over the same network as the storage traffic, so you can delete one of the two. VMXNET3 should offer better performance than the Flexible adapter, so I always remove the Flexible one.

 

The case at HP support didn't work out. HP's statement is that it's a VMware problem and that VMware should solve it?!?

 

The case at VMware is making progress; they are actively investigating it. This morning I performed several test scenarios and collected log files for VMware to analyse.

 

I'll keep this thread updated with the progress.

Wvd
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

HP is very vague about why there are suddenly two NICs in the VSA 9.5 OVF. This is confusing, and there is no clear statement on which adapter should be used for iSCSI traffic. HP should make a definitive statement and release a revised OVF with one adapter, or at least one TYPE of adapter...

Regarding performance issues, I am performing tests in my lab and seeing interesting results. Will post back soon...
5y53ng
Regular Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver


M.Braak wrote: 

When using two vSwitches, VMware internally uses a different path to communicate, and this way TSO of the VMXNET3 driver could function properly. (I haven't tested this possible workaround, however!)

 


Using two vSwitches as described above, the iSCSI traffic must traverse the physical network to reach the gateway VSA. This is strange, since we would expect to benefit from using the VMXNET3 adapter with traffic that remains on the vSwitch and does not cross the physical network. I believed this to be where the VMXNET3 adapter would provide some benefit, but I guess it's time to refresh my memory and read up on the different types of virtual network adapters...

5y53ng
Regular Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver


@M.Braak wrote:

When you deploy a VSA from the OVF it has two NICs: one Flexible and one VMXNET3.

I was told the Flexible interface is for cases where management traffic cannot run over the same network as the storage traffic, so you can delete one of the two. VMXNET3 should offer better performance than the Flexible adapter, so I always remove the Flexible one.


I was never able to get the CMC to connect to the second NIC on any of my VSAs. The CMC would only connect to the NIC that was set as the SANiQ interface. When I would change the SANiQ interface, I was unable to reach the VIP on my iSCSI network. Is there something special you have to do in order to use the second NIC for management?

M.Braak
Frequent Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver


@5y53ng wrote:
I was never able to get the CMC to connect to the second NIC on any of my VSAs. The CMC would only connect to the NIC that was set as the SANiQ interface. When I would change the SANiQ interface, I was unable to reach the VIP on my iSCSI network. Is there something special you have to do in order to use the second NIC for management?

I never used two interfaces so I can't tell you, but this is from the help function:

  • When configuring a management interface on a P4000 storage system, you must designate the storage interface as the SAN/iQ interface for that storage system in the CMC. This is done on the Communications tab in the TCP/IP configuration category for that storage system

Wvd
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

I have finished my testing and have come to the following conclusion:

 

Any P4000 VSA 9.5 with a hardware virtual machine version above 4 is performance impaired.

 

I have come to this conclusion by testing a lot of different configurations.

My test setup:

 

DL380 G7 with 12 450GB 10K SAS disks in RAID10

HP 2910AL switches

Dedicated iSCSI network and adapters

 

I tried a lot of different 9.5 VSA configurations but these are the most common:

 

  • VSA 9.5 with flexible adapter
  • VSA 9.5 with VMXNET3
  • VSA 9.0 with flexible adapter and upgraded to 9.5

Created a separate management group for all of them and created a volume that the ESXi server would connect to.

 

I created datastores on the volumes and deployed a clean VSA 9.5 OVF on each datastore. These would not be offering storage; they served as a quick test of virtual machine boot performance and latency on the datastores.

 

The results:

 

Booting the virtual machine resulted in latency spikes of 200-300ms on all datastores except for the 9.0-to-9.5 upgraded VSA with the Flexible adapter, where latency never went above 5ms.

 

I also timed the boot to verify if performance was indeed impacted.

The upgraded VSA 9.0 to 9.5 booted the virtual machine in 1min 10sec

All others booted in 1min 30sec.
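Those two boot times can be turned into a rough regression figure. A minimal sketch (the helper function is just for illustration, using the timings reported above):

```python
# Express the boot-time regression reported above as a percentage slowdown.
def slowdown_pct(baseline_s: float, regressed_s: float) -> float:
    """Return how much slower regressed_s is relative to baseline_s, in percent."""
    return (regressed_s - baseline_s) / baseline_s * 100

hw4_boot = 60 + 10   # 1 min 10 sec on the upgraded (HW version 4) VSA
hw7_boot = 60 + 30   # 1 min 30 sec on the newly deployed VSAs
print(f"{slowdown_pct(hw4_boot, hw7_boot):.0f}% slower")  # prints "29% slower"
```

So the new OVF deployments booted the test VM roughly 29% slower than the HW version 4 VSA, on top of the far worse latency spikes.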

 

The difference between an upgraded 9.0 to 9.5 and a newly deployed VSA 9.5 lies mainly in the fact that the upgraded VSA stays on VM hardware version 4.

My suspicion was confirmed when I upgraded the hardware version: the boot time of the test VM immediately went to 1m30s and the high latency appeared.
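For anyone who wants to check which hardware version a VSA is on, it is recorded as virtualHW.version in the VM's .vmx configuration file (on the host, under the VM's datastore folder). A minimal parsing sketch; the sample text below is a made-up fragment for illustration, not a real VSA file:

```python
# Hedged sketch: extract the virtual hardware version from .vmx file text.
def hw_version(vmx_text: str) -> int:
    """Return the value of virtualHW.version from a .vmx file's contents."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "virtualHW.version":
            return int(value.strip().strip('"'))
    raise ValueError("virtualHW.version not found")

# Hypothetical .vmx fragment, for illustration only.
sample = 'config.version = "8"\nvirtualHW.version = "4"\ndisplayName = "VSA-9.5"\n'
print(hw_version(sample))  # prints 4
```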

 

As a final test I also tried using a raw device mapping to local storage on a new VSA 9.5, as mentioned earlier in this thread. This improved performance (boot time went to 1m15s), but latency was still too high and spiky.

 

These tests were performed on ESXi 4.1 U1 and 5.0. It made no difference.

 

I am definitely keeping my VSAs on HW version 4.

yaodongxian
New Member

Re: Performance issues when using VSA on ESX with VMXNET3 driver

How can you keep the VM at hardware version 4 if you deploy the 9.5 VSA OVF to ESXi 5?

 

I am testing a 6-node P4000 VSA cluster with ESXi 5, and the VM hardware version is 7.

 

I am also experiencing crappy performance. I have 6 nodes in the cluster; each node has 5 disks, with VMXNET3 and one 10G port for the vSwitch uplink. I only get about 30MB/s throughput.
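To put 30MB/s in perspective against a 10GbE uplink, here is a quick back-of-the-envelope check (the line-rate figure ignores TCP/iSCSI protocol overhead, so real achievable throughput would be somewhat lower):

```python
# Rough utilization of a 10 GbE uplink at the observed throughput.
link_gbps = 10
observed_mb_s = 30
line_rate_mb_s = link_gbps * 1000 / 8   # ~1250 MB/s before protocol overhead
utilization = observed_mb_s / line_rate_mb_s
print(f"{utilization:.1%} of line rate")  # prints "2.4% of line rate"
```

Even allowing for overhead, 30MB/s is a tiny fraction of what the link should carry, which points at the same kind of problem discussed in this thread rather than a saturated network.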

Wvd
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver


@yaodongxian wrote:

How can you keep the VM at hardware version 4 if you deploy the 9.5 VSA OVF to ESXi 5?

 


The only way is to deploy the older 9.0 VSA OVF and then use the CMC to upgrade it to 9.5.

Keep us posted on the results...

5y53ng
Regular Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver


@Wvd wrote:

Created a separate management group for all of them and created a volume that the ESXi server would connect to.

 

I created datastores on the volumes and deployed a clean VSA 9.5 OVF on each datastore.

 

Your results are interesting, but could you clarify the above quote for me? Did your test consist of a single ESXi host with three VSAs, each in its own cluster, serving up a single volume? I would like to duplicate your test as closely as possible to see whether I get the same results.
Thanks.
Wvd
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

Correct: a single ESXi host with three VSAs, each in its own cluster, serving up a single volume.