StoreVirtual Storage

Bad iSCSI performance with VMware

Occasional Contributor

Bad iSCSI performance with VMware



We recently bought an HP LeftHand iSCSI SAN that we're testing before using it in our production environment.


When using it as a datastore in VMware we get horrible performance, so I hope someone has suggestions as to what the problem might be.



Test setup :


2x Dell R210 II servers (Core i3 2100, 2GB memory, 2x Broadcom BCM5716 NICs)

1x HP Lefthand P4300 G2 SAN


Lefthand :

- 2 volumes (1 NTFS for Windows test and 1 VMFS5 for vmware test).


Machine 1 :

- Windows 2008 R2 installed locally on drives. Using MS iSCSI initiator to connect to the NTFS volume


Machine 2 :

- VMware ESXi5 installed locally, with Windows 2008 R2 guest installed on VMFS5 volume




- Connecting from machine 1 to the NTFS volume gives ~900 Mbit/s throughput, both read and write.

- Connecting from machine 1 to machine 2 via Windows sharing/NetBIOS gives ~200 Mbit/s.

- Connecting from machine 2 to the NTFS volume gives ~200 Mbit/s.


So whenever the virtual machine is involved, performance is bad.


I tried enabling flow control and all TCP offload features on the ESXi host via ethtool in the console.

I tried installing all three Broadcom driver packages on the ESXi host (just installed the VIB packages).

I tried running ESXi 4.1 instead. Same problem.

I tried migrating the virtual Win2008 guest to the local datastore and transferring to the NTFS volume via the MS iSCSI initiator. Same bad performance.

I tried running both auto-negotiation and 1000M/Full on the vmnic.

I tried using different virtual NICs on the guest (E1000, VMXNET2 and VMXNET3).
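For reference, the ethtool changes I made looked roughly like this, run from the ESXi console (vmnic0 stands in for the actual uplink, and exact flag support depends on the Broadcom driver and ESXi build):

```shell
# Show current offload settings for the uplink (vmnic0 is a placeholder)
ethtool -k vmnic0
# Enable TCP segmentation offload (support depends on the driver)
ethtool -K vmnic0 tso on
# Enable RX/TX flow control, then verify the result
ethtool -A vmnic0 rx on tx on
ethtool -a vmnic0
```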




Configuration on Lefthand :

- 2 nodes added in same site running standard cluster. Both using an IP address in the same subnet, with a Virtual IP configured for the cluster.

- I have enabled load balancing for the server/cluster, so the LeftHand will load balance via the virtual IP.


ESXi host configuration :
- Added standard VMware iSCSI initiator and set the virtual IP as target.

- Added a datastore on the VMFS5 volume. Under Paths it's set to "Fixed"; since it uses a virtual IP it cannot use Round Robin.



The Dell R210 II servers are not very powerful, but they're our test equipment. Although I suspected they might not be strong enough for virtualization, other VMware users I talked to agreed that they should at least be able to run one guest with the same performance as Windows 2008 R2 installed on the local drive, considering it's identical hardware.


Any suggestions are welcome.


Valued Contributor

Re: Bad iSCSI performance with VMware

You only have two NICs in your test server? If both of them serve iSCSI connections, that alone would be a bottleneck in your test. All reads and writes to and from a VM first have to go through the iSCSI initiator and then back through the public network of your ESXi host. If both reside on the same NIC, this effectively halves the network bandwidth.


How is the performance of the VM if it only reads and writes to its own VMDK disk? I'd test that with a tool like Iometer, so that nothing interferes with the iSCSI network traffic to and from the LeftHand SAN. In a production environment you would typically have two or more NICs dedicated to iSCSI and use Round Robin to balance traffic between them and the LeftHand nodes.
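If Iometer isn't at hand, a crude stand-in for a sequential test from a Linux guest's shell could look like this (file path and sizes are arbitrary; on a Windows guest Iometer remains the better option):

```shell
# Rough sequential write test against the VM's own virtual disk:
# 64 KB blocks, 1 GB total; fdatasync flushes data before dd reports a rate
dd if=/dev/zero of=/tmp/ddtest bs=64k count=16384 conv=fdatasync
# Rough sequential read test (cached reads will inflate this number
# unless the file is larger than guest RAM or caches are dropped first)
dd if=/tmp/ddtest of=/dev/null bs=64k
rm -f /tmp/ddtest
```

This only exercises a single sequential stream, so it understates what the SAN can do under parallel load, but it at least takes the network share out of the picture.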


Re: Bad iSCSI performance with VMware

Your testing methodology needs to take a number of things into account as well - unrelated to the vendor of the iSCSI platform - because you are going to see the same results regardless of vendor in your test case.

Things like the VMware multipath configuration and the placement of the VMs (same SAN volume or different ones) all matter, and a Windows file copy is a terrible test of performance.

The only way to test these kinds of SANs is to simulate multiple streams of workload against the same SAN, using tools like Iometer.