12-08-2011 02:34 AM
Bad iSCSI performance with VMware
Hello,
We recently bought an HP LeftHand iSCSI SAN that we're testing before using it in our production environment.
When using it as a datastore in VMware we get horrible performance, so I hope someone has suggestions as to what the problem could be.
Test setup :
2x Dell R210 II servers (Core i3 2100, 2GB memory, 2x Broadcom BCM5716 nics)
1x HP Lefthand P4300 G2 SAN
Lefthand :
- 2 volumes (1 NTFS for the Windows test and 1 VMFS5 for the VMware test).
Machine 1 :
- Windows 2008 R2 installed locally on drives. Using MS iSCSI initiator to connect to the NTFS volume
Machine 2 :
- VMware ESXi5 installed locally, with Windows 2008 R2 guest installed on VMFS5 volume
...............
- Connecting from machine 1 to the NTFS volume gives ~900 Mbit/s throughput for both read and write.
- Connecting from machine 1 to machine 2 via Windows sharing/NetBIOS gives ~200 Mbit/s.
- Connecting from machine 2 to the NTFS volume gives ~200 Mbit/s.
So whenever the virtual machine is involved, performance is bad.
I tried enabling flow control and all TCP offload features on the ESXi host via ethtool in the console.
I tried installing all three Broadcom driver packages on the ESXi host (just installed the VIB packages).
I tried running ESXi 4.1 instead; same problem.
I tried migrating the virtual Windows 2008 guest to the local datastore and transferring to the NTFS volume via the MS iSCSI initiator; same bad performance.
I tried running both auto-negotiation and 1000M/FULL on the VMNIC.
I tried using different virtual NICs in the guest (E1000, VMXNET2 and VMXNET3).
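For anyone wanting to reproduce the flow-control/offload checks above, here is a sketch of the commands as run from the ESXi console or SSH. This is an assumption-laden example, not the poster's exact session: `vmnic0` is a placeholder for the iSCSI uplink, and exact `esxcli` namespaces vary between ESXi 4.x and 5.x.

```shell
# List all physical NICs with their negotiated link speed/duplex
esxcli network nic list

# Show pause (flow control) settings for one uplink (placeholder name)
ethtool -a vmnic0

# Show offload features currently enabled (TSO, checksum offload, ...)
ethtool -k vmnic0
```

If flow control shows as off here but is enabled on the switch port, the mismatch itself can cause drops and retransmits under load.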
Configuration on Lefthand :
- 2 nodes added in same site running standard cluster. Both using an IP address in the same subnet, with a Virtual IP configured for the cluster.
- I have enabled Load balancing for the server/cluster so the lefthand will load balance via the virtual IP.
ESXi host configuration :
- Added standard VMware iSCSI initiator and set the virtual IP as target.
- Added a datastore with the VMFS5 volume. Under the paths it's set to "Fixed"; since it uses a virtual IP it cannot use Round Robin.
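It may still be worth verifying which path selection policy each device actually got. A hedged sketch (the `naa.xxxx` device identifier is a placeholder; the second command only pays off if you later present multiple paths, e.g. per-node connectivity instead of the single virtual IP):

```shell
# Show each iSCSI device and the path selection policy (PSP) assigned to it
esxcli storage nmp device list

# Switch one device to Round Robin -- only useful once multiple paths exist
esxcli storage nmp device set -d naa.xxxx --psp VMW_PSP_RR
```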
The Dell R210 II servers are not very powerful, but they are our test equipment. Although I suspected they might not be strong enough for virtualization, I talked to other people using VMware who agreed that they should at least be able to run one guest with the same performance as the Windows 2008 R2 installed on the local drive, considering it's identical hardware.
Any suggestions are welcome.
12-08-2011 05:32 AM
Re: Bad iSCSI performance with VMware
You only have two NICs in your test server? If both of them serve iSCSI connections, that would be a bottleneck in your test. All reads and writes to and from a VM first have to go through the iSCSI initiator and then back through the public network of your ESXi host. If both reside on the same NIC, that alone would effectively halve the network bandwidth.
How is the performance of the VM if it only reads and writes on its own VMDK disk? I'd test that with a tool like IOMeter, so that nothing interferes with the iSCSI network traffic to and from the LeftHand SAN. In a production environment you would typically have two or more NICs dedicated to iSCSI and use Round Robin to balance traffic between them and the LeftHand nodes.
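Before setting up a full IOMeter profile, a crude sequential-write check inside the guest can already show whether the VMDK path is the slow leg. A minimal sketch, assuming a Linux-style guest with GNU dd; `/tmp` and the 64 MiB size are placeholders for the disk under test:

```shell
# Write 64 MiB to the disk under test; conv=fdatasync forces the data to
# actually hit the disk before dd reports, so the MB/s figure is honest.
TESTFILE=/tmp/seq_write_test.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Clean up the test file afterwards
rm -f "$TESTFILE"
```

GNU dd prints the achieved throughput on completion; compare the figure against the ~200 Mbit/s (~25 MB/s) seen over the network.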
12-08-2011 01:06 PM
Re: Bad iSCSI performance with VMware
Performance here depends on things like the VMware multipath configuration and the placement of the VMs (same SAN volume or different ones), and a Windows file copy is a terrible test of performance.
The only way to test these kinds of SANs is to simulate multiple streams of workload against the same SAN, using tools like IOMeter.
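The multi-stream point above can be approximated without IOMeter: run several sequential writers in parallel against the volume under test. A rough sketch under stated assumptions (GNU dd, a Linux-style guest; paths and sizes are placeholders):

```shell
# Four parallel sequential writers -- a crude stand-in for an IOMeter
# multi-worker run. Point the output paths at the datastore under test.
for i in 1 2 3 4; do
  dd if=/dev/zero of="/tmp/stream_$i.bin" bs=1M count=16 conv=fdatasync 2>/dev/null &
done
wait   # block until all four writers have finished

ls -l /tmp/stream_*.bin   # all four files should exist
rm -f /tmp/stream_*.bin   # clean up
```

If aggregate throughput under four streams is far higher than a single stream, the bottleneck is per-connection (queue depth, a single NIC path), not the array itself.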