StoreVirtual Storage

Re: Performance issues when using VSA on ESX with VMXNET3 driver

 
5y53ng
Regular Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

Your results reinforce my findings from testing. While my test configurations were quite different from yours, I concluded there was something wrong with either the VSA or the iSCSI stack in ESXi 5. Write performance was terrible with this combination of VMware and the HP VSA.

 

Using ESXi 5 and VSA 9.5, I was unable to achieve more than 135 MB/sec in IOmeter with a sequential 64 KB 100% write workload. I had much better results with ESX 4.1 and VSA 9.0: roughly 230 MB/sec with the same IOmeter test profile.
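For anyone wanting to reproduce roughly the same workload from a Linux test VM instead of IOmeter, a fio run along these lines should be close. fio, the device name, and the queue depth are my assumptions, not part of the original test:

# Rough equivalent of a sequential 64 KB, 100% write test.
# WARNING: writing directly to /dev/sdb destroys any data on it; the device
# name is hypothetical -- point it at a scratch volume presented by the VSA.
fio --name=seqwrite64k --filename=/dev/sdb --rw=write --bs=64k \
    --ioengine=libaio --direct=1 --iodepth=16 --runtime=300 --time_based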

 

 

RonsDavis
Frequent Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

I'm going to build a pair of test VSA boxes and run through a set of tests. The variables I'm looking at right now are RDM vs. VMDK, hardware version 4 vs. 7 vs. 8, VSA 9.0 vs. 9.5, and Flexible vs. VMXNET3. Anyone else have other ideas I should test against?

What I'll basically do is set up IOmeter on a VM and present all of the available VSA storage to it as a second drive. I'll give the VM 4 CPUs and use 16 outstanding I/Os, since I'll have 8 drives in each node. The test will likely be the "All in One" access specification, run for at least a couple of hours.

I'll post results when I'm done, which will be a week or so.

 

dch15
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

We are seeing similar problems with two DL380 G7s and the 9.5 VSA. We're running ESXi 5 from SD cards on the DL380s and using the local storage for the VSA.

 

Have you heard back from HP yet on whether they have confirmed the problem, whether the fix is to use the E1000 driver, and/or whether it is related to the virtual machine hardware version?

 

Thanks,

 

Dan

M.Braak
Frequent Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

HP support told me that it is not their problem and that VMware should solve it. They also told me they do not support the E1000 driver, yet they tell me to use it if it fixes my problem!?!?
VMware has built a reproduction environment to investigate the issue, but I don't have an answer from them yet.
Using the Flexible driver seems to be the best solution for now. :-(

I will keep you updated as soon as I have an answer from VMware.

dch15
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

HTH or others, can you explain how you got RDMs to work on local storage with the VSA? When I try to set up the VSA using RDMs, the option is grayed out. Do you know if this is supported by HP? It makes sense to me that an RDM is more 'direct' than using VMDKs on VMFS, but I don't see anything about it in HP's docs.

 

We have a setup with 2 DL380s with local storage. We used SmartStart to set the drives up in a RAID 5 array with two logical disks (one small for the VSA itself and the other large for everything else on the array). Then we installed ESXi 5 on the small partition. When we go to set up the VSAs, how do we use RDMs with them?

 

Thanks,

 

Dan

DPHP
Frequent Visitor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

You can do it as described in VMware KB 1026256, "Creating a physical RDM":

vmkfstools -z /vmfs/devices/disks/<device> example.vmdk

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026256
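As a fuller sketch of the sequence on the ESXi host (the naa.* name, datastore, and directory below are placeholders, not values from a real setup):

# Find the naa.* name of the local logical disk
ls /vmfs/devices/disks/

# Create a directory on an existing VMFS datastore to hold the mapping file
mkdir -p /vmfs/volumes/datastore1/vsa1

# Create the physical RDM mapping file pointing at the local device
vmkfstools -z /vmfs/devices/disks/naa.600508b1001cXXXX /vmfs/volumes/datastore1/vsa1/vsa1-rdm1.vmdk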

 

Which block size do you use at the RAID 5 level? 64k?
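If you're not sure what was configured, and assuming the hpacucli VIB from the HP customized ESXi image is installed (the install path and controller slot number below are guesses), something like this shows it:

# Show logical drive details, including the configured strip size
/opt/hp/hpacucli/bin/hpacucli ctrl slot=0 logicaldrive all show detail
# Look for a line such as "Strip Size: 256 KB" in the output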

dch15
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

Thanks for the quick response. As for block size, we would have used whatever the SmartStart default was when we created the RAID 5 arrays on the two DL380s. What should we be using?

 

One question I have is whether you think creating physical RDMs on local storage will continue to be supported, or whether this is a feature that might go away in a future upgrade.

 

Also, do you have your logical disks set up so the first one is small, just to hold ESXi (if not booting from an SD card) and the VSA, and the second logical disk represents the rest of the storage that you would point to with the physical RDM? And how would you prepare/format the second disk for the physical RDM?

 

I apologize, but I am not as familiar as I should be with RDMs and their use, so if you could describe in a little more detail how to do this, I would greatly appreciate it.

 

Thanks very much,

 

Dan

 

virtualmatrix
Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

 

> One question I have is whether you think creating physical RDMs on local storage will continue to be supported, or whether this is a feature that might go away in a future upgrade.

 

Support is a question that only HP or VMware could address authoritatively, but it would be tough for them to remove such features now.

 

But we can "intelligently speculate"... :-) 

For VMware: what they say is that, by default, local RDMs are not supported, but there *are* cases where local RDMs are fine (refer to KB 1017530). It comes down to whether your controller exports a globally unique ID (refer to KB 1014953). In our case, VMware engineering specifically recommended local RDMs.
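A quick way to see whether your controller exports a globally unique ID is to look at how the local devices are named on the host; this is just a sketch of the check, not an HP-documented procedure:

# Devices whose controller exports a unique ID show up with naa.* names;
# devices without one typically appear as mpx.* instead.
ls /vmfs/devices/disks/
esxcli storage core device list | grep -i "display name"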

 

A few more KBs to check out:

http://kb.vmware.com/kb/1017530

http://kb.vmware.com/kb/1017704

http://kb.vmware.com/kb/1014953

 

For HP: the VSA just sees virtual disks (VMDKs). It doesn't know or care what is physically behind a virtual disk, be it a VMDK on top of VMFS, a mapping to a block storage device, or even a mapping to another iSCSI device. What the VSA *does* care about is the performance (latency, etc.) characteristics of the device. So, while you *could* give your VSA a pile of VMDKs that live on remote NFS or iSCSI or FC, you would want to understand the implications versus VMDKs living on local storage. (I've heard of people using this feature to consolidate storage from other vendors into SAN/iQ.) Anyway, this brings us back to the benefits of local RDMs... which, in theory, by removing layers of complexity, should perform slightly better than the default VMDK on VMFS.

 

> Also, do you have your logical disks set up so the first one is small just to hold ESXi (if not booting from an SD card) and the VSA, and the second logical disk represents the rest of the storage that you would point to with the physical RDM? 

 

You've got it. 

We boot ESXi from a USB device.  We have a tiny logical disk to hold the VSA "OS" disks.  We then divided the rest of the storage into < 2TB logical disk chunks (although we have tested > 2 TB local RDMs with ESXi 5.0).  Each of these logical disks appears as an naa.* device in /dev/disks.  We created RDMs from these devices and then added those RDMs to the VSAs as VMDKs.
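If it helps, the mapping files for several chunks can be created in one go from the ESXi shell; the device names and datastore path here are placeholders rather than our actual values:

# One physical RDM mapping file per local logical disk (names are hypothetical)
for DEV in naa.600508b1001cXXX1 naa.600508b1001cXXX2; do
    vmkfstools -z /vmfs/devices/disks/$DEV /vmfs/volumes/datastore1/vsa1/$DEV-rdm.vmdk
done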

 

> Also how would you prepare/format the second disk for the physical RDM?

 

No need to prepare them any differently from regular VMDKs. You just add them to the virtual machine (the VSA) as disks, like you would regular VMDKs -- on their own virtual controller, etc. In the VM's settings, instead of creating a new VMDK on a VMFS, you add a disk that "already exists" and point it at the RDMs you created.
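For reference, once a disk has been added this way, the VM's .vmx file ends up with entries along these lines; the controller type, numbering, and path are illustrative, not taken from our configuration:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "/vmfs/volumes/datastore1/vsa1/vsa1-rdm1.vmdk"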

 

> I apologize, but I am not as familiar as I should be with RDMs and their use, so if you could describe in a little more detail how to do this, I would greatly appreciate it.

 

Don't apologize -- this method is a bit trickier, so you can see why it is not something HP may want to document in their normal "quick-start" guides. But for those running into strange problems that could stem from the extra layers of plumbing, or those wanting or needing to squeeze out more performance, it is a path to consider. We went "all in" on this method after it resolved our serious performance problems, but it may not be a good solution for your case.

 

Hope that helps. 

 

For anyone else reading this far (and still awake) -- have you tried RDMs?  Did it make any difference?  Any other tips/tricks?

 

dch15
Occasional Advisor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

Wow, that's incredibly helpful. I need time to digest this. I appreciate the detailed description.

 

Dan

DPHP
Frequent Visitor

Re: Performance issues when using VSA on ESX with VMXNET3 driver

@virtualmatrix: Would you recommend > 2 TB local RDMs with ESXi 5.0? Did you also see the (300+) latencies with VM hardware version 7? Did you play around with the underlying RAID level (RAID 5) block size, or did you always use the default block size of 256k? Thanks for your feedback.