StoreVirtual Storage

Running multiple VSAs in the same ESX Host

 
Andres M
Occasional Contributor

Running multiple VSAs in the same ESX Host

I have a beefy server and I'd like to run more than one VSA VM on the same ESX host. I read the user guide, but it does not really talk about this. Is this supported? If so, do I have to create individual VMFS datastores for each VSA, or can they share a single one?

Thanks in advance,

Andres
10 REPLIES
Niels Vejrup Pedersen
Respected Contributor

Re: Running multiple VSAs in the same ESX Host

To my knowledge it's only supported for test environments - but there is no restriction on doing so.

e.g. if you want to share out more than 10TB of local storage you will need more than one VSA.

You will be required to create individual VMFS volumes for the disks you assign to each VSA, e.g. a 2TB VMFS datastore holding a 2TB VMDK for the VSA.
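For example, you could carve the data VMDKs out from the service console roughly like this (just a sketch - the datastore paths, VSA names and sizes are placeholders):

# A rough sketch (not an official procedure) of creating one large data VMDK
# per VSA, each on its own dedicated VMFS datastore. Paths and names below
# are placeholders.
import os
import subprocess

# one dedicated VMFS datastore per VSA, as described above
layout = {
    "VSA1": "/vmfs/volumes/vsa1_local_ds",
    "VSA2": "/vmfs/volumes/vsa2_local_ds",
}

for vsa, datastore in layout.items():
    vmdk = f"{datastore}/{vsa}/{vsa}_data.vmdk"
    os.makedirs(os.path.dirname(vmdk), exist_ok=True)
    # VMFS-3 caps a single file just under 2 TB with an 8 MB block size,
    # hence 2040G rather than a full 2048G
    subprocess.run(
        ["vmkfstools", "-c", "2040G", "-d", "eagerzeroedthick", vmdk],
        check=True,
    )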

I'm not sure why they state this restriction for the VSA; you could occasionally share datastores if you create disks smaller than 2TB - but it's not supported per the documentation.

Regards
Andres M
Occasional Contributor

Re: Running multiple VSAs in the same ESX Host

Niels,

Thanks for your reply. Could you tell me which document mentions that it is not supported in production?

tks

Andres
Niels Vejrup Pedersen
Respected Contributor

Re: Running multiple VSAs in the same ESX Host

Hello,

I cannot find anything anywhere about running more than one VSA on the same ESX host; I just remember that a guy from LeftHand did not recommend doing it.

The part about not sharing VMFS volumes with other machines is printed in the VSA user guide.

Regards
Gauche
Trusted Contributor

Re: Running multiple VSAs in the same ESX Host

I read two slightly different questions in this.
1- Is having two VSAs on the same VMFS datastore supported?
No, it is not, as stated in the manual. The reason is that when VMs of any type, including the VSA, share a datastore, they each must lock and unlock the storage repeatedly. Putting two SAN VMs (the VSA) on the same datastore causes so much locking and unlocking that performance degrades too much.
2- Are multiple VSAs on the same ESX server supported?
Yes, but it is not a best practice. There are few cases where this makes sense, but when it does, we do support it. A few customers are actually running this way in production.
The reasons are usually around building some test/dev/lab environment or some special disk/data layout where it makes sense. The thing to be very wary of, though, is putting VSAs on the same ESX server that are in the same storage cluster or are both "managers" in the same management group. Essentially the fear is that you'll build a configuration where the reboot or loss of one ESX server takes down enough of the right (or is it wrong?) VSAs, causing data to go offline at a time you did not expect.
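To make that concrete, a quick sanity check along these lines (the inventory below is made up for illustration, not pulled from the CMC or vCenter) would flag the layouts I'm warning about:

# Hypothetical inventory check: two VSAs in the same storage cluster, or two
# managers in the same management group, should not share an ESX host.
from collections import defaultdict

vsas = [
    # name,   esx_host, storage_cluster, mgmt_group, is_manager
    ("VSA1", "esx01",  "Cluster-A",     "MG-1",     True),
    ("VSA2", "esx01",  "Cluster-A",     "MG-1",     True),   # bad: same host
    ("VSA3", "esx02",  "Cluster-A",     "MG-1",     False),
]

by_host = defaultdict(list)
for name, host, cluster, group, is_mgr in vsas:
    by_host[host].append((name, cluster, group, is_mgr))

for host, members in by_host.items():
    for i in range(len(members)):
        for j in range(i + 1, len(members)):
            a, b = members[i], members[j]
            same_cluster = a[1] == b[1]
            both_managers = a[3] and b[3] and a[2] == b[2]
            if same_cluster or both_managers:
                print(f"WARNING: {a[0]} and {b[0]} share {host}; "
                      f"losing that host may take the volume or quorum down")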

I hope that helps.
Adam C, LeftHand Product Manager
teledata
Respected Contributor

Re: Running multiple VSAs in the same ESX Host

I've been doing some testing and performance metrics with vSphere 4 / VSA 8.1, and one thing you may want to consider would be RDMs for the VSA disks. I'm not sure, but perhaps this would eliminate some of the VMFS locking issues?

I'm also unsure of the official HP support policy regarding RDMs. From all the performance docs VMware released about RDM vs. VMDK, I expect to see little difference (IOPS-wise), although none of their tests used a virtual SAN product - they were more database or streaming tests, and their published results were on ESX 3.5.
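For reference, this is roughly how an RDM mapping file gets created for that kind of test (only a sketch - the device ID and paths are placeholders, and your local controller may not allow RDM at all):

# Roughly how an RDM mapping file can be created for testing. The device NAA
# ID and datastore path below are placeholders.
import subprocess

device = "/vmfs/devices/disks/naa.600508b1001c0000000000000000000a"  # placeholder
mapping = "/vmfs/volumes/vsa1_local_ds/VSA1/VSA1_rdm.vmdk"           # placeholder

# -r creates a virtual compatibility mode RDM; -z would create a
# physical (pass-through) mode RDM instead
subprocess.run(["vmkfstools", "-r", device, mapping], check=True)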

I'd like to hear if anyone else has done any VSA testing on RDM.

I also noticed somewhat better IOPS after applying the patch (10050) that brings the VMware Tools to the latest version.
http://www.tdonline.com
Uwe Zessin
Honored Contributor

Re: Running multiple VSAs in the same ESX Host

> each must lock and unlock the storage repeatedly.

What for?

The VSAs are not constantly being powered on/off, the VMDKs are not changed, no snapshots, etc. The only metadata changes I anticipate are due to maintaining the VMs' logfiles.

I understand that no datastore sharing is allowed to prevent another VM from 'stealing' I/O capacity.
Niels Vejrup Pedersen
Respected Contributor

Re: Running multiple VSAs in the same ESX Host

Gauche - I don't agree with the argument about getting bad performance due to locking - locking only happens on metadata updates, and almost none will be made unless you perform certain operations, e.g. snapshots or power-on/power-off and so on.

Teledata - I'm not sure why you would want to use RDMs for this (I'm actually not sure it's supported by the VSA) - but performance-wise you will see little or no noticeable gain.

Regards
Uwe Zessin
Honored Contributor

Re: Running multiple VSAs in the same ESX Host

Very often you cannot set up an RDM on a 'local disk', because it is missing some SCSI characteristics required by ESX.
Gauche
Trusted Contributor

Re: Running multiple VSAs in the same ESX Host

Good points, I don't really disagree with any of them. Just adding some more of my egotistical opinion ;)

RDM - If you can, I'd do it, but without much expectation of higher performance. Really, I'd just do it because conceptually it is simpler and you are eliminating an unnecessary layer of virtualization.

Shared VMFS / volume locking - Expanding on the reason we don't like the VMFS being shared with other VMs... It's true that the VSA probably won't lock the volume often, because the VSA should not be rebooting often or doing vMotion. The real concern is that we don't know that about the other VMs you might put on the volume with the VSA. If you put four other VMs on the same VMFS as the VSA, and those other VMs reboot or vMotion often (perhaps due to DRS), then the VSA will experience I/O pausing whenever those VMs lock the datastore. So I'm not concerned about the VSA hurting the other VMs' performance; I'm concerned about the other VMs stealing performance, locking the VMFS, and hurting the VSA.

And honestly, my concern might be more out of fear than reality. I just say that because I've only actually seen one field issue where other VMs on the same datastore were messing up the VSA's performance, and moving those VMs off the VSA's datastore solved the issue.
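If you want to double-check that nothing else is sitting on the VSA's datastore, something quick like this from the service console would do it (just a sketch - names are placeholders, and the datastore may show up as a UUID rather than its label):

# Parses the datastore out of each .vmx path reported by vmware-cmd -l and
# warns about any non-VSA VM registered on the VSA's datastore. Names are
# placeholders.
import subprocess

VSA_DATASTORE = "vsa1_local_ds"   # placeholder: datastore holding the VSA
VSA_NAME = "VSA1"                 # placeholder: the VSA's VM name

output = subprocess.check_output(["vmware-cmd", "-l"]).decode()

for vmx in output.splitlines():
    # paths look like /vmfs/volumes/<datastore>/<vm folder>/<vm>.vmx
    parts = vmx.strip().split("/")
    if len(parts) > 4 and parts[3] == VSA_DATASTORE and VSA_NAME not in vmx:
        print("WARNING: %s shares %s with the VSA" % (vmx, VSA_DATASTORE))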
Adam C, LeftHand Product Manager