StoreVirtual Storage

ESXi VSA vs. Physical SAN


I have two sites, each with a pair of LeftHand nodes and ESXi servers, connected by 2Gb fiber between them. We have been running a LeftHand campus SAN (now known as multi-site with a single VIP) since 2004, and it has worked phenomenally well.


Right now, I need to add more IOPs and capacity.


Option 1: 2x P4330 with 8x 900GB 10K SAS drives.


Option 2: 2x DL380 G8 ESXi hosts, each with 12 cores, 196GB RAM, and 8x 1.2TB 10K SAS drives in RAID 5. With this configuration I would run about 30 VMs on each ESXi host, plus the VSA appliance. These would also be in an HA cluster together.


The physical SAN model has been rock solid for us for the last 9 years on LeftHand/HP hardware, but it is tempting to jump to the VSA model. It seems to be the way the industry is heading, and according to Kate Davis at HP the 10TB limit is changing at some point. The DL380 G8 would have 25 bays, so more spindles could easily be added to each chassis, and it would let us run with less hardware at a lower cost.


Is anyone running VMs from a VSA hosted on the same ESXi host? What has your experience been? I am wondering specifically how well HA functions, how support handles issues, and about performance, but any other comments are appreciated.


Thanks all!

Honored Contributor

Re: ESXi VSA vs. Physical SAN

I'm not on ESXi, but I run a pure VSA setup on Hyper-V. You can run into a chicken-and-egg situation if everything goes down and you start everything up at the same time; it seems to need a little human interaction to make sure everything is running. Beyond that, we have been happy with VSAs on the VM hosts alongside the rest of production. We actually run two VSAs on each host, with two different data tiers in different clusters, and have been happy with those results as well.

Just make sure the VMs running on your SAN are set to a delayed start long enough to allow the VSAs to boot before the other VMs. (This is only an issue if you shut down all hosts and that causes the SAN to shut down as well; if the other VSAs are running and your SAN is up, it is not a problem.) Since shutting everything down should only happen when hell freezes over, this really shouldn't be an issue.
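Beyond a fixed startup delay, the "wait for the VSAs before starting the other VMs" idea can also be done actively. This is a minimal, generic sketch (not an ESXi or Hyper-V API): it simply polls the cluster VIP on the standard iSCSI target port (3260) until the SAN answers, the kind of check a startup script could run before powering on guest VMs. The function name and parameters are illustrative.

```python
import socket
import time

def wait_for_san(vip, port=3260, timeout=300, interval=5):
    """Poll the SAN virtual IP's iSCSI port until it accepts TCP
    connections or the timeout expires.

    vip      -- cluster virtual IP of the SAN (string)
    port     -- 3260 is the well-known iSCSI target port
    Returns True once the port is reachable, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP connect is a cheap "SAN is up" signal;
            # it does not validate an iSCSI login, just reachability.
            with socket.create_connection((vip, port), timeout=2):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

A host startup script could call this with the cluster VIP and only proceed to start the remaining VMs once it returns True, instead of guessing at a delay value.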


One word of caution: you really need to pay attention to reboot cycles for your hosts. I know this happens more often on Hyper-V than ESXi, but if you don't wait for the restripe process to complete and regain redundancy before you reboot the node holding the live data, you can lose access to your LUNs. This is easy to forget about, but also easy to prevent if you are paying attention. In your case, with a multi-site SAN using NR10+1 or NR10+2, you shouldn't have any problem as long as you don't reboot nodes at both sites at the same time.
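The "don't reboot until the restripe finishes" rule is easy to script as a pre-reboot gate. Below is a hedged sketch: it only shows the parsing side, against a hypothetical "volume: status" text format, since the exact management CLI command and its output vary by SAN/iQ version. A maintenance script would feed it the real status output and refuse to reboot while anything is still resyncing.

```python
def volumes_fully_resynced(status_lines):
    """Return True only if no volume reports a restripe/resync in
    progress.

    status_lines -- iterable of "volume-name: status" strings, e.g.
    captured from a management CLI. The exact command and output
    format depend on the SAN/iQ version; this format is hypothetical,
    for illustration only.
    """
    busy_words = ("restriping", "resyncing", "rebuilding")
    for line in status_lines:
        # Take the text after the first colon as the status field.
        status = line.split(":", 1)[-1].strip().lower()
        if any(word in status for word in busy_words):
            return False
    return True
```

A reboot wrapper would then be: capture the status output, call this check, and bail out (or wait and re-poll) if it returns False, so a node never goes down while it is the only copy of live data.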


One more word of caution: VSAs really benefit from faster CPU cores. The drawback of a VSA versus the hardware option is that you don't get as many cores to work with, so under load it is quick to run out of juice if you skimp on your host CPU GHz... don't!



Trusted Contributor

Re: ESXi VSA vs. Physical SAN

So in addition to what was said above, you need to be careful about "adding spindles" - it's not as easy as they make it sound!

You have to take the node out for repair, which can only be done with the help of support (or by rebooting and then doing it before the node comes online), add the disks, reconfigure the RAID, let it restripe, and then repeat for each VSA. You also need to be careful to remember to carve up the disks correctly. I am not too excited about the 2TB disks, as they are probably really Midline SAS labeled as SAS. The nice thing about the physical nodes is isolation.


However, if done correctly and with sound mind and judgement, VSAs work well.

As far as support? If you have VSAs and are having performance problems, you are pretty much on your own. Pay attention to the memory requirements - they are very critical with VSAs.