
Number of VMs per datastore and alerting

Occasional Visitor



I noticed a recent post by Nick with a link to Jason Boche's blog post:

VAAI and the Unlimited VMs per Datastore Urban Myth » boche.net – VMware vEvangelist

and the recommendation

"10 HIGH IO VMs, 15 AVERAGE IO VMs or 20 LOW IO VMs."

Other than monitoring VM latency, is there any proactive way to monitor datastores that are beginning to breach thresholds where performance will be impacted? I've read various blog posts on using esxtop, but most of those cover diagnosing the root cause once you already have a performance issue.

Overall, it would be good to be able to alert on any LUN resource constraints before the VMs actually experience issues. I was also thinking that some of those "LOW IO" VMs could become high-IO VMs and cause a performance impact for all the VMs on that LUN.
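As a sketch of what that kind of proactive alerting could look like, here is a minimal Python example. The thresholds, datastore names, and the idea of feeding it a dict of latest average device latencies are all illustrative assumptions, not anything vSphere ships; the point is to warn well before latency reaches the level where guests notice.

```python
# Sketch: proactive per-datastore latency check.
# WARN_MS/CRIT_MS and the sample data are assumptions, not vSphere defaults.

WARN_MS = 10   # warn with headroom, before the ~20 ms range where guests suffer
CRIT_MS = 20

def classify(latency_ms):
    """Map an observed average device latency (ms) to an alert level."""
    if latency_ms >= CRIT_MS:
        return "critical"
    if latency_ms >= WARN_MS:
        return "warning"
    return "ok"

def check(datastores):
    """datastores: dict of datastore name -> latest avg device latency in ms."""
    return {name: classify(ms) for name, ms in datastores.items()}

# Hypothetical readings from a monitoring poll
sample = {"DS-OS-01": 4.2, "DS-DB-02": 14.8, "DS-DA-03": 27.5}
print(check(sample))
```

In practice the latency dict would be populated from whatever your monitoring tool (or the vSphere performance counters) exposes; the two-tier threshold is what makes it proactive rather than reactive.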




Re: Number of VMs per datastore and alerting

I've heard good things about products from VM Turbo and Solarwinds for providing this sort of monitoring and alerting within a vSphere environment. Perhaps Storage DRS could also help address these sorts of concerns around latency/disk queues at the datastore level?

Occasional Visitor

Re: Number of VMs per datastore and alerting

Thanks Nick. We currently use Solarwinds, so I'll have a look in there.

Doesn't using SDRS result in all of the thin-provisioned LUNs eventually ending up effectively thick until you run UNMAP?
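To put a rough number on that effect, here is a small sketch (the figures and the function are hypothetical, not from any vSphere or array API) estimating what fraction of a thin LUN's array-side allocation could be handed back by UNMAP once migrations have left dead space behind:

```python
# Sketch: estimating thin-LUN "bloat" left behind by VM migrations/deletions.
# allocated_gb = space the array has committed; in_use_gb = what VMs actually use.

def reclaimable_pct(allocated_gb, in_use_gb):
    """Percentage of the LUN's allocation that an UNMAP could give back."""
    if allocated_gb <= 0:
        return 0.0
    return round(100.0 * (allocated_gb - in_use_gb) / allocated_gb, 1)

# e.g. SDRS moves left ~300 GB of dead space on a 1 TB allocation
print(reclaimable_pct(1024, 724))
```

Tracking a figure like this per LUN is one way to decide when a reclaim run is actually worth the I/O cost, rather than running it on a fixed schedule.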


Re: Number of VMs per datastore and alerting

We've taken a hybrid approach to our datastores: a mix between unlimited VMs per datastore and dedicated datastores for high-IO VMs.

What we did was leverage our VMware Ent+ license level to build datastore clusters, broken out into OS, DA, DB, and LG-NC clusters of several datastores each:

- OS: C drives and other OS partitions
- DA: data partitions, D drives, etc.
- DB: databases and other high-IO drives
- LG-NC: logs and non-cache drives such as backups

We let Storage DRS handle IO or latency issues as they arise. We are alerted and can take action if an issue lingers for a prolonged period; short anomalies aren't much of a concern.
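The "act only if the issue lingers" idea above can be sketched as a sustained-breach check: require several consecutive breaching samples before raising an alert, so short I/O spikes (which Storage DRS absorbs anyway) stay quiet. The threshold and sample count below are illustrative assumptions.

```python
# Sketch: alert only on sustained latency breaches, ignoring short anomalies.
from collections import deque

class SustainedAlert:
    def __init__(self, threshold_ms=20, samples_required=3):
        self.threshold = threshold_ms
        # Rolling window of the last N breach/no-breach observations
        self.window = deque(maxlen=samples_required)
        self.required = samples_required

    def observe(self, latency_ms):
        """Record one polling-interval sample; True once the breach has lasted."""
        self.window.append(latency_ms >= self.threshold)
        return len(self.window) == self.required and all(self.window)

alert = SustainedAlert()
readings = [25, 8, 26, 27, 28]   # a lone spike, then a sustained breach
print([alert.observe(r) for r in readings])
```

Only the final reading trips the alert: the first spike is interrupted by a healthy sample, so it never accumulates the required consecutive breaches.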

If there are specific machines that require unique settings, we can create dedicated datastores for them and apply whatever we need there.