Re: What DataStore size do you use?

eperez101
New Member

Re: What DataStore size do you use?

500 GB or 2 TB is what most customers use. We work on a rule of no more than 10-15 VMs per datastore if you are doing snapshots using VSS.
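A rough back-of-the-envelope sketch of that guideline (the VM count, average footprint, and headroom factor below are assumptions, not figures from the thread):

```python
import math

# Rough sizing sketch for the "10-15 VMs per datastore with VSS snapshots" guideline.
# All of the inputs below are hypothetical.
total_vms = 120              # assumption: size of the environment
max_vms_per_datastore = 15   # upper end of the 10-15 guideline
avg_vm_footprint_gb = 100    # assumption: average provisioned size per VM
headroom = 1.2               # assumption: ~20% free space for snapshots/growth

datastores_needed = math.ceil(total_vms / max_vms_per_datastore)
per_datastore_gb = max_vms_per_datastore * avg_vm_footprint_gb * headroom

print(f"datastores needed: {datastores_needed}")
print(f"suggested datastore size: {per_datastore_gb:.0f} GB "
      f"(compare against the 500 GB / 2 TB sizes above)")
```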

warkaj1
Advisor

Re: What DataStore size do you use?

What I'm doing now is placing VMs in datastores according to their function and IOPS.

jrich52352
Trusted Contributor

Re: What DataStore size do you use?

With Nimble, that technically shouldn't matter.

aspidle65
New Member

Re: What DataStore size do you use?

We have moved away from a few big datastores to single-VM datastores. We did this to maximize flexibility in replication schedules and in restoring a particular VM or VMDK. We size each datastore appropriately for the particular VM and VMDK combination.

SElaschuk
Occasional Advisor

Re: What DataStore size do you use?

This is what we do as well, based on http://www.nimblestorage.com/docs/downloads/Nimble-Storage-Architecting_Storage_in_Virtualized_Environments.pdf. It may or may not be manageable depending on your environment (number of VMs, etc.).

jrich52352
Trusted Contributor

Re: What DataStore size do you use?

Well, that can get scary if you are using VMware, since there is a limit of 256 volumes.

I had actually considered doing this: creating a single VM and doing a zero-copy clone to generate new ones, but my VM environment has over 256 servers, so that wouldn't work out well for me.
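To put rough numbers on that (just a sketch; the environment size and group sizes below are hypothetical, with 256 being the volume limit mentioned above):

```python
import math

# How per-VM vs. grouped datastores stack up against the 256-volume limit.
# The environment size and group sizes are made up for illustration.
volume_limit = 256
total_vms = 300  # e.g. an environment a bit over the limit, as described above

for vms_per_datastore in (1, 5, 10, 15):
    volumes = math.ceil(total_vms / vms_per_datastore)
    verdict = "fits" if volumes <= volume_limit else "exceeds the limit"
    print(f"{vms_per_datastore:>2} VM(s) per datastore -> {volumes:>3} volumes ({verdict})")
```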

I'm actually surprised at how some of you are handling this, since the I/O issue isn't really an issue with Nimble. Because you don't manage the actual disks, performance is (or should be) the same whether you use one volume or 100 volumes. The only real reason to split is the performance policy, but per another conversation you'd stick with the VMware policy unless you do a raw device mapping (RDM).

The only reason I stick with 2 TB is that, from what I've heard, anything larger can make SRM angry. I don't currently use it, but I'd like to stick with a configuration that could easily implement SRM.

Thanks for all the input, guys!

julez66
Frequent Advisor

Re: What DataStore size do you use?

For VMFS datastores it's 2 TB volumes here, aside from the volumes we use for test and lab purposes.

We decided to stay at a max of 10 TB for iSCSI Windows guest-initiated volumes (most of this is archived videos and such anyway).

wen35
Trusted Contributor

Re: What DataStore Size Do You Use with VMware?

One way to get around the 256-volume limit is to group a number of alike VMs into a datastore, then do the zero-copy clone, so it's more than a 1:1 mapping. If you only need a subset of the VMs from the source datastore, you could simply not register them, or remove them from your script, followed by a VMFS UNMAP to reclaim the space.
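A rough sketch of how that workflow might look when scripted (dry run only: it just prints the host-side commands for review; the datastore and VM names are placeholders, the array-side clone step is left to the Nimble CLI/GUI, and the vim-cmd/esxcli calls assume ESXi 5.5-style syntax):

```python
# Sketch of "zero-copy clone, register only the VMs you need, then UNMAP".
# This only assembles the ESXi commands; names are placeholders.
datastore = "ds-clone-01"                      # hypothetical label of the cloned datastore
vms_to_keep = ["app01", "app02"]               # subset of VMs to register
vmx_path = "/vmfs/volumes/{ds}/{vm}/{vm}.vmx"  # standard VMFS folder layout

commands = []

# 1. The zero-copy clone of the source volume happens on the array side,
#    so it is only noted here, not scripted.

# 2. Register just the VMs you need from the cloned datastore.
for vm in vms_to_keep:
    commands.append(f"vim-cmd solo/registervm {vmx_path.format(ds=datastore, vm=vm)}")

# 3. After deleting the unneeded VM folders from the clone, reclaim the
#    space on the thin-provisioned volume.
commands.append(f"esxcli storage vmfs unmap -l {datastore}")

for cmd in commands:
    print(cmd)  # run these on the host (e.g. over SSH) once reviewed
```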

One comment on separating volumes based on I/O characteristics: one consideration is DB vs. transaction log volumes. It is indeed a best practice to separate those, since the DB volume benefits from having cache turned on, whereas the transaction log volume just causes unnecessary cache churn. Keep the DB volume with cache enabled and the transaction log volume with cache disabled.
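A simple illustration of that split (a sketch only; the volume names and sizes are made up, and on the array the cache behaviour would actually come from the performance policy you assign):

```python
# Intended layout for the DB vs. transaction-log split described above.
# Names and sizes are hypothetical; this just captures the intent.
volume_plan = [
    {"name": "sql-db-01",  "size_gb": 500, "cache": "enabled",  "role": "database files"},
    {"name": "sql-log-01", "size_gb": 100, "cache": "disabled", "role": "transaction logs"},
]

for vol in volume_plan:
    print(f"{vol['name']:<11} {vol['size_gb']:>4} GB  cache {vol['cache']:<8} ({vol['role']})")
```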

Last but not least, interesting comment on SRM getting mad at volumes > 2 TB; I personally have not seen issues in this regard. I do hear, however, that SRM hates RDMs. I am checking with our QA team to see if they can comment on SRM and >2 TB volumes.

marktheblue45
Valued Contributor

Re: What DataStore Size Do You Use with VMware?

Like most responses: it depends. If I wasn't using vCenter to snapshot, then I'd have no issue having 50-100 plus VMs on an ESXi 5.5 datastore. Things change when you're using VMware Tools/vCenter to snapshot, and I wouldn't go much higher than 20-25. Even that depends, since some VMs might snap quickly and blow that guideline.

There are many "considerations", not least the 256-volume limit per array. If you replicate every volume both ways between arrays, divide by 2, leaving 128 live volumes per site. Obviously the volume limitation in our case might actually prevent us from doing a lot of in-guest attached iSCSI, plus the possibility of using Zerto or SRM makes those site recovery options more complicated, with scripting required. I'm coming around to the idea of creating datastores for the SQL DB, tempdb, logs, and tempdb logs, with no replication of tempdb or its logs.
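A quick volume-budget sketch of that point (only the 256-per-array limit and the divide-by-two for bidirectional replication come from the post above; the datastore and in-guest iSCSI counts are assumptions):

```python
# Volume budget when replicating every volume both ways between two arrays.
array_volume_limit = 256
live_volumes_per_site = array_volume_limit // 2   # the other half is consumed by replicas

vmfs_datastores = 40          # assumption
in_guest_iscsi_volumes = 60   # assumption: guest-attached iSCSI volumes
remaining = live_volumes_per_site - vmfs_datastores - in_guest_iscsi_volumes

print(f"live volumes available per site: {live_volumes_per_site}")
print(f"headroom left after datastores + in-guest iSCSI: {remaining}")
```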