StoreVirtual Storage

Design P4000 large LUNs for VMware?

MattGGG
Occasional Contributor

Design P4000 large LUNs for VMware?

Now that vSphere 5 can handle LUN sizes up to 64TB and VAAI alleviates (or minimizes) SCSI Reservations,  what are the downsides to creating large VMFS volumes on a P4000?


If all disks in a P4000 are wide-striped, why does it make sense to do 3 x 1TB VMFS volumes vs. 1 x 3TB volume?


Do P4000 LUN queues play a role in this design?  If so, how?


Did I miss any other factors?


Thanks,

-Matt

2 REPLIES
bryan_hadzik
Frequent Advisor

Re: Design P4000 large LUNs for VMware?

Some of the drivers could be more management related:
Snapshot schedules

Remote IP copy usage

But other than that, look at what VMware states in the FAQ:


"To encourage customers to use larger and fewer datastores, in ESXi 5.0, support for Thin Provisioning VAAI primitive has been added."


I think they added this because of the extreme example of some customers using a 1:1 ratio of VMs to datastores.

I like to land somewhere in the middle: a couple of large datastores, sized around snapshot schedules.

L1nklight
Valued Contributor

Re: Design P4000 large LUNs for VMware?

I am new to the P4000 game, but I've used VMware for quite a long time. While VMware supports large LUNs now, it's not always best practice to create a large LUN and cram a ton of vmdks onto it.

Virtual disks mounted on the same VMFS volume share the same command queue. So while each individual logical disk in Windows has its own command queue (usually 32 or 64 slots deep), multiple virtual machines on the same VMFS volume all share that one volume's command queue. It's not always the easiest thing to maintain, but for the deepest command queues you would put C:, D:, and E: (for example) on different VMFS volumes. This also effectively isolates I/O (assuming each VMFS volume sits on a single LUN delivered from the SAN).

If you put all virtual disks on the same VMFS volume, a badly behaving server/process could theoretically dominate the disk I/O and, by extension, cause issues for every other virtual machine sharing that volume. This is where Enterprise Plus features like Storage DRS come in handy: it takes the badly behaving virtual machine and moves it (using Storage vMotion) to a VMFS volume with less contention.
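To make the noisy-neighbor point concrete, here is a rough back-of-the-envelope model (not VMware code; the VM names, demands, and the 32-slot queue depth are made-up illustrative numbers). It treats one datastore's device queue as a pool of slots split proportionally among the VMs' outstanding I/Os, and compares that to giving each VM its own datastore:

```python
def shared_queue_slots(queue_depth, demand):
    """Toy proportional-share model of one VMFS datastore's device queue.

    demand maps VM name -> outstanding I/Os. When total demand exceeds
    the queue depth, each VM only holds slots in proportion to its
    demand, so one noisy VM crowds everyone else out of the queue.
    """
    total = sum(demand.values())
    if total <= queue_depth:
        return dict(demand)  # everything fits; no contention at all
    return {vm: queue_depth * d / total for vm, d in demand.items()}

# Hypothetical workloads: two quiet VMs and one I/O-heavy one.
demand = {"web": 4, "app": 4, "db_noisy": 100}

# All three vmdks on one shared datastore with a 32-slot queue:
shared = shared_queue_slots(32, demand)

# One datastore (and thus one full queue) per VM:
isolated = {vm: min(d, 32) for vm, d in demand.items()}

# In the shared case the quiet VMs get barely a slot each, while
# isolated they get every slot they ask for.
print(shared, isolated)
```

Under this model the quiet "web" VM drops from its full 4 in-flight I/Os to roughly 1 slot on the shared queue, which is the latency penalty the separate-volume layout (or Storage DRS moving the noisy VM away) avoids.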