Around the Storage Block

Reclaiming disk space with VMware vSphere and HPE storage

Learn how reclaiming disk space with VMware vSphere UNMAP and HPE 3PAR and Nimble Storage allows you to make the most efficient use of your storage capacity.

The ability to reclaim provisioned storage is an important feature in vSphere that allows storage arrays to operate at maximum efficiency and ensures that previously used capacity becomes available again. The process of reclaiming storage is called UNMAP, a SCSI command that storage arrays use to reclaim disk blocks after the data residing on those blocks has been deleted. An UNMAP command is normally issued by a host; it tells the storage array exactly which disk blocks contain deleted data so the array can deallocate them, increasing the amount of free space available on the array.

So why can’t a storage array just reclaim space on its own without having to be told to do it? The reason is that storage arrays are unaware of what is happening inside a file system, whether it is a Windows file system or a VMFS datastore; essentially, they have no visibility into what is happening within a LUN. As a result, they have no knowledge of when files or VMs are deleted or which disk blocks they reside on. Because of this, whatever is managing the file system (i.e., Windows or ESXi) has to tell the storage array when deleted data can be reclaimed.

How vSphere reclaims space on a storage array

Reclaiming space is important. vSphere offers several storage operations that result in data being deleted on a storage array: performing a Storage vMotion, where a VM is moved from one datastore to another; deleting VM snapshots; and deleting VMs. Of these operations, Storage vMotion has the biggest impact on thin provisioning because whole virtual disks move between datastores, leaving behind a lot of dead space on the source that the array still considers allocated until it is reclaimed. In addition, when data is deleted within a thin-provisioned VM, that space can be reclaimed because the guest OS can send UNMAP commands that are passed on to the storage array.

Performing an UNMAP of disk blocks can be a resource-intensive operation on the storage array, and because of this, the behavior of the UNMAP function has changed across vSphere releases since its introduction in vSphere 5.0. UNMAP initially started as a synchronous operation, meaning disk blocks were reclaimed in real time as soon as data was deleted in vSphere. Due to issues that were uncovered fairly quickly, UNMAP became a manual process that had to be initiated via CLI commands, starting with vSphere 5.0 U1 and continuing through vSphere 6.0. The manual process worked, but it was resource intensive, time consuming, and inefficient: instead of being aware of which space needed to be reclaimed, it created a large balloon file and tried to reclaim everything it could.
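As a rough sketch of what that manual process looked like (the datastore name is an example, and exact syntax varies by release, so check the documentation for your build):

```shell
# vSphere 5.0 U1 / 5.1: run from inside the datastore's directory;
# this creates a temporary balloon file over a percentage of free space
cd /vmfs/volumes/Datastore1
vmkfstools -y 60          # attempt to reclaim 60% of the free space

# vSphere 5.5 / 6.0: esxcli-based manual UNMAP
# --reclaim-unit is the number of VMFS blocks unmapped per iteration
esxcli storage vmfs unmap --volume-label=Datastore1 --reclaim-unit=200
```

Either form had to be scheduled and run by an administrator, typically during a maintenance window because of the load it placed on the array.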

Finally, in vSphere 6.5, the UNMAP process became automatic again, although this time as an asynchronous operation, meaning disk blocks are not reclaimed in real time but by a background process at a fixed low rate (25 MBps). This was a welcome and long-awaited change, as vSphere was finally telling the storage array exactly which disk blocks to reclaim, just doing it at a slow pace. In vSphere 6.7 this was improved further with a user-configurable reclamation rate that controls how fast vSphere tells the storage array which disk blocks to reclaim. The reclamation rate can be set in the vSphere client from 100 MBps to 2000 MBps. However, unless you are in a real hurry to get storage capacity back, it’s recommended to keep the rate on the low side to lessen the impact of the UNMAP operation on your VM workloads. You can also disable space reclamation completely if you have plenty of disk space and don’t want any resource impact from running UNMAP.
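These per-datastore settings can also be viewed and changed from the ESXi shell. As an illustrative sketch (the datastore name is an example; verify the option names against `esxcli storage vmfs reclaim config` help on your build):

```shell
# Show the current automatic reclamation settings for a datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore1

# vSphere 6.7: set a fixed reclamation bandwidth (in MB/s)
esxcli storage vmfs reclaim config set --volume-label=Datastore1 \
    --reclaim-method=fixed --reclaim-bandwidth=100

# Disable automatic space reclamation entirely
esxcli storage vmfs reclaim config set --volume-label=Datastore1 \
    --reclaim-priority=none
```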

One key thing to note is that doing an UNMAP at the VM level (not the guest OS level) is only applicable when using VMFS datastores. With VMware’s Virtual Volumes (VVols) storage architecture, doing an UNMAP at the VM level is not needed, as the storage array is fully aware of which disk blocks a VM resides on because VMs are written natively to the array without a file system in between. This is one big advantage of VVols: vSphere no longer has to tell the array which disk blocks to UNMAP, as the array already has full awareness and can reclaim space at whatever rate it wants. With VVols, a storage array becomes much more efficient because all provisioning and reclamation operations are performed dynamically.

Whether you are using VMFS or VVols, in-guest space reclamation can still be done for guest OSs that support it, allowing for more granular space reclamation. This is necessary because vSphere has no visibility into a guest OS file system to know which files have been deleted. If the guest OS sends UNMAP commands, vSphere simply passes them on for the storage array to process and reclaim space.
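On a Linux guest, for example, in-guest reclamation can be triggered with the standard fstrim utility (assuming the virtual disk is thin provisioned and the virtual SCSI layer exposes UNMAP to the guest):

```shell
# Check whether the guest sees discard (UNMAP) support on its disk
lsblk --discard /dev/sda

# Trim all deleted blocks on the root file system; -v reports how
# much space was handed back for the array to reclaim
sudo fstrim -v /
```

On Windows guests, the equivalent is the built-in TRIM support, which can also be run on demand via the Optimize Drives tool.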

Where HPE storage fits into this reclamation picture

Being able to reclaim disk space allows you to make the most efficient use of your storage capacity and helps lessen the need to add more. We fully support space reclamation on both HPE 3PAR StoreServ and HPE Nimble Storage arrays. In fact, with 3PAR we were one of the very first partners to support UNMAP on day one of its initial release as part of vSphere 5.0 more than seven years ago.

At HPE, we have had a long tradition of supporting all the VMware integration areas right away. That continues to this day with our industry-leading support for VVols as a VMware design partner. Whether it’s UNMAP, VVols or plug-ins for VMware, HPE 3PAR and Nimble are ideal storage platforms that feature modern architectures that boost VMware ROI by enabling you to optimize your virtual infrastructure, simplify storage administration and maximize virtualization savings.

More great VMware-related content

Blogs: Check out our other posts here on Around the Storage Block talking about VMware topics, including VVols and vSphere.

Webinar: Want storage for VMware to be easier? Register for A Farewell to LUNs—Discover how VVols forever changes storage in vSphere to learn how. The webinar takes place live on October 23, or you can catch the on-demand replay anytime. 

Looking ahead to VMworld Barcelona: Get the scoop on what HPE’s got planned, where the only tame thing will be the IT Monster!


Around the Storage Block blogger Eric Siebert, Solutions Manager, HPE. On Twitter: @ericsiebert




About the Author


Our team of Hewlett Packard Enterprise storage experts helps you to dive deep into relevant infrastructure topics.


Just as a side note, UNMAP on VMFS6 works fantastic on HPE MSA SAN Storage as well! :)

Steve Owsley

Nice read. This space reclamation at the VMware level is interesting and worth further review as we get more upgrades to 6.5 and 6.7.

My questions are more at the guest level. I have two customers using thin HPE tech. One is using an all-flash 3PAR. The other has a newly implemented SimpliVity. I am only familiar with the guest OS level of managing deduplication and reclaiming space. With the 3PAR I had to thick provision all the VMDKs, then run sdelete to zero all the free space and get the deduplication from all the "0" space. Does SimpliVity work the same way? I am not finding any docs on this yet. And second: is there maintenance I need to do for VMs that change a lot, as in rerunning sdelete on a routine schedule to zero out the space?


Just as a side note, UNMAP on VMFS6 works fantastic on HPE MSA SAN Storage as well!



Incorrect: the HPE MSA 2050 does not support UNMAP from ESXi automatically, as it doesn't support 1 MB blocks, only 4 MB.