Array Setup and Networking

Re: Reclaim space quickly (sdelete alternative)

 
epedersen22
Occasional Advisor

Re: Reclaim space quickly (sdelete alternative)

Hooray – this version is working much better!

a_user57
Advisor

Re: Reclaim space quickly (sdelete alternative)

Yup, awesome! Thanks very much!

dsmelser93
New Member

Re: Reclaim space quickly (sdelete alternative)

I tested on ESXi with a Windows 2008 R2 virtual machine using *.vmdk files. The script runs without errors.

From within Windows, everything looks great: lots of free space.

From Nimble, everything looks great: the volume usage matches the used space reported by Windows.

However, vCenter shows a size warning on the datastore, and when I browse the datastore I see that the vmdk is at its maximum size. The virtual disk was thin provisioned, but it now looks like a thick provisioned virtual disk.

This does present a couple of challenges:

a) It affects how I provision datastores and virtual disks, because vCenter warnings about datastores were configured assuming thin provisioned vmdks, but the script effectively turns them "thick" provisioned.

b) This is going to increase backup and recovery times. While the zeroed-out space compresses well and won't take up space on the backup repositories (we use Veeam), it increases backup duration because all the zeros still have to be read.

Does anyone have suggestions on how to best reduce the size of the vmdk file?

Daniel-san
Frequent Advisor

Re: Reclaim space quickly (sdelete alternative)

On the VMFS datastore holding the VMDKs, this is how you want to do it:

cd /vmfs/volumes

esxcli storage vmfs unmap -l VMFS-1

(VMFS-1 is the volume name)
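If you would rather drive this from PowerCLI than an SSH session, a rough sketch using the Get-EsxCli V2 interface might look like the following (the host name and reclaim unit are assumptions, not values from this thread):

# Assumes VMware PowerCLI is installed and Connect-VIServer has been run.
$esxcli = Get-EsxCli -VMHost "esx01.example.com" -V2
$unmapArgs = $esxcli.storage.vmfs.unmap.CreateArgs()
$unmapArgs.volumelabel = "VMFS-1"     # same volume name as the shell example
$unmapArgs.reclaimunit = 200          # optional: VMFS blocks reclaimed per pass
$esxcli.storage.vmfs.unmap.Invoke($unmapArgs)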

Check out the Nimble KB article kb_customerPortal_KB-000065_Block_Reclamation.pdf.

dsmelser93
New Member

Re: Reclaim space quickly (sdelete alternative)

Can you provide a link to Nimble's kb_customerPortal_KB-000065_Block_Reclamation.pdf?

I can't find it.

Daniel-san
Frequent Advisor

Re: Reclaim space quickly (sdelete alternative)

mcdonamwION
Frequent Visitor

Re: Reclaim space quickly (sdelete alternative)

Thank you for your script. I'm finding great use in it, but I'd like to share some of my findings. When running a single thread of this script, it seemed that your copy mechanism is faster than writing the entire file, which was the basis of the source script. But after doing a litany of tests, I found this was not optimal in my environment.

I have a CS460G on a 10GbE network. When running this script alone, I found that my overall network utilization was rather low (hovering between 600-1000Mbps) and the write performance of the Nimble was about 100-200MBps. Recent IOMETER runs on my LUNs achieved much better performance than that. I wondered if I could tweak the script to do the same, and I was successful, though it took a while to find the right combination.

First, I decided to simply try writing larger files. I pushed the base file to 10GB while keeping the copy logic. In a single thread, this still underperformed.

Next, I decided to run 4 copies of the script in parallel (creating separate files... 1_NimbleFastReclaim#, 2_NimbleFastReclaim#, etc., to avoid writing to the same file). This seemed to be perfect. I was pushing my network to 2500-3000Mbps on average, and Nimble performance showed writes hovering between 300-400MBps. As I watched the script run for 30 minutes or so while monitoring the Nimble performance charts, I noticed the writes started slowing down over time. I also noticed READS on the LUN that were just as high as the WRITES. In fact, I was saturating the 10GbE interface on the server! Apparently, the script's built-in copy mechanism becomes less beneficial with such large base files as you run multiple threads, because much of your network bandwidth is wasted on reading back from the LUN.

So I opted to remove the copy functions entirely and just write the raw data directly to the LUN as the source script did (but multithreaded). I also pushed the file to 100GB in size for the sake of getting a good sample period. This was with a 1024KB base and a 102400 writeloop (I commented out the entire copy-file section of the script, so just 1 file per script was created). This got me all writes and no reads, as desired, but even with 4 threads I was having a hard time pushing the same write performance as above. I attributed this to the small base file that is repeatedly appended to the output file.

I then tweaked the base file size to 102400KB, with a writeloop of 1024, to accomplish the same 100GB file [write more data for longer was the thought], and I'm pleased to say doing this with 4 threads has me writing about 400MBps with 0 reads on the LUN. I could probably push this higher with even more threads, but I think I may already be tipping the Nimble near its max write performance (I don't recall). All in all, this seems much faster than the base script. At the very least, I intend to tweak the script to add back the ability to write multiple files per script (since I removed the copy mechanism). Once I have something useful, I'll post it here.
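For anyone who wants to see the shape of that final configuration, here is a minimal sketch (a reconstruction from the description above, not the author's actual script; variable names mirror those discussed, and the path is illustrative):

$ScriptStreamId = 1                      # change per copy of the script
$ArraySize  = 102400KB                   # 100MB zero-filled chunk per write
$WriteLoops = 1024                       # 1024 x 100MB = 100GB per stream
$path   = "D:\$($ScriptStreamId)_NimbleFastReclaim.tmp"
$buffer = New-Object byte[] $ArraySize   # byte arrays are zeroed by default
$stream = [System.IO.File]::OpenWrite($path)
try {
    for ($i = 0; $i -lt $WriteLoops; $i++) {
        $stream.Write($buffer, 0, $buffer.Length)
    }
}
finally {
    # Always release the handle and clean up, even on Ctrl+C or an error,
    # so the temp file can be deleted (see the Finally note below).
    $stream.Close()
    Remove-Item $path
}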

Hopefully this helps others.  YMMV with your environment.

Edit: Attached is a version of the script I made today (since I lost my original copy) which accomplishes essentially everything I mentioned in my post above. Please note that it is written with the idea of manually running multiple copies of the script to increase write performance, as I'm not sure how to max out my IO with a single file stream. To run multiple streams, create copies of the script and modify the $ScriptStreamId variable to denote which stream it is; resulting files will have this stream number added to their name. I'm sure there's a better way to do it (e.g. actually letting the script do multithreaded operations vs. the dirty manual threading, as sketched below), but unfortunately I do not have the time to figure out a more optimal solution.
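As a rough idea of what that non-manual version could look like, PowerShell background jobs would let one script launch all the streams (a sketch only; the inner loop is the same one shown earlier):

1..4 | ForEach-Object {
    Start-Job -ArgumentList $_ -ScriptBlock {
        param($ScriptStreamId)
        # ... the same write loop as in the earlier sketch, targeting
        # "D:\$($ScriptStreamId)_NimbleFastReclaim.tmp" ...
    }
}
Get-Job | Wait-Job | Receive-Job    # block until every stream finishes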

Also note that I reused much of DavidTan's script and variables, though they may not make the most sense given what I'm doing. Whereas David wrote one file and copied it X number of times, I replaced the copy routine with actual writes while reusing the $WriteLoops and $CopyIdx variables. I've also edited the message displayed after the script runs, as the existing math is incorrect: $CopyIdx is incremented one last time even when the while-loop check fails and that iteration does not run, and the modification of $ArraySize and $WriteLoops earlier in the script changes the actual amount of data written.

Lastly, I modified the Try/Catch block to include a Finally section to properly close out the file stream and do cleanup even if the script errors out or is terminated with Ctrl+C, which I found myself doing often during my various edits. Before this change, I had to keep closing PowerShell to release the files that were created so I could delete them.

***Additional edit: It should be noted that I am running this script locally on the server that has the iSCSI-mapped volume. I see in earlier posts that there is a memory issue when using WinRM to run the script remotely, and as a result the $ArraySize chunks were intentionally made smaller to overcome that issue. My edits will probably break that, as I'm writing 102400KB chunks in order to maximize my IO. Based on my own testing, writing many smaller chunks did not push my IO as high as my system could handle.

Not applicable

Re: Reclaim space quickly (sdelete alternative)

Thanks for this David,

It works great for drive letters, but the space calculation does not work for mount points and only writes a 1GB file.

Do you have a different version that will work for mount points as well? If not I will work it out and post it here.
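In case it helps whoever picks this up: if the script sizes its file from Win32_LogicalDisk, that class only covers drive letters; Win32_Volume also enumerates mount points. A minimal sketch under that assumption (the mount path is hypothetical):

# Win32_Volume names include a trailing backslash.
$mountPath = "D:\Mounts\Data01\"
$vol = Get-WmiObject Win32_Volume | Where-Object { $_.Name -eq $mountPath }
$freeBytes = $vol.FreeSpace      # size the reclaim file from this value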

Cheers

Not applicable

Re: Reclaim space quickly (sdelete alternative)

Thanks Matthew,

Did you manage to post the script anywhere? I am keen to see if this works for mount points too.

david_tan2
Valued Contributor

Re: Reclaim space quickly (sdelete alternative)

It should work for mount points; well, it does for me anyway... You just have to remember to escape the $ char in PowerShell with a backtick if there are any in the path name.
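For example (hypothetical path):

# A literal $ in a double-quoted path must be escaped with a backtick:
$path = "D:\Mounts\Finance`$Data\"
# Or use single quotes, which don't expand variables at all:
$path = 'D:\Mounts\Finance$Data\'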

I'd be keen to try Matthew's version as well for comparison. I am getting varied results running on different VMs. Returning space to the array on pre-Windows 2012 is a big pain point.