Array Performance and Data Protection

Delete all orphaned snapshots at once

SOLVED
solidous74
Occasional Visitor


I know that when you change or rename volume collections, the previous snapshots are no longer managed and are basically orphaned.  Is there a way to delete these en masse without having to go to each volume individually?

This would save us a TON of time as we are changing volume collections to better line up with DR.

8 REPLIES
bennywatson131
Occasional Visitor

Re: Delete all orphaned snapshots at once

Agree, I've been wondering this myself.

It would save a large amount of time.

ndyer39
Honored Contributor
Solution

Re: Delete all orphaned snapshots at once

Hi Adam,

There's no built-in way to do this, as doing it en masse could be dangerous - however, if you speak with support they will happily provide a customised script which will clear these down for you, should you wish.

paul_sabin
Occasional Contributor

Re: Delete all orphaned snapshots at once

Forgive me for posting in an older thread, but I came across this discussion when looking for a solution to the same problem. Since one didn't exist, I wrote a way to find and (optionally) remove orphaned snapshots using PowerShell and Nimble Storage's RESTful APIs. The script can be found here: Use PowerShell and Nimble's RESTful API's to remove orphaned snapshots.
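For anyone who can't reach the link, the general shape of such a script looks roughly like this. This is a minimal sketch in Python rather than Paul's actual PowerShell, and the array address, port, and endpoint paths are assumptions to be checked against the Nimble REST API documentation for your NimbleOS version:

```python
import urllib.request

# Hypothetical array address; Nimble's REST API conventionally listens on 5392.
BASE = "https://array.example.com:5392/v1"

def build_request(path, token=None, method="GET"):
    """Build an (optionally authenticated) request against the array's REST API."""
    req = urllib.request.Request(BASE + path, method=method)
    if token:
        # Session token obtained by POSTing credentials to the tokens endpoint.
        req.add_header("X-Auth-Token", token)
    return req

def delete_path(snap_id):
    """Deletion is issued against the snapshot's id, not its name."""
    return "/snapshots/" + snap_id

# Outline of the workflow (network calls omitted from this sketch):
#   1. POST credentials to obtain a session token
#   2. GET the snapshot list and inspect each returned record
#   3. DELETE each snapshot confirmed to be orphaned and offline
```

The deletion loop itself is deliberately left out; as noted elsewhere in this thread, each candidate should be verified by hand before anything is removed.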

milovanov88
Advisor

Re: Delete all orphaned snapshots at once

@Paul Great work! Any human "busy work" saved is a win in my book. I do want to mention a couple of gotchas regarding unmanaged snapshots on Nimble Storage arrays.

Currently, an unmanaged snapshot is defined as any snapshot not currently managed by an active volume collection schedule.

This means that snapshots created on volume level manually, snapshot collections created manually, third party app snapshots/collections are considered "unmanaged" by the system.

Only schedule-generated snapshots which no longer have said schedule are the snapshots which become unmanaged due to configuration error/change.

An interesting case is where a volume collection or a schedule is deleted, but one with the same name is created. In this case, the snapshots are in fact unmanaged, but many scripts which use name comparison cannot detect them.

The more frequent and more impactful scenario, however, is where a schedule name or a volume collection name was changed without deletion/recreation first. In those scenarios, the snapshots with the different name are actually still managed by the schedule and will follow the retention policy. Using name comparison, however, will show those snapshots as unmanaged, which could lead to unexpected deletion of the snapshot and loss of restore point data.

Another very important point to note is that it is possible to delete a snapshot which was left in the "online" state. A script should therefore account for this scenario and not attempt to delete an "online" snapshot.

I hope this information clarifies the potential impacts of using scripts. However, as you have mentioned, the script is marked "use at your own risk".

Regards,

Nickolay
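Nickolay's caveats translate directly into filter logic: match snapshots to their volume collection by id rather than by name (which catches the delete-and-recreate case), skip anything manual or third-party, and never touch an online snapshot. A minimal Python sketch, where the field names are illustrative assumptions rather than the array's actual schema:

```python
def is_safe_to_delete(snap, active_volcoll_ids):
    """A snapshot is a deletion candidate only if it is offline and its
    originating volume collection id no longer exists. Comparing by id
    catches the case where a collection was deleted and recreated under
    the same name (a name comparison would wrongly call it managed)."""
    if snap.get("online"):           # never touch an online snapshot
        return False
    if not snap.get("volcoll_id"):   # manual or third-party snapshot:
        return False                 # "unmanaged", but managed elsewhere
    return snap["volcoll_id"] not in active_volcoll_ids

# Illustrative records (field names are assumptions, not the real schema):
snaps = [
    {"name": "daily-1", "online": False, "volcoll_id": "vc-old"},  # orphan
    {"name": "daily-2", "online": True,  "volcoll_id": "vc-old"},  # online: skip
    {"name": "daily-3", "online": False, "volcoll_id": "vc-new"},  # still managed
    {"name": "manual",  "online": False, "volcoll_id": None},      # manual: skip
]
candidates = [s["name"] for s in snaps if is_safe_to_delete(s, {"vc-new"})]
```

Only `daily-1` survives the filter; everything else is either online, still managed, or owned by something other than a schedule.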

mdenton128110
Occasional Advisor

Re: Delete all orphaned snapshots at once

Hi Paul

I cannot access that link to your generously shared script as it says I am unauthorized.  Boo hoo. 

tonygreenwood51
Occasional Visitor

Re: Delete all orphaned snapshots at once

Hi Mark,

If your issue is with Unmanaged snapshots you can remove them from the command line.  Connect to the array using PuTTY and log in as admin.

To list the unmanaged snapshots use this command: -

snap --list --unmanaged --all

This will give you the Volume name, Snap name, and Status but it can be a little tricky to read.

To remove a snapshot use this command

snap --delete <snap_name> --vol <vol_name>

IMPORTANT: Make sure you only delete snaps that are shown as Online = No

If there are a lot of snaps to delete it could be a bit laborious, so you can script something to chop the list up, or chop it up manually.


To chop it up manually I used to do this: -
- Get the list from the PuTTY screen or the PuTTY log file and paste it into Notepad++
- Do a Find & Replace of all instances of double space with a single space - Repeat until there are none left
- Do a Find & Replace of all single spaces with the TAB character (\t using Extended search mode)
- Highlight the list and copy paste straight into your favourite spreadsheet app

- Sort by Online and remove any marked as Yes
- Use the spreadsheet functionality to build individual command lines i.e.
  ="snap --delete " & B1 & " --vol " & A1

- Copy the resulting commands back into Notepad++ and you should be able to copy & paste from there straight into the PuTTY session

I used to paste them in 50 at a time but to make it a little easier I wrote a rather dirty VBScript which I can let you have
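The whole Notepad++/spreadsheet pipeline above can also be collapsed into a few lines of Python. This sketch assumes the whitespace-separated three-column layout (volume name, snap name, online status) that Tony describes, which may vary between NimbleOS versions, and the sample listing is made up for illustration:

```python
# Sample output captured from "snap --list --unmanaged --all" (illustrative).
listing = """\
vol1  vol1-hourly-2019-01-01  No
vol1  vol1-hourly-2019-01-02  Yes
vol2  vol2-daily-2019-01-01   No
"""

commands = []
for line in listing.splitlines():
    parts = line.split()        # split() collapses runs of spaces, doing the
    if len(parts) < 3:          # job of the Find & Replace steps in one go
        continue
    vol, snap, online = parts[0], parts[1], parts[2]
    if online == "Yes":         # only delete snaps shown as Online = No
        continue
    commands.append("snap --delete {} --vol {}".format(snap, vol))

for cmd in commands:
    print(cmd)                  # paste these back into the PuTTY session
```

This replaces the spreadsheet formula step: the generated `snap --delete <snap> --vol <vol>` lines can be pasted straight into the session, a batch at a time.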


Cheers

Tony G

milovanov88
Advisor

Re: Delete all orphaned snapshots at once

Thank you for the post. I certainly appreciate the efforts, but I want to ensure everyone is aware that the current implementation of the "snap --list --unmanaged" CLI command will output snapshots which are:

a) manually created

b) third party generated

c) manual action generated (promote, demote, restore, resize)

Those snapshots are a special case because, from the Nimble OS standpoint, they are "unmanaged".

From a technical standpoint, they are "managed by something other than Nimble OS".

Thus, I would encourage everyone to please check the list of snapshots and ensure that the snapshots you are about to delete are actually not needed. There is no way to recover a snapshot after it has been deleted.

Additionally, please take into account the downstream replica array. If you remove a snapshot that was serving as the common snapshot between the replication partners, even if it is unmanaged, the volume will have to be completely reseeded. This is usually an undesired condition, as it takes time and WAN bandwidth.

tonygreenwood51
Occasional Visitor

Re: Delete all orphaned snapshots at once

Thanks for the input Nickolay, all good points.

The snaps we deleted were orphaned after a good deal of volume restructuring and VM migrations, so we had no such issues, but other users should be aware of your suggestions before attempting this.

Cheers

Tony G