03-02-2018 11:01 AM - edited 03-06-2018 11:36 AM
How Nimble Storage uses VSS Part 2
This is part 2 of a multi-part series. To read part 1, see https://community.hpe.com/t5/Array-Performance-and-Data/How-Nimble-Storage-uses-VSS/td-p/6998526
Now that you have scheduled backups of your dataset, let's walk through the process of accessing those files from either the original server or any other server in your environment.
The process I am going to discuss is not the simple one of taking the failed dataset offline, reverting to a known good snapshot, and bringing the dataset back online; that approach is far too limited.
Reverting to a previous snapshot doesn't answer the question of how I know which snapshot was the last good one, nor the question of what to do if I want to audit the backup process and be sure that I can import the dataset somewhere else successfully.
We are going to take a dataset that may have scheduled snapshots every hour and mount those snapshots on the original server that hosts it. We can mount just the most recent snapshot, or choose to mount a larger set.
As an example, let's assume that my protected dataset is on drive D:\.
Let's also assume that I want to be able to peruse the backup dataset, perhaps so that I can compare it to the production dataset side by side. To accomplish this, I can use the following command from the production host:
PS C:\> Get-NimVolume
This will display which Nimble volumes relate to which Windows volumes. Note the Nimble volume name that matches the Windows volume you want to mount a snapshot of. If you plan to mount a snapshot on a server that doesn't currently own the original, you will need to know the name of the Nimble volume. You can get this information either from the GUI or by downloading the full Nimble PowerShell Toolkit and using the Get-NSVolume command, which retrieves information on ALL Nimble volumes.
PS C:\> Get-NimSnapShot -NimbleVolumeName MyVolumeName | format-table Name,CreationTime
This will display all of the Nimble snapshots and when each was created. If you plan to mount a snapshot on a server that doesn't currently own the original, you will also need to know the name of the snapshot. You can get this information either from the GUI or by downloading the full Nimble PowerShell Toolkit and using the Get-NSSnapShot command, which retrieves information on ALL Nimble volume snapshots.
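To pick out the most recent snapshot automatically rather than reading the table by eye, the cmdlet's output can be sorted by CreationTime. A minimal sketch (the two sample snapshot objects in the fallback branch are hypothetical, used only so the sorting logic is runnable without an array attached):

```powershell
# Use the Nimble Toolkit cmdlet if it is present; otherwise fall back to
# two hypothetical sample snapshots so the logic below still runs.
$snaps = if (Get-Command Get-NimSnapShot -ErrorAction SilentlyContinue) {
    Get-NimSnapShot -NimbleVolumeName MyVolumeName
} else {
    @([pscustomobject]@{ Name = 'hourly-01'; CreationTime = [datetime]'2018-03-01 09:00' },
      [pscustomobject]@{ Name = 'hourly-02'; CreationTime = [datetime]'2018-03-01 10:00' })
}

# Sort newest-first and keep only the most recent snapshot.
$latest = $snaps | Sort-Object CreationTime -Descending | Select-Object -First 1
$latest.Name   # with the sample data above, this is hourly-02
```

The `$latest.Name` value can then be fed straight into the clone commands shown below.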
PS C:\> $foldername = "C:\snaps\MyVolumeName\MySnapName"
PS C:\> mkdir $foldername
PS C:\> Invoke-CloneNimVolume -SnapShotName "MySnapName" -NimbleVolume "MyVolumeName" -AccessPath $foldername
This will mount the snapshot to a folder called C:\snaps\MyVolumeName\MySnapName
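If you are scripting this rather than typing the path by hand, the access path can be assembled from the volume and snapshot names. A small sketch using the same example names as above:

```powershell
# Build the access path C:\snaps\<volume>\<snapshot> from its two parts.
$volName    = "MyVolumeName"
$snapName   = "MySnapName"
$foldername = "C:\snaps\{0}\{1}" -f $volName, $snapName
$foldername   # C:\snaps\MyVolumeName\MySnapName
```

Keeping the path in a variable like this means the same `$foldername` can be passed to both `mkdir` and `Invoke-CloneNimVolume` without retyping it.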
Let's say for a minute that you want to mount all of the snapshots so that you can search them. You can use the following command:
PS C:\> foreach ($snap in (Get-NimSnapShot -NimbleVolumeName MyVolName)) { Invoke-CloneNimVolume -SnapShotName $snap.Name -NimbleVolume MyVolName -AccessPath ("C:\recent\MyVolName\" + $snap.Name) }
This command will take a long time to run, since it iterates through each snapshot in turn and Plug-and-Play detection can take around 90 seconds per added volume; with 24 hourly snapshots, for example, that is roughly 36 minutes.
I should note that the command creates a new disk signature for each clone as it is brought online, so no volume collision occurs. This ensures that the commands work in both a standalone and a clustered environment. Also note that these commands include options that allow the clones to be brought all the way into the cluster as well as automatically mounted as new WFC CSVs.
The next post in this series will discuss how to use this technology to do single-VM restores from a Windows Failover Cluster using CSV-type disks.
To see part 3 of this post, which deals with Hyper-V, see: https://community.hpe.com/t5/Array-Performance-and-Data/How-Nimble-Storage-Uses-VSS-Part-3/m-p/6998994#M1134