09-05-2018 06:55 AM
Recover data from a previous snapshot via Hyper-V / CSV Volumes
Hi,
We have a Nimble SAN and a Hyper-V cluster. How does one mount a previous snapshot as a unique volume in order to retrieve data from it? Since the IDs are the same, I can't see how to mount it with a unique disk signature.
Cheers,
09-06-2018 10:48 AM
Re: Recover data from a previous snapshot via Hyper-V / CSV Volumes
Hi Cloud33,
Please check whether the steps below help.
With the original parent volume still presented to the cluster, connect the cloned volume to a node outside of the original cluster, resignature the corresponding disk via diskpart, disconnect it, and present the clone back to the original cluster nodes. Note that diskpart's UNIQUEID command requires Windows 2008 or later [2].
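The resignature portion of the steps below can also be run non-interactively via a diskpart script file. This is a minimal sketch; the file name is hypothetical, and the disk number and GUID must be replaced with the values for your clone.

```
REM resignature-clone.txt (hypothetical file name)
REM Run on the temporary host with: diskpart /s resignature-clone.txt
select disk 1
REM Replace 1 above with the clone's disk number from "list disk"
attributes disk clear readonly
uniqueid disk id=8A014FBD-47E1-4919-A69E-B659A6E63245
detail disk
```

Running it interactively, as shown in the steps below, is equally valid and makes it easier to confirm each output before proceeding.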
- [ARRAY] Via Edit context on the cloned volume, temporarily update the access/igroup.
- [HOST] Using NCM or iScsiCpl, connect to the cloned volume on a node outside of the cluster. Note: at this point, with the cloned volume connected to a node outside of the cluster, you also have the option of simply copying the required files back to the intended target over the network. The remainder of this post, however, covers the resignature process, with the intent of presenting the cloned volume back to one or more nodes in the original cluster.
- [HOST] Once connected on a node outside of the cluster, identify the disk# of the cloned volume.
- For Nimble Connection Manager (NCM), select the Nimble Volumes tab and note the disk# in the Mapping Info column for the cloned volume of interest.
- For iScsiCpl, under the Targets tab, select the cloned volume of interest, click Devices, note the number of the legacy device name (PhysicalDrive#), then click OK twice.
- [HOST] Start > Run > cmd > OK
- [HOST] diskpart
- [HOST] list disk
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 40 GB 0 B
Disk 1 Online 10 GB 0 B *
Disk 2 Online 10 GB 0 B *
- [HOST] select disk # (where # is the disk number in question noted previously)
DISKPART> select disk 1
Disk 1 is now the selected disk.
- [HOST] detail disk
DISKPART> detail disk
Nimble Server Multi-Path Disk Device
Disk ID: { 8A014FBD-47E1-4919-A69E-B659A6E63244 } <--** BEFORE DISK ID CHANGE
Type : iSCSI
Status : Online
Path : 0
Target : 3
LUN ID : 0
Location Path : UNAVAILABLE
Current Read-only State : No
Read-only : No
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 3 E MBX09-TESTD NTFS Partition 9 GB Healthy
- [HOST] attributes disk clear readonly
DISKPART> attributes disk clear readonly
Disk attributes cleared successfully.
- [HOST] uniqueid disk id=x
- Where x above is a unique MBR signature string or GPT GUID. For GUIDs, use a tool such as http://www.guidgenerator.com/ or guidgen.exe to create a new globally unique signature. Note: the disk ID only needs to differ from the original by at least one character. The disk can then be mounted back on any node in the original cluster.
DISKPART> uniqueid disk id=8A014FBD-47E1-4919-A69E-B659A6E63245 <-- ** E.G., LAST CHAR CHANGED FROM 4 TO 5
DISKPART>
- [HOST] detail disk
DISKPART> detail disk
Nimble Server Multi-Path Disk Device
Disk ID: { 8A014FBD-47E1-4919-A69E-B659A6E63245 } <-- ** AFTER DISK ID CHANGE
Type : iSCSI
Status : Online
Path : 0
Target : 3
LUN ID : 0
Location Path : UNAVAILABLE
Current Read-only State : No
Read-only : No
Boot Disk : No
Pagefile Disk : No
Hibernation File Disk : No
Crashdump Disk : No
Clustered Disk : No
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 3 E MBX09-TESTD NTFS Partition 9 GB Healthy
- [HOST] Via NCM or iScsiCpl, disconnect the clone from this temporary host.
- [ARRAY] Via Edit context on the cloned volume, update the access/igroup on the cloned volume for access to the original nodes of the cluster.
- [HOST] Connect the cloned volume on the original cluster nodes, and take action on the disk (e.g., online the disk in diskmgmt.msc, assign the desired drive letter or mountpoint, use Windows Explorer or other tools to copy off the required files, add it back to cluster, or take any additional action needed for recovery).
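If guidgen.exe or the website above isn't handy, any GUID generator works for the uniqueid step. As a quick sketch (the helper name is my own, not part of the procedure), Python's standard uuid module can produce a fresh GPT disk ID guaranteed to differ from the original:

```python
# Generate a replacement GPT disk ID for the "uniqueid disk id=..." step.
# Any value differing from the original by at least one character works;
# a freshly generated GUID is unique for all practical purposes.
import uuid


def new_disk_id(original_id: str) -> str:
    """Return an uppercase GUID string that differs from original_id."""
    candidate = str(uuid.uuid4()).upper()
    # Regenerate on the astronomically unlikely chance of a collision.
    while candidate == original_id.upper():
        candidate = str(uuid.uuid4()).upper()
    return candidate


print(new_disk_id("8A014FBD-47E1-4919-A69E-B659A6E63244"))
```

Paste the printed value into diskpart as `uniqueid disk id=<GUID>` on the temporary host before reconnecting the clone to the cluster.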
Regards
Seenivasan-XP
If you feel this was helpful please click the KUDOS! thumb below!
***********************************************************************************