- Convert NTFS to VMDK
10-27-2016 09:25 PM
I have a 4 TB array with a 2 TB LUN on it. The LUN is formatted NTFS and connected via iSCSI to a physical Windows server. I am retiring the physical server and introducing ESX into the environment.
I plan on using an ESX host with local storage to P2V the NTFS LUN to a VMDK. I will then delete the 2 TB LUN and create a 3 TB LUN on the Nimble. I'll connect this LUN to a new ESX server and format it with VMFS. Finally, I will Storage vMotion the VMDK back onto the new VMFS volume on the Nimble.
Is there any faster way of doing this using cloning or snapshots? Or is this the best way to do it?
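For reference, the ESXi-side steps of a plan like this can be sketched roughly as follows. The datastore names and paths are placeholders, and the P2V itself is typically done with a tool like VMware vCenter Converter rather than the CLI:

```shell
# 1. After the P2V, the NTFS data lands as a VMDK on local storage,
#    e.g. /vmfs/volumes/local-ds/migrated/data.vmdk (placeholder path)

# 2. Once the old 2 TB LUN is deleted and the new 3 TB LUN is presented,
#    rescan so the host sees it (VMFS formatting is usually done from
#    the vSphere client):
esxcli storage core adapter rescan --all

# 3. Storage vMotion moves the disk back, or clone it directly with
#    vmkfstools (thin-provisioned):
vmkfstools -i /vmfs/volumes/local-ds/migrated/data.vmdk \
  /vmfs/volumes/nimble-ds/myvm/data.vmdk -d thin
```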
Solved! Go to Solution.
10-28-2016 11:43 AM
Re: Convert NTFS to VMDK
As a suggestion, you could attach the current NTFS partitions to the newly created VM as RDMs, then do a cold (offline) storage migration of the RDMs to VMDKs on the Nimble array.
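As a sketch of what that could look like on the ESXi host, assuming a virtual-compatibility RDM (the device NAA identifier and datastore paths below are placeholders):

```shell
# Find the Nimble device's NAA identifier (placeholder used below)
esxcli storage core device list | grep -i nimble

# Create a virtual-compatibility RDM pointer file for the NTFS LUN
vmkfstools -r /vmfs/devices/disks/naa.EXAMPLE \
  /vmfs/volumes/local-ds/myvm/ntfs-rdm.vmdk

# With the VM powered off, clone the RDM contents to a regular flat
# VMDK on the Nimble VMFS datastore (thin-provisioned)
vmkfstools -i /vmfs/volumes/local-ds/myvm/ntfs-rdm.vmdk \
  /vmfs/volumes/nimble-ds/myvm/data.vmdk -d thin
```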
10-29-2016 08:00 AM
Re: Convert NTFS to VMDK
If you don't like RDMs, the other alternative is to use a guest iSCSI initiator. Install additional vNICs in the VM that will take over the volumes, add the ports to the iSCSI network in VMware, configure the new ports in the guest on your iSCSI subnet, and install NCM in the guest. Basically, the VM traverses the VMware iSCSI network and establishes its own connection. The VM will need its own initiator group on the Nimble array: don't mask the data LUN for ESX, mask it for that guest VM.
This is usually preferred over RDMs because it has fewer restrictions and makes the data volume more independent of the VM, which adds some flexibility in data recovery. The only caveat is that SRM will not work with guest-initiator-mounted volumes.
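As a rough sketch of the guest-side connection, run inside the Windows VM once the vNICs are on the iSCSI subnet. The portal IP and target IQN below are hypothetical placeholders; yours will differ, and NCM should manage the paths afterwards:

```shell
rem Register the Nimble discovery portal (placeholder IP)
iscsicli QAddTargetPortal 10.0.50.10

rem List discovered targets to find the data volume's IQN
iscsicli ListTargets

rem Log in to the volume (placeholder IQN)
iscsicli QLoginTarget iqn.2007-11.com.nimblestorage:example-volume
```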
11-01-2016 09:16 AM
Solution
I did this same thing about two months ago; send me a message if you'd like further details. My scenario was 2x 2 TB and 1x 1 TB LUNs connected to a physical Windows server running a SQL database. I actually did a P2V of the OS partition a week ahead of time to clean up all the hardware drivers, devices, and services, disable iSCSI, set up networking, etc. At migration time I connected the LUNs via RDM, then did a Storage vMotion as outlined in the link provided by Moshe Blumberg. Verify your RDM SCSI IDs before booting. If drives don't show up in Windows, check Disk Management. If they're not there, check your event log for failed service dependencies.
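If the migrated disks don't appear in the guest after boot, a bus rescan from diskpart often brings them in. A minimal sketch, run as administrator inside the guest (the disk number is hypothetical; check `list disk` output first):

```shell
rem Build a diskpart script that rescans the bus, lists disks, and
rem brings a specific disk online (disk number 1 is a placeholder)
(
  echo rescan
  echo list disk
  echo select disk 1
  echo online disk noerr
  echo attributes disk clear readonly noerr
) > dp.txt
diskpart /s dp.txt
```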
Some things to check, depending on your array config (not sure how AF is affected):
- Block size. A block size other than the default 4k could be optimal. You can ask support for a "block size analysis" on your affected volumes before you move them, and create a performance policy accordingly. I didn't think to check that ahead of time and ended up with a 4x duration increase for our SQL backup jobs, although normal use was unaffected. With help from support, we increased the cache in our expansion shelf and performance has been good.
- Caching. To minimize the duration of cache re-population, I contacted support and we created an "aggressive cache" performance policy. I temporarily assigned that policy until cache hits were at an expected rate, then switched back to the long-term performance policy, of course with the same block size. I guess the aggressive cache setting could just be removed afterwards too... your preference.