Array Setup and Networking

Convert NTFS to VMDK

 
SOLVED
returnoftheyeti135
New Member

Convert NTFS to VMDK

I have a 4 TB array with a 2 TB LUN on it. The 2 TB LUN is formatted NTFS and connected via iSCSI to a physical Windows server. I am retiring the physical server and introducing ESX into the environment.

I plan on using an ESX host with local storage to P2V the NTFS LUN to a VMDK. I will then delete the 2 TB LUN and create a 3 TB LUN on the Nimble. I'll connect this LUN to a new ESX host and format it with VMFS. Finally, I will Storage vMotion the VMDK back onto the Nimble VMFS volume.

Is there any faster way of doing this using cloning or snapshots? Or is this the best way to do it?
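For what it's worth, the last step of the plan (the Storage vMotion back onto the new Nimble VMFS datastore) can be scripted. Below is a minimal sketch using pyVmomi; the vCenter address, credentials, VM name, and datastore name are placeholders to replace with your own.

```python
# Minimal sketch, assuming pyVmomi; vCenter address, credentials, VM name and
# datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vim_type, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "fileserver01")        # the P2V'd VM (placeholder)
target_ds = find_by_name(vim.Datastore, "nimble-vmfs-3tb")   # new 3 TB VMFS datastore (placeholder)

# A RelocateSpec that only names a datastore performs a Storage vMotion of the VM's disks.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))
print("Storage vMotion task submitted:", task.info.key)

Disconnect(si)
```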

3 REPLIES
mblumberg16
Respected Contributor

Re: Convert NTFS to VMDK

As a suggestion, you could consider attaching the current NTFS partitions as RDMs to the newly created VM and, after doing so, cold storage migrating (offline) the RDMs to VMDKs on the Nimble array.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005241
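If you want to script the attach, a minimal sketch with pyVmomi might look like the below; the device path, unit number, and capacity are placeholders you'd adjust, and virtual-mode RDMs are the ones a later Storage vMotion can convert to flat VMDKs.

```python
# Minimal sketch, assuming pyVmomi: add an existing LUN to a VM as a
# virtual-mode RDM. The device path, unit number and size are placeholders --
# find the real naa.* path under /vmfs/devices/disks/ on the host.
from pyVmomi import vim

def add_rdm_spec(vm, lun_device_path, capacity_kb):
    """Build a ConfigSpec that attaches the given raw LUN to vm as an RDM."""
    # Reuse the VM's existing SCSI controller.
    controller = next(dev for dev in vm.config.hardware.device
                      if isinstance(dev, vim.vm.device.VirtualSCSIController))

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_device_path,          # e.g. "/vmfs/devices/disks/naa.xxxxxxxx"
        compatibilityMode="virtualMode",     # virtual mode allows svMotion to a flat VMDK
        diskMode="persistent",
        fileName="")                         # mapping file is created in the VM folder

    disk = vim.vm.device.VirtualDisk(
        backing=backing,
        controllerKey=controller.key,
        unitNumber=1,                        # pick a free unit on the controller (7 is reserved)
        capacityInKB=capacity_kb)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)

    return vim.vm.ConfigSpec(deviceChange=[change])

# Usage, with `vm` looked up as in the earlier sketch (2 TB expressed in KB):
# vm.ReconfigVM_Task(add_rdm_spec(vm, "/vmfs/devices/disks/naa.xxxxxxxx", 2 * 1024**3))
```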

jleonardini58
Advisor

Re: Convert NTFS to VMDK

If you don't like RDMs, the other alternative is to go with a guest initiator: install additional vNICs in the VM that you are planning to have take over the volumes, add the ports onto the iSCSI network in VMware, configure the new ports in the guest onto your iSCSI subnet, and install NCM in the guest. Basically, the VM traverses the VMware iSCSI network and establishes its own connection. The VM will need its own initiator group on the Nimble array; don't mask the data LUN for ESX, mask it for that guest VM.

This is usually preferred over RDMs as it has fewer restrictions and makes the data volume more independent of the VM, which adds some flexibility in data recovery. The only caveat is that SRM will not work with guest-initiator-mounted volumes.
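To illustrate just the guest-side login, here is a minimal sketch assuming a Windows guest and the built-in iscsicli tool; the portal IP and target IQN are placeholders, and NCM/MPIO plus the initiator group on the array still get set up separately.

```python
# Minimal sketch of the guest-side iSCSI login only (assumes a Windows guest and
# the built-in iscsicli tool; portal IP and target IQN are placeholders).
import subprocess

PORTAL_IP = "192.168.50.10"                                # Nimble discovery IP (placeholder)
TARGET_IQN = "iqn.2007-11.com.nimblestorage:data-vol"      # volume target name (placeholder)

# Register the array's discovery portal with the Windows iSCSI initiator.
subprocess.run(["iscsicli", "QAddTargetPortal", PORTAL_IP], check=True)

# Log in to the target; the existing NTFS volume should then appear in Disk Management.
subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)
```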

chadr69
Occasional Visitor
Solution

Re: Convert NTFS to VMDK

I did this same thing about 2 months ago; send me a message if you'd like further details. My scenario was 2x 2 TB and 1x 1 TB LUNs connected to a physical Windows server running a SQL database. I actually did a P2V of the OS partition a week ahead of time to clean up all the hardware drivers, devices, and services, disable iSCSI, set up networking, etc. At migration time I connected the LUNs via RDM and then did a Storage vMotion as outlined in the link provided by Moshe Blumberg. Verify your RDM SCSI IDs before booting. If drives don't show up in Windows, check Disk Management. If they're not there, check your event log for failed service dependencies.
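A quick way to do that SCSI ID check before powering on is with pyVmomi; here's a minimal sketch, assuming the VM object is looked up the same way as in the sketches above.

```python
# Minimal sketch, assuming pyVmomi: print each virtual disk's SCSI placement
# (bus:unit) and whether it's an RDM, so IDs can be verified before first boot.
from pyVmomi import vim

def print_disk_layout(vm):
    """List controller bus and unit number for every disk attached to vm."""
    devices = vm.config.hardware.device
    controllers = {dev.key: dev for dev in devices
                   if isinstance(dev, vim.vm.device.VirtualSCSIController)}
    for dev in devices:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            ctrl = controllers.get(dev.controllerKey)
            bus = ctrl.busNumber if ctrl else "?"
            is_rdm = isinstance(dev.backing,
                                vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)
            print(f"{dev.deviceInfo.label}: SCSI({bus}:{dev.unitNumber}) "
                  f"{'RDM' if is_rdm else 'VMDK'}")

# print_disk_layout(vm)   # vm looked up as in the earlier sketches
```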

Some things to check, depending on your array config (not sure how AF is affected):

  • Block size.  A block size other than the default 4k could be optimal.  You can ask support for a "block size analysis" on your affected volumes before you move them and create a performance policy accordingly. I didn't think to check that ahead of time and ended up with a 4x duration increase for our SQL backup jobs, although normal use was unaffected.  With help from support, we increased the cache in our expansion shelf and performance has been good.
  • Caching.  To minimize the duration of cache re-population, I contacted support and we created an "aggressive cache" performance policy. I temporarily assigned that policy until cache hits were at an expected rate, then switched back to the long-term performance policy, of course with the same block size.  I guess the aggressive cache setting could just be removed afterwards too... your preference.