MSA Storage

storage vmotion fails between 2 x P2000 G3 FC arrays

Beer Grill


Hi everyone

My setup is as follows:

2 x DL385 G8 servers running VMware-ESXi-5.1.0-Update1-1065491-HP-5.50.26

2 x P2000 G3 FC arrays.

1 x DL385 G5 running vCenter (VMware-VIMSetup-all-5.1.0-1123966)

I have a single 8Gbps FC fabric on 2 x Brocade FC switches running FOS v7.0.2c, which is stretched between 2 sites (<1km apart) using long-range SFPs.

There is a server and P2000 at each site.

Both servers have dual port Emulex 8Gbps FC HBAS and are dual connected to the above fabric.

I have zoned each server's 2 x FC ports with each P2000 controller's FC ports.

I have created a single vDisk on each P2000, and a single 6TB volume on each P2000.

Both 6TB volumes from each P2000 have been mapped to the 2 x FC HBAs of each server.

Each ESXi server now sees one VMFS 5 datastore from each P2000.


I currently have a problem live-migrating any VM between the VMFS 5 datastores using Storage vMotion. The process stalls at 32%, and to recover I need to cancel the migration and wait for it to time out.


If I power off the VM, I can migrate it to the other datastore with no issues.


I can also live migrate any VM to the other host with no issues.


I have checked for errors on the FC ports on both switches and all look OK.


Any suggestions would be welcome.


Update 17/07/2013

After disabling VAAI on both ESXi hosts, I can now live migrate between datastores.

See attached doc.
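For anyone else hitting this: I don't know exactly which method the original poster used, but VAAI can be disabled per host from the ESXi CLI by turning off its three advanced options. The commands below are a sketch for ESXi 5.x; run them on each host (no reboot required), and set the values back to 1 to re-enable VAAI.

```shell
# Disable the three VAAI primitives on an ESXi 5.x host
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
```

The same options can also be changed in the vSphere Client under Configuration > Advanced Settings.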



Re: storage vmotion fails between 2 x P2000 G3 FC arrays


What firmware version do you have on the storage controllers in the P2000 G3?

Note: firmware TS230R044 was the first release with VAAI support; you should have TS250xxxx or later installed on the P2000 G3.

I'm an HP employee
Beer Grill

Re: storage vmotion fails between 2 x P2000 G3 FC arrays

Hi There

I upgraded both P2000s' firmware as per the attached screenshot.

New Member

Re: storage vmotion fails between 2 x P2000 G3 FC arrays


Based on the vMotion information you have provided, this appears to be a latency issue between Site A and Site B.


As you mentioned, you can migrate the VM while it is powered off. A live Storage vMotion requires more overhead communication to keep the running VM's data in sync without any downtime for end users; when the VM is powered off, there is far less overhead, since VMware is simply copying the VM's VMDK data to the second array. That extra overhead communication is what makes the live migration sensitive to latency.


The fact that you can also live migrate to the other host without issue (which I assume does not use the long-distance fibre) further points to latency between Sites A and B, and suggests that vMotion and the P2000 storage are otherwise working correctly.


Also, you mentioned that disabling VAAI allowed live migration to work. This is probably due to a change in the VMware data mover method. VAAI hardware offload only helps when the source and destination are on the same array, and ESX has several data movers to choose from when copying data: it first tries the hardware offload method, which fails in your two-array scenario; it then falls back to the kernel-level data mover (fs3dm), and finally to fsdm, which works at the application level. If the block size differs between the source and destination datastores, it will also fall back to the fsdm data mover. Turning off VAAI presumably let VMware move the VM data by a more reliable path.
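To see whether the hardware offload path is even available for a given device, you can check the VAAI primitive support and the current state of the VAAI advanced options from the ESXi shell. These are standard ESXi 5.x `esxcli` commands; a sketch:

```shell
# Show VAAI primitive support (Clone, Zero, ATS, Delete) per storage device
esxcli storage core device vaai status get

# Show whether the hardware-accelerated move primitive is currently enabled
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
```

If the P2000 volumes report the clone primitive as unsupported, the data mover will fall back to software copies regardless of the VAAI setting.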


I would check your transfer speeds between Sites A and B to see if you're dropping frames, since the error log shows NMP errors:

Failed: H:0x2 D:0x2 P:0x0 Possible sense data: 0xa 0xd 0x2. Act


See the VMware article for details: