HPE Storage Tech Insiders

NimbleOS 3 - VMware Copy Offload / XCOPY VAAI Primitive


YES - you read that right - NimbleOS 3 brings support for the final (and somewhat elusive) VMware VAAI feature - Copy Offload - also known as XCOPY!

First off, a quick refresher on the Copy Offload feature.

Copy Offload is a primitive that forms part of the VAAI integration introduced by VMware a few years ago (alongside other primitives such as Atomic Test & Set and SCSI UNMAP). It allows virtual disk copy and migration operations to be offloaded from the network and servers and kept within the storage array, so operations such as "Storage vMotion" and "Clone or Deploy from Template" now stay inside the same storage system.

This is because moving data from location A to location B natively within the storage platform is much faster and less resource-intensive than having VMware issue thousands of read and write IOs over the network and perform host-side copy processes. The result is lower CPU overhead on the VMware host, reduced network traffic, and faster Storage vMotions, clones and VM deployments.
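Whether a host will even attempt these offloads is governed by a few advanced settings. As a quick sketch, you can confirm they are enabled (a value of 1 means the offload is on) by running the following on an ESXi host:

```shell
# Check that the VAAI offload primitives are enabled on this host.
# A returned value of 1 means the host will attempt the hardware offload.
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # XCOPY (Copy Offload)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # WRITE SAME (Block Zeroing)
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking    # ATS (Atomic Test & Set)
```

These settings are enabled by default on modern ESXi releases, so in most environments this is just a sanity check.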


How do you know what VAAI primitives are in use from your SAN vendor? Run the following command in a shell on one of your VMware hosts:

esxcli storage core device vaai status get

Here's the output from a volume presented from an array running NimbleOS 2.3. Notice that Clone Status == "unsupported".

[Screenshot: esxcli VAAI status output for a NimbleOS 2.3 volume, showing Clone Status: unsupported]
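For reference, the output of that command takes roughly this shape per device (the device identifier below is a placeholder; yours will differ):

```
naa.xxxxxxxxxxxxxxxx
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: unsupported
   Zero Status: supported
   Delete Status: supported
```

"Clone Status" is the line that reflects Copy Offload / XCOPY support.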

Here's the output from a volume presented from an array running NimbleOS 3. Notice that all are now "supported".

[Screenshot: esxcli VAAI status output for a NimbleOS 3 volume, showing all primitives supported]

By default, when Copy Offload is enabled, all transfers are set to run at 4MB in size. It's possible to improve on this by changing the "MaxHWTransferSize" setting to 16MB.

Adjusting the transfer size on the hosts to 16MB can improve performance significantly in some cases, especially where fewer than 6 concurrent Copy Offload operations are being performed. A larger transfer size means fewer concurrent Copy Offload I/Os consuming the host queue depth, leaving more queue depth available for non-Copy Offload host I/O. Your mileage may vary, of course, but it's worth knowing.

To change the transfer size, the following command needs to be issued on all attached VMware hosts (the value is specified in KB, so 16384 equals 16MB):

esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize

And to verify if the change was successful (or to check what the current setting is):

esxcfg-advcfg -g /DataMover/MaxHWTransferSize
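Since the setting is per-host, it has to be applied everywhere. As a rough sketch (the hostnames below are hypothetical, and this assumes SSH is enabled on each ESXi host), a small loop saves some typing:

```shell
# Hypothetical host list - substitute your own ESXi hosts.
# Assumes the SSH service is enabled on each host.
for host in esx01.example.com esx02.example.com; do
  echo "Setting MaxHWTransferSize on ${host}..."
  ssh root@"${host}" \
    'esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize && \
     esxcfg-advcfg -g /DataMover/MaxHWTransferSize'
done
```

The trailing `-g` call echoes the new value back per host, so a quick scan of the output confirms the change took effect everywhere.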

[Screenshot: esxcfg-advcfg -g output confirming MaxHWTransferSize is set to 16384]

That's it! Nice and easy. Enjoy!

Great news, Nick! Does this affect Storage vMotions (i.e. does this work across volumes/datastores, or is it limited to operations within a datastore)? Does this leverage Nimble's zero-copy cloning, where the copied data wouldn't consume any more space? If so, this could save quite a bit of space!


Hi Jonathan,

Good news - this will work across volumes/datastores! So things such as Storage vMotions absolutely will benefit from this. XCOPY alone doesn't leverage things such as SmartCopy Clones, though; it will still create full copies of the data.

However, if you were to use our VVol implementation within vSphere 6, that WILL natively use our clone technology when performing actions such as clone/deploy from template.


Ah ok. We're probably moving to vSphere 6 by year's end so we'll play with that. Do you know of many customers using VVOLs in production or is it still seen as beta?


It's fully supported for production use with Nimble and works very well. There are a couple of issues around replication and restores of replicated VMs (i.e. VMware doesn't have the ability to support it just yet), but that should be resolved in the next release of VMware code fairly shortly.

According to InfoSight, there are more than 100 customers using VVols in production in some form... and bear in mind that NimbleOS 3 (the code that VVols is part of) only went GA in the last 24 hours.


Good point on the infancy of the GA code. I suppose most or all of those 100 customers using VVols would be all flash users.

Anyway, thanks for the great info and very quick replies to my questions.


With regards to replication, I want to clarify that Nimble absolutely supports and is able to accomplish replication of VVols as well as recovery - either by replicating back to the original array and attaching to the original vCenter instance, or through promotion of the downstream replica and attaching it to any vCenter with which the downstream array has its VASA provider registered. Nimble supports this today, without needing to wait for anything further from VMware. I recently presented a session at VMworld covering exactly this. Expect to see further coverage via blog and/or webinar soon.

To Nick's point, we expect to see further and closer integration with VMware in the future when their code is ready to support these workflows natively.


