MSA Storage
MSA 2040 - VMWARE 6.7 - ReadCache


Hi There

Just in case anyone else has this trouble: I added a read cache (SSD disk) to Pool A on my MSA 2040. Pool A held the datastore for my VMware 6.7 environment and had been up and running for months. I decided to add the read cache to improve read performance (as best practice suggests).

As a test, I rebooted one of the ESXi hosts after doing this. The datastore would no longer mount, and the vmkernel log showed the error "Invalid physDiskBlockSize 512".

After much faffing about, I removed the read cache and the datastore mounted correctly again.
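If you want to see how ESXi views the block sizes of the presented devices, `esxcli storage core device capacity list` reports logical and physical block size per device. Below is a minimal sketch that filters sample output for devices whose logical and physical sizes disagree (512e drives); the device names and column layout here are illustrative assumptions, not output from this array — run the real command on your host.

```shell
# Sketch: sample lines resembling `esxcli storage core device capacity list`
# output (device names and column order are illustrative only).
sample="naa.600c0ff0001e0000 512 512 512n
naa.600c0ff0001e0001 512 4096 512e"

# Flag devices whose logical ($2) and physical ($3) block sizes differ
mixed=$(echo "$sample" | awk '$2 != $3 { print $1 }')
echo "512e devices: $mixed"
```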



Re: MSA 2040 - VMWARE 6.7 - ReadCache

TigerRool10-1,

I understand your concern. However, the read cache is an extension of the controller cache; adding or deleting a read cache should not affect data access.
Also, VMware ESXi 6.7 is not supported on the MSA 2040.

Please refer to the link below for the list of supported operating systems.

I am an HPE Employee


Re: MSA 2040 - VMWARE 6.7 - ReadCache

Hi There and thanks for replying

The firmware mentioned, GL225R003, was released in December 2017. (We are using that version.)

ESXi 6.7 was not released until April 2018, so it would have been impossible for the MSA firmware document to be aware of ESXi 6.7.

Having said that, do you think a firmware upgrade is required? GL225R003 is the latest version.




Re: MSA 2040 - VMWARE 6.7 - ReadCache

Status Update - I added a new disk group to Pool A.

Pool A already holds the two ESXi datastores. I rebooted one of my ESXi hosts and, bang, the datastores will no longer mount.

Looking in vmkernel.log I see the following messages:

2019-03-21T16:09:32.361Z cpu7:2097364)WARNING: Vol3: 3102: XXXXXX1/5c91215c-80529b43-df3a-5cb901cf9580: Invalid physDiskBlockSize 512

2019-03-21T16:09:22.301Z cpu2:2097364)FSS: 6092: No FS driver claimed device '5c91218e-9ca78e9c-4f5d-5cb901cf9580': No filesystem on the device


Is this problem caused by an incompatibility between the MSA firmware GL225R003 and ESXi 6.7?
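For anyone searching their own logs, the two tell-tale messages can be pulled out of vmkernel.log with a simple grep. The sketch below runs against an inline copy of the snippet above so it is self-contained; on a live host you would grep /var/log/vmkernel.log instead.

```shell
# Inline copy of the two log lines for a self-contained demonstration.
log=$(cat <<'EOF'
2019-03-21T16:09:32.361Z cpu7:2097364)WARNING: Vol3: 3102: XXXXXX1/5c91215c-80529b43-df3a-5cb901cf9580: Invalid physDiskBlockSize 512
2019-03-21T16:09:22.301Z cpu2:2097364)FSS: 6092: No FS driver claimed device '5c91218e-9ca78e9c-4f5d-5cb901cf9580': No filesystem on the device
EOF
)

# On an ESXi host, replace the echoed variable with /var/log/vmkernel.log:
#   grep -cE 'physDiskBlockSize|No FS driver claimed' /var/log/vmkernel.log
hits=$(printf '%s\n' "$log" | grep -cE 'physDiskBlockSize|No FS driver claimed')
echo "matching lines: $hits"
```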


Re: MSA 2040 - VMWARE 6.7 - ReadCache


I think this has nothing to do with the MSA. I searched Google and found many reports of datastores going missing or getting corrupted after a reboot of ESXi 6.7. It looks like a bug in ESXi 6.7.


Hope this helps!

I am an HPE employee


Re: MSA 2040 - VMWARE 6.7 - ReadCache


I'll raise a support call and see where it goes.


Re: MSA 2040 - VMWARE 6.7 - ReadCache

Just an update on where I'm at with this!

I downgraded one of the ESXi hosts from 6.7 to 6.5 Build 5310538.

I then modified Pool A on the MSA 2040 by adding a second disk group to it, and rebooted the ESXi 6.5 host. (The VMware datastores for my ESXi environment are located on Pool A.)

On reboot the datastores loaded correctly in ESXi 6.5! Dang!

This means that the problem is ESXi 6.7: it's not compatible with the MSA 2040 GL225R003 firmware. Downgrading the ESXi hosts to 6.5 is going to create a huge load of problems for the existing VMs (all built in the 6.7 environment): VM hardware version, vMotion, EVC mode... a right headache!


Re: MSA 2040 - VMWARE 6.7 - ReadCache

Right, this has been solved. Many thanks to Stephen Wagner and his brilliant site:

This is what I found.

ESXi 6.7 does not like it if you change the configuration of a virtual pool after it has been presented (presented to and mounted by ESXi).

For example, if you are using virtual disk pools containing volumes that are presented to ESXi 6.7 as datastores, and at a later date you change that pool by adding a disk group or a read cache to it, you will probably hit the problem I have documented here (the datastore will not mount on reboot of ESXi 6.7 after the virtual disk pool changes).

The only way to avoid this problem is to make sure all your disks have the same sector format. All the disks in the MSA should be the same: either all 512n or all 512e (is there any other format?).

Remember, this only applies to ESXi 6.7 (in my testing, 6.5 is not affected).
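A quick way to check for a mix: the MSA CLI's `show disks` output includes a sector-format column (512n/512e). The sketch below checks a sample of that column for more than one distinct value; the disk IDs and column position are illustrative assumptions, not output from this array, so adjust the `awk` field to match your firmware's layout.

```shell
# Sketch: detect mixed sector formats from a sample of the MSA CLI
# `show disks` sector-format column (disk IDs/columns are illustrative).
disks="1.1 512n
1.2 512n
1.3 512e"

# Collect the distinct sector formats seen across all disks
formats=$(echo "$disks" | awk '{ print $2 }' | sort -u)
count=$(echo "$formats" | wc -l | tr -d ' ')

if [ "$count" -gt 1 ]; then
  echo "mixed sector formats: $(echo $formats | tr '\n' ' ')"
else
  echo "uniform sector format: $formats"
fi
```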