StoreVirtual Storage

TriHD
Occasional Visitor

StoreVirtual 4335 - Differentiate usable space capacity between VMware and HPE CMC

Hi there


First of all, thank you for reading my post. I am stuck explaining an issue to my customers: the free space (usable space) shown in VMware and in CMC is different.

My customer asks me what "Full, save 7.33 GB if thin" means. My opinion is that this number is the free space of the volume, which would mean it has only 7.33 GB remaining to store new data. Is that true?

 

The second question: is the capacity shown for Tier 0 and Tier 1 of any volume exactly the data written to it?

 

And the last question: for the Citrix volume shown in the picture, VMware reports about 96 GB free. But in CMC I see "Full, save 7.33 GB if thin", and if I add up Tier 0 and Tier 1, the total is also far less than the 96 GB shown in VMware. My customer wants to know what makes this difference so large, approximately 90 GB?

 

Please help. Thank you so much.

 

oikjn
Honored Contributor

Re: StoreVirtual 4335 - Differentiate usable space capacity between VMware and HPE CMC

When you set LUNs to be FULL, CMC allocates 100% of the specified space for that LUN at creation, so it immediately reports that space as consumed in CMC.  Since these LUNs are Network RAID-10 (NR-10), they consume 2x their reported LUN size within CMC.  If you change the allocation from FULL to THIN, CMC is telling you that there is currently X.XX GB per LUN that has never been written to since its creation and could be reclaimed in CMC as available for use elsewhere.
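To make that arithmetic concrete, here is a minimal sketch (plain Python, not an HPE tool; the 100 GB LUN size and the written-data split are hypothetical, with only the 7.33 GB "save if thin" figure taken from your screenshot) of how FULL versus THIN allocation turns into consumed space in CMC for an NR-10 LUN:

```python
# Rough sketch of how CMC accounts for a Network RAID-10 (NR-10) LUN under
# FULL vs THIN provisioning. All figures here are hypothetical examples.

NR10_FACTOR = 2  # NR-10 keeps two copies of every block, so raw usage doubles

def cmc_consumed_gb(lun_size_gb, written_gb, thin):
    """Space the cluster charges against this LUN, in GB."""
    # FULL charges the whole LUN at creation; THIN only charges written blocks.
    provisioned = written_gb if thin else lun_size_gb
    return provisioned * NR10_FACTOR

lun_size = 100.0   # hypothetical LUN size as seen by VMware
written = 92.67    # blocks ever written (100 - 7.33 "save if thin")

print("FULL:", cmc_consumed_gb(lun_size, written, thin=False), "GB consumed in CMC")
print("THIN:", cmc_consumed_gb(lun_size, written, thin=True), "GB consumed in CMC")
```

In other words, FULL charges the whole LUN times two no matter how much was ever written, while THIN only charges what has actually been written, still times two.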

I always suggest using THIN allocation even if you do not plan to over-provision, because it leaves room to create snapshots, even if only temporary ones for replication or to facilitate backups.  I would suggest you convert the LUNs to THIN even if you don't think the space savings are a big benefit.

As for the consumption reports for each tier: since you have tiered storage, CMC allocates the space for the LUN across both tiers and moves data up and down as it sees fit.  I don't know how much extra space you have in the cluster to let this work freely, which might be another good reason to make the LUNs thin.  When the LUNs are set up as THICK (full), the values in Tier 0 and Tier 1 are exactly the size of the space provisioned for that LUN; when you switch to THIN provisioning, those values become the size of the data actually written to the LUN.  Keep in mind that since this is NR-10, every 1 GB written translates into 2 GB of consumed space in CMC.

Once you make the LUNs thin, if the initiators are capable of reporting back to the SAN using TRIM, you may see even more free space reclaimed by CMC as each initiator reports back the erased storage blocks.  My guess is that those LUNs are not nearly as full as they report.  You should be able to check in the OS how much free space is available on each iSCSI drive; if you add that space up for both LUNs and then multiply it by 2x, that is the amount of space you will actually reclaim on the SAN after thin provisioning AND space reclamation are enabled.
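As a rough sketch of that estimate (hypothetical numbers, apart from the ~96 GB free that VMware reports for the Citrix volume; the second LUN's figure is made up):

```python
# Sketch of the estimate above: free space reported by the OS / VMware for each
# iSCSI LUN, summed and doubled for NR-10, approximates the raw space the SAN
# should hand back once THIN provisioning and space reclamation (TRIM/UNMAP)
# are both working.

NR10_FACTOR = 2

free_space_gb = {
    "Citrix": 96.0,     # free space VMware reports for this datastore
    "SecondLUN": 40.0,  # hypothetical value for the other LUN
}

reclaimable_raw_gb = sum(free_space_gb.values()) * NR10_FACTOR
print(f"Estimated raw space to reclaim on the SAN: {reclaimable_raw_gb:.0f} GB")
```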

I think I covered it above, but the answer for your customer about the 90 GB discrepancy is that the FULL-provisioned LUN is not getting TRIM commands to report erased data.  That 90 GB difference represents 90 GB of area on the LUN that was written to at one point in time and later deleted.  If you convert to THIN provisioning and enable space reclamation, that full 96 GB will eventually be reclaimed, which will actually clear up 192 GB of consumed space in CMC.