Array Performance and Data Protection

SOLVED
dmitry_grab
New Member

vSphere RDM used\compressed space representation on Nimble

Could somebody please explain the meaning of the Compression and the Space Saved in the examples below?

In Example #1 the Used space on the SAN end is much larger than the actual used space in Windows (1.08 TB > 830 GB). On top of that, there is a mysterious 527.41 GB saved due to compression.

Example #2 is less extreme (the Used space is displayed more or less consistently on both ends), but there is again a "Savings" of 2.35 TB.

Windows Disk Properties reflect the real usage picture in both cases. Both RDMs are identically configured: Basic NTFS volume, no Windows compression, physical RDM, iSCSI, vSphere v5.5. The array is a Nimble CS460G-X2 running 2.3.18.0-394708-opt.

Thank you

RDM space usage Example #1


Windows Disk Properties

---

Capacity 1.1 TB

Used Space 830 GB

Free Space 295 GB

Nimble CS460G-X2

---

VOLUME SPACE  

Size 1.1 TB

Used 1.08 TB

Reserve 0 B

Quota 1.1 TB

Primary Compression 1.48X

Primary Space Saved 527.41 GB

SNAPSHOT SPACE  

Used 0 B

Reserve 0 B

Quota Unlimited

Backup Compression N/A

Number of Snapshots 0

RDM space usage Example #2

 

Windows Disk Properties

---

Capacity 8.99 TB

Used Space 6.71 TB

Free Space 2.28 TB

Nimble CS460G-X2

---

VOLUME SPACE  

Size 9.0 TB

Used 6.79 TB

Reserve 0 B

Quota 9.0 TB

Primary Compression 1.35X

Primary Space Saved 2.35 TB

SNAPSHOT SPACE  

Used 105.57 MB

Reserve 0 B

Quota Unlimited

Backup Compression 1.34X

Number of Snapshots 0

4 REPLIES
mblumberg16
Respected Contributor
Solution

Re: vSphere RDM used\compressed space representation on Nimble

Hi Dmitry,

Primary Compression is the ratio at which the data is being compressed.

Primary Space Saved is how much space is being saved thanks to that compression.

As for the space usage discrepancy:

It's not unusual to see different sizes reported by a host and a storage array; it happens with all integrations and all storage vendors.

Space usage monitoring on a SAN is much different from how space usage is monitored within a host's file system. A SAN reports free space in terms of how many blocks have not been written to (these are called "clean blocks"). The number of clean blocks is multiplied by the block size in order to provide a more user-friendly space usage figure.
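
To put rough numbers on that (these figures are purely illustrative, not taken from your arrays): with a 4 KB block size, 70 million clean blocks would be reported as roughly 70,000,000 × 4,096 bytes ≈ 287 GB of free space, no matter how much of the volume the file system on top of it thinks it is using.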

In contrast, host file systems report free space in terms of the total capacity of a datastore or volume, less the sum of all files within the file system. When a file is deleted, free space is instantly increased within the host file system. However, in the majority of cases deleting files on the host does not automatically notify the SAN that those blocks can be freed up, since the physical block remains in place after the deletion; only the file system metadata is updated. This leads to a discrepancy between how much free space is being reported within the file system and how much free space is being reported on the SAN. This is not limited to Nimble arrays: all block-storage SANs which utilize thin provisioning have the same space discrepancy issue.

To work around this issue, Windows, VMware and Linux file systems have implemented a feature which will notify the SAN to free up blocks that are no longer in use by the host file system. This feature is called block unmap, or SCSI unmap.

Since this is an RDM with an NTFS partition, you should use sdelete; alternatively, if you are on Server 2012, defrag also includes an optimize feature which sends unmap commands to the back-end storage.
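
For reference, on Server 2012 that optimize/retrim pass can be run against a mounted volume from an elevated prompt; the drive letter below is just a placeholder, so adjust it (and verify the behaviour in your own environment) before relying on it:

defrag D: /L

(or, from PowerShell: Optimize-Volume -DriveLetter D -ReTrim -Verbose)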

Sdelete is available from the following location:

http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx

The tool must be run from a Windows Command Prompt with the '-z' switch, e.g.:

sdelete -z <drive letter>
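
For instance, if the RDM from Example #1 is mounted as E: (the drive letter here is only an example), the call would be:

sdelete -z E: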

We have a great KB about block reclaim on InfoSight: Nimble Storage InfoSight - KB-000065

Please feel free to let me know if you have any questions at all.

Thanks,

Moshe.

dmitry_grab
New Member

Re: vSphere RDM used\compressed space representation on Nimble

Hi Moshe,

Thank you for your detailed explanation. This makes perfect sense, especially considering the characteristics of the file servers in the examples above. I do see a good correlation between file activity and the amount of unclaimed blocks on each of the drives. I also recall the concept of block unmap now, but I probably didn't attach much importance to it and didn't realize what the effect could be on extremely active file servers.

I have just one last question on the technique you recommended. Do you know how safe sdelete -z is? Does it affect performance / take time to complete?

Thank you,

Dmitry

mblumberg16
Respected Contributor

Re: vSphere RDM used\compressed space representation on Nimble

Hi Dmitry,

sdelete is safe to use; we've been using it for years in the industry. As for the performance impact, yes, you're correct: there will be a burst of IOPS to the Nimble array, and depending on the nature of the load this array is under, you should consider when to issue it.

We often see users scheduling the task for a low-activity period, such as a weekend or overnight.

You can set it up as a scheduled task to be invoked whenever you choose; see the example below.
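
As a rough sketch only (the task name, sdelete path, drive letter and schedule are all placeholders you would adjust for your own environment), something like this creates a weekly task that runs the reclaim on Sunday at 02:00:

schtasks /create /tn "Weekly space reclaim" /tr "C:\Tools\sdelete.exe -z D:" /sc weekly /d SUN /st 02:00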

As for the completion time, it's hard to say; it depends on the number of blocks to be reclaimed.

Please feel free to let me know if you have any questions at all.

Thanks,

Moshe.

dmitry_grab
New Member

Re: vSphere RDM used\compressed space representation on Nimble

Thank you once again, Moshe. My questions are now fully answered and it's time to plan the action. :)

Best Regards,

Dmitry