<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: vSphere RDM used\compressed space representation on Nimble in Array Performance and Data Protection</title>
    <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983134#M459</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Dmitry,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Primary Compression - the ratio at which the data is being compressed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Primary Space Saved - how much space has been saved due to compression.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As for the space usage discrepancy: &lt;/P&gt;&lt;P&gt;It's not unusual to see different sizes reported by a host and a storage array; this happens across all integrations and all storage vendors.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Space usage monitoring on a SAN is quite different from how space usage is monitored within a host’s file system. A SAN reports free space in terms of how many blocks have not been written to (these are called “clean blocks”). The number of clean blocks is multiplied by the block size to provide a more user-friendly space usage figure. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In contrast, host file systems report free space as the total capacity of a datastore or volume, less the combined size of all files within the file system. When a file is deleted, free space is instantly increased within the host file system. However, in the majority of cases deleting files on the host does not automatically notify the SAN that those blocks can be freed up: the physical blocks remain in place after the deletion, and only the file system metadata is updated. This leads to a discrepancy between how much free space is reported within the file system and how much is reported on the SAN. This is not limited to Nimble arrays; all block-storage SANs that utilize thin provisioning have the same space discrepancy issue. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To work around this issue, Windows, VMware and Linux file systems have implemented a feature that notifies the SAN to free up blocks that are no longer in use by the host file system. This feature is called block unmap, or SCSI UNMAP. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As this is a physical RDM with an NTFS partition, you should use sdelete; alternatively, on Windows Server 2012 and later, defrag includes an optimize feature that sends unmap commands to the back-end storage.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sdelete is available from the following location: &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx" rel="nofollow" target="_blank"&gt;http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx&lt;/A&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The tool must be run from a Windows Command Prompt with the ‘-z’ switch, e.g.: &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;sdelete -z &amp;lt;drive letter&amp;gt; &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We have a great KB about block reclaim on InfoSight: &lt;A href="https://infosight.nimblestorage.com/InfoSight/media/kb/active/htr1455131720138.whz/index.html" title="https://infosight.nimblestorage.com/InfoSight/media/kb/active/htr1455131720138.whz/index.html"&gt;Nimble Storage InfoSight - KB-000065&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please feel free to let me know if you have any questions at all.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Moshe.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 13 Jan 2017 16:10:34 GMT</pubDate>
    <dc:creator>mblumberg16</dc:creator>
    <dc:date>2017-01-13T16:10:34Z</dc:date>
    <item>
      <title>vSphere RDM used\compressed space representation on Nimble</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983133#M458</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Could somebody please explain the meaning of the Compression and Space Saved values in the examples below?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In &lt;STRONG&gt;Example #1&lt;/STRONG&gt; the Used space on the SAN end is much larger than the actual used space in Windows (1.08 TB &amp;gt; 830 GB). Plus, there is a mysterious 527.41 GB saved due to compression.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Example #2&lt;/STRONG&gt; is less extreme (the Used space is displayed more or less consistently on both ends), but once again there is also a "Savings" of 2.35 TB.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 13.3333px;"&gt;Windows Disk properties reflect the real usage picture on both. &lt;/SPAN&gt;Both RDMs are identically configured: Basic &lt;SPAN style="font-size: 13.3333px;"&gt;NTFS Volume, &lt;SPAN style="font-size: 13.3333px;"&gt;no Windows compression, &lt;/SPAN&gt;Physical RDM, iSCSI, vSphere v5.5. 
&lt;SPAN style="font-size: 13.3333px;"&gt;Nimble CS460G-X2&lt;/SPAN&gt;, 2.3.18.0-394708-opt&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="text-decoration: underline;"&gt;RDM space usage Example #1&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Windows Disk Properties&lt;/P&gt;&lt;P&gt;---&lt;/P&gt;&lt;P&gt;Capacity 1.1 TB&lt;/P&gt;&lt;P&gt;Used Space 830 GB&lt;/P&gt;&lt;P&gt;Free Space 295 GB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Nimble CS460G-X2&lt;/P&gt;&lt;P&gt;---&lt;/P&gt;&lt;P&gt;VOLUME SPACE&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;Size 1.1 TB &lt;/P&gt;&lt;P&gt;Used 1.08 TB &lt;/P&gt;&lt;P&gt;Reserve 0 B &lt;/P&gt;&lt;P&gt;Quota 1.1 TB &lt;/P&gt;&lt;P&gt;Primary Compression 1.48X &lt;/P&gt;&lt;P&gt;Primary Space Saved 527.41 GB &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;SNAPSHOT SPACE&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;Used 0 B &lt;/P&gt;&lt;P&gt;Reserve 0 B &lt;/P&gt;&lt;P&gt;Quota Unlimited &lt;/P&gt;&lt;P&gt;Backup Compression N/A &lt;/P&gt;&lt;P&gt;Number of Snapshots 0 &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="text-decoration: underline;"&gt;RDM space usage Example #2&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &lt;/P&gt;&lt;P&gt;Windows Disk Properties&lt;/P&gt;&lt;P&gt;---&lt;/P&gt;&lt;P&gt;Capacity 8.99 TB&lt;/P&gt;&lt;P&gt;Used Space 6.71 TB&lt;/P&gt;&lt;P&gt;Free Space 2.28 TB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Nimble CS460G-X2&lt;/P&gt;&lt;P&gt;---&lt;/P&gt;&lt;P&gt;VOLUME SPACE&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;Size 9.0 TB &lt;/P&gt;&lt;P&gt;Used 6.79 TB &lt;/P&gt;&lt;P&gt;Reserve 0 B &lt;/P&gt;&lt;P&gt;Quota 9.0 TB &lt;/P&gt;&lt;P&gt;Primary Compression 1.35X &lt;/P&gt;&lt;P&gt;Primary Space Saved 2.35 TB &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;SNAPSHOT SPACE&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;Used 105.57 MB &lt;/P&gt;&lt;P&gt;Reserve 0 B &lt;/P&gt;&lt;P&gt;Quota Unlimited &lt;/P&gt;&lt;P&gt;Backup Compression 1.34X 
&lt;/P&gt;&lt;P&gt;Number of Snapshots 0 &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 11 Jan 2017 22:37:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983133#M458</guid>
      <dc:creator>dmitry_grab</dc:creator>
      <dc:date>2017-01-11T22:37:43Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere RDM used\compressed space representation on Nimble</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983134#M459</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Dmitry,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Primary Compression - the ratio at which the data is being compressed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Primary Space Saved - how much space has been saved due to compression.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As for the space usage discrepancy: &lt;/P&gt;&lt;P&gt;It's not unusual to see different sizes reported by a host and a storage array; this happens across all integrations and all storage vendors.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Space usage monitoring on a SAN is quite different from how space usage is monitored within a host’s file system. A SAN reports free space in terms of how many blocks have not been written to (these are called “clean blocks”). The number of clean blocks is multiplied by the block size to provide a more user-friendly space usage figure. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In contrast, host file systems report free space as the total capacity of a datastore or volume, less the combined size of all files within the file system. When a file is deleted, free space is instantly increased within the host file system. However, in the majority of cases deleting files on the host does not automatically notify the SAN that those blocks can be freed up: the physical blocks remain in place after the deletion, and only the file system metadata is updated. This leads to a discrepancy between how much free space is reported within the file system and how much is reported on the SAN. This is not limited to Nimble arrays; all block-storage SANs that utilize thin provisioning have the same space discrepancy issue. 
&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To work around this issue, Windows, VMware and Linux file systems have implemented a feature that notifies the SAN to free up blocks that are no longer in use by the host file system. This feature is called block unmap, or SCSI UNMAP. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As this is a physical RDM with an NTFS partition, you should use sdelete; alternatively, on Windows Server 2012 and later, defrag includes an optimize feature that sends unmap commands to the back-end storage.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sdelete is available from the following location: &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx" rel="nofollow" target="_blank"&gt;http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx&lt;/A&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The tool must be run from a Windows Command Prompt with the ‘-z’ switch, e.g.: &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;sdelete -z &amp;lt;drive letter&amp;gt; &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We have a great KB about block reclaim on InfoSight: &lt;A href="https://infosight.nimblestorage.com/InfoSight/media/kb/active/htr1455131720138.whz/index.html" title="https://infosight.nimblestorage.com/InfoSight/media/kb/active/htr1455131720138.whz/index.html"&gt;Nimble Storage InfoSight - KB-000065&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please feel free to let me know if you have any questions at all.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Moshe.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
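The sdelete invocation described above can be sketched as follows; the 64-bit executable name and the drive letter E: are assumptions for illustration (run from an elevated Command Prompt on the host that owns the RDM):

```shell
# Zero all free space on the NTFS volume so the thin-provisioned
# array can recognize the zeroed blocks as clean and reclaim them.
# E: is a hypothetical drive letter; substitute your RDM volume.
sdelete64.exe -z E:
```

Note that -z writes zeros across the entire free space, so the volume will briefly appear fully written from the host side before the array reclaims the zeroed blocks.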
      <pubDate>Fri, 13 Jan 2017 16:10:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983134#M459</guid>
      <dc:creator>mblumberg16</dc:creator>
      <dc:date>2017-01-13T16:10:34Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere RDM used\compressed space representation on Nimble</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983135#M460</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Moshe,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for your detailed explanation. This makes perfect sense, especially considering the characteristics of the file servers in the examples discussed. I do see a good correlation between file activity and the amount of unreclaimed blocks on each of the drives. I also now recall the concept of block unmap, but I probably didn't attach importance to it and didn't realize what the effect could be on extremely active file servers.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have just one last question about the technique you recommended. Do you know how safe sdelete -z is? Does it affect performance / take time to complete?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Dmitry&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 13 Jan 2017 17:47:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983135#M460</guid>
      <dc:creator>dmitry_grab</dc:creator>
      <dc:date>2017-01-13T17:47:41Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere RDM used\compressed space representation on Nimble</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983136#M461</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Dmitry,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;sdelete is safe to use; it has been used in the industry for years. As for the performance impact, yes, you're correct: there will be a burst of IOPS to the Nimble array, and depending on the nature of the load the array is under, you should consider when to issue it.&lt;/P&gt;&lt;P&gt;We often see users scheduling the task for a low-activity period, such as a weekend or overnight.&lt;/P&gt;&lt;P&gt;You can set it up as a scheduled task to be invoked whenever you choose.&lt;/P&gt;&lt;P&gt;As for the completion time, that is hard to predict; it depends on the number of blocks to be reclaimed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please feel free to let me know if you have any questions at all.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Moshe.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
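The scheduled-task suggestion above could be sketched with schtasks; the task name, schedule, executable name and drive letter are all assumptions for illustration:

```shell
# Run the free-space zeroing weekly on Sunday at 02:00 as SYSTEM,
# i.e. during an assumed low-activity window; adjust to your environment.
schtasks /Create /TN "NimbleBlockReclaim" /TR "sdelete64.exe -z E:" /SC WEEKLY /D SUN /ST 02:00 /RU SYSTEM
```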
      <pubDate>Fri, 13 Jan 2017 17:55:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983136#M461</guid>
      <dc:creator>mblumberg16</dc:creator>
      <dc:date>2017-01-13T17:55:03Z</dc:date>
    </item>
    <item>
      <title>Re: vSphere RDM used\compressed space representation on Nimble</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983137#M462</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you once again, Moshe. My questions are now fully answered and it's time to plan the action:)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Dmitry&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 13 Jan 2017 18:16:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/vsphere-rdm-used-compressed-space-representation-on-nimble/m-p/6983137#M462</guid>
      <dc:creator>dmitry_grab</dc:creator>
      <dc:date>2017-01-13T18:16:11Z</dc:date>
    </item>
  </channel>
</rss>

