<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Question About Compression in Array Performance and Data Protection</title>
    <link>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982678#M258</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;Please excuse my ignorance if I don't understand how compression works with Nimble. We have a CS240G, which is 24TB raw and about 15.5TB usable after Nimble's triple-parity RAID. According to the image below I'm getting 1.35X compression with about 13.56TB used, and I'm already close to 90% capacity. With 1.35X compression, shouldn't I get about 20TB usable?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;IMG __jive_id="1897" alt="nimble compression.JPG" class="jive-image image-1" src="https://community.hpe.com/legacyfs/online/1897_nimble compression.JPG" style="height: auto;" /&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 19 Sep 2014 05:40:35 GMT</pubDate>
    <dc:creator>hqd201120</dc:creator>
    <dc:date>2014-09-19T05:40:35Z</dc:date>
    <item>
      <title>Question About Compression</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982678#M258</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;Please excuse my ignorance if I don't understand how compression works with Nimble. We have a CS240G, which is 24TB raw and about 15.5TB usable after Nimble's triple-parity RAID. According to the image below I'm getting 1.35X compression with about 13.56TB used, and I'm already close to 90% capacity. With 1.35X compression, shouldn't I get about 20TB usable?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;IMG __jive_id="1897" alt="nimble compression.JPG" class="jive-image image-1" src="https://community.hpe.com/legacyfs/online/1897_nimble compression.JPG" style="height: auto;" /&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 19 Sep 2014 05:40:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982678#M258</guid>
      <dc:creator>hqd201120</dc:creator>
      <dc:date>2014-09-19T05:40:35Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982679#M259</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Hien,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The Nimble UI works in TiB (powers of 1024) rather than TB (powers of 1000) - so an array which shows 15.5TB usable actually offers 17.04TB. There's an open bug to fix the terminology, as it's a little confusing.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The data usage on your screen should be read as follows:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Volume Usage - how much data you're actually storing on the array after any data reduction such as compression &amp;amp; pattern matching.&lt;/P&gt;&lt;P&gt;Primary Compression - how much space you've saved through compression so far.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Therefore, as it stands, you've written a total of 19.2TiB (21.1TB) of data to the array. The array has managed to save 5.64TiB (6.2TB) of space through LZ4 compression, meaning that you are storing 13.56TiB (14.9TB) on the array itself.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 19 Sep 2014 06:19:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982679#M259</guid>
      <dc:creator>Nick_Dyer</dc:creator>
      <dc:date>2014-09-19T06:19:53Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982680#M260</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Nick,&lt;/P&gt;&lt;P&gt;When I calculate the datastore usage on my ESXi hosts, though, I still only come up with about 14.3TB. Is there something else I'm missing? Thanks for taking the time to respond.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 19 Sep 2014 06:26:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982680#M260</guid>
      <dc:creator>hqd201120</dc:creator>
      <dc:date>2014-09-19T06:26:02Z</dc:date>
    </item>
    <item>
      <title>Re: Question about compression</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982681#M261</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Hien,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This most likely happens where you have either created and then deleted a VM, or Storage vMotioned one away - VMware will report that the volume usage has shrunk, whereas on the array all we see are used blocks. What you should look to do is run SCSI UNMAP within VMware to reclaim those dead blocks within the volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here's a good blog on SCSI UNMAP for your information: &lt;A href="https://community.hpe.com/nimble-blogpost/1102"&gt;Space Reclamation in vSphere 5.5 with Nimble Storage&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 19 Sep 2014 10:52:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/question-about-compression/m-p/6982681#M261</guid>
      <dc:creator>Nick_Dyer</dc:creator>
      <dc:date>2014-09-19T10:52:21Z</dc:date>
    </item>
  </channel>
</rss>