<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Volume usage discrepancies in Array Performance and Data Protection</title>
    <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985659#M952</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Some colleagues and I were just discussing this issue this morning too.&lt;/P&gt;&lt;P&gt;Our issue is: how do you keep VMware from complaining that your datastore is full?&lt;/P&gt;&lt;P&gt;I have a 2TB datastore that is using 781GB on disk, but VMware is complaining that there is only 464GB free.&lt;/P&gt;&lt;P&gt;I would like to quiet VMware rather than expand the volume/datastore to silence the alarm.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 19 Sep 2013 16:21:19 GMT</pubDate>
    <dc:creator>crobinso129</dc:creator>
    <dc:date>2013-09-19T16:21:19Z</dc:date>
    <item>
      <title>Volume Usage Discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985654#M947</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a Nimble CS220 running software 1.4.6.0-39995-opt. I have vCenter Server Essentials Plus with 3 hosts connected via iSCSI to the Nimble array. I am seeing discrepancies in the volume usage reported by the vSphere client versus the Nimble web management interface.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Information in vSphere client:&lt;/P&gt;&lt;P&gt;Capacity = 200GB&lt;/P&gt;&lt;P&gt;Provisioned Space = 159GB&lt;/P&gt;&lt;P&gt;Free Space = 40GB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This is correct: I have 3 guests on the volume, each 50GB Thick Provision Lazy Zeroed with a 2GB vswp file.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Nimble web management reports:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Size = 200GB&lt;/P&gt;&lt;P&gt;Used = 42GB&lt;/P&gt;&lt;P&gt;Snapshot = 2GB&lt;/P&gt;&lt;P&gt;Total used = 44GB&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This is a huge difference, and it is making capacity planning difficult. Have any other users seen this in their infrastructure? If so, why is the reporting so different between the two? I have other volumes with similar reporting. Thanks&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 17 Sep 2013 18:14:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985654#M947</guid>
      <dc:creator>scott_edwards1</dc:creator>
      <dc:date>2013-09-17T18:14:32Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985655#M948</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Scott,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The stats you see on the array are a result of the inline compression and other space efficiency techniques (such as zero-block unmap) that Nimble Storage arrays utilize to improve capacity efficiency.&amp;nbsp; The space provisioned from Nimble to VMware is a logical representation of storage, while what you see in the web management reports on Nimble is the actual space consumption on the array itself, so your 159GB workload is only consuming 44GB inclusive of snapshots!&amp;nbsp; With a block storage array that employs compression and thin provisioning techniques, you will need to manage both the logical capacity on the VMware side and the actual space consumption on the Nimble array.&amp;nbsp; Not a bad tradeoff to save 70% of the capacity!&amp;nbsp; You can certainly use Volume Reserves and Volume Warnings to help account for volume capacity consumption on the array.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Eddie&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 18 Sep 2013 05:47:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985655#M948</guid>
      <dc:creator>etang40</dc:creator>
      <dc:date>2013-09-18T05:47:12Z</dc:date>
    </item>
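    <!-- Eddie's 70% figure can be checked against the numbers in Scott's post. A quick illustrative sketch (the helper name is made up; the figures come from the thread):

    ```python
    def savings_percent(logical_gb: float, actual_gb: float) -> float:
        """Capacity saved on the array relative to the logical (VMware-side) footprint."""
        return (1 - actual_gb / logical_gb) * 100

    # Scott's volume: 159 GB provisioned in vSphere, 44 GB consumed on the
    # array (including the 2 GB of snapshots)
    print(f"{savings_percent(159, 44):.1f}% saved")  # prints 72.3% saved
    ```

    The ratio is what the array GUI is really reporting: the physical blocks written after compression and zero-block elimination, not the logical size the hypervisor sees. -->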
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985656#M949</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I can attest to what Eddie said. We run Hyper-V in my company; the CSV volumes total 4.5T, but Nimble shows only 1.8T used.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 13:25:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985656#M949</guid>
      <dc:creator>jliu79</dc:creator>
      <dc:date>2013-09-19T13:25:22Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985657#M950</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks to both of you for your feedback. Based on this, can you over-allocate storage? In the example above, only 44GB of the 200GB is used, so I could potentially add additional guests in thin provision format, which would consume more than 200GB, as long as compression kept the usage under the max limit?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 13:40:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985657#M950</guid>
      <dc:creator>scott_edwards1</dc:creator>
      <dc:date>2013-09-19T13:40:29Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985658#M951</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;That's right. That's the beauty of thin provisioning and compression.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 15:37:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985658#M951</guid>
      <dc:creator>jliu79</dc:creator>
      <dc:date>2013-09-19T15:37:05Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985659#M952</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Some colleagues and I were just discussing this issue this morning too.&lt;/P&gt;&lt;P&gt;Our issue is: how do you keep VMware from complaining that your datastore is full?&lt;/P&gt;&lt;P&gt;I have a 2TB datastore that is using 781GB on disk, but VMware is complaining that there is only 464GB free.&lt;/P&gt;&lt;P&gt;I would like to quiet VMware rather than expand the volume/datastore to silence the alarm.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 16:21:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985659#M952</guid>
      <dc:creator>crobinso129</dc:creator>
      <dc:date>2013-09-19T16:21:19Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985660#M953</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;That's what I have been kicking around as well. I don't want to disable all the alarms in VMware, as a lack of disk space would have some repercussions. Snapshot usage will also need to be a consideration. I have moved away from VMware snapshots for the most part and am utilizing Nimble's snapshot function. I will take a manual VMware snapshot of a VM before patching or a major change, for a quick restore if needed, but usually delete it within a couple of hours to a day or two at most. I am testing volume usage at around 75% from the VMware side, and so far it seems to be OK.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 17:16:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985660#M953</guid>
      <dc:creator>scott_edwards1</dc:creator>
      <dc:date>2013-09-19T17:16:58Z</dc:date>
    </item>
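    <!-- Scott's 75% rule of thumb can be checked against the 2TB datastore described earlier in the thread (only 464GB free per VMware). An illustrative sketch, assuming 2TB = 2048GB; the helper names are hypothetical, not any VMware API:

    ```python
    def usage_percent(capacity_gb: float, free_gb: float) -> float:
        """Percentage of the datastore's logical capacity currently consumed."""
        return (capacity_gb - free_gb) / capacity_gb * 100

    def over_threshold(capacity_gb: float, free_gb: float, warn_pct: float = 75) -> bool:
        """True if logical usage exceeds the chosen warning threshold."""
        return usage_percent(capacity_gb, free_gb) > warn_pct

    # 2TB datastore with 464GB free on the VMware side
    print(over_threshold(2048, 464))  # True: ~77% used, above the 75% line
    ```

    That the datastore sits a couple of points past 75% is consistent with VMware raising its capacity alarm, even though the array has written far less physically. -->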
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985661#M954</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You should just increase the volume size. It is thin provisioned anyway; even if you give another 1T to the datastore, it will still use just 781G on Nimble, and it will also quiet the VMware alerts.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 17:21:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985661#M954</guid>
      <dc:creator>jliu79</dc:creator>
      <dc:date>2013-09-19T17:21:43Z</dc:date>
    </item>
    <item>
      <title>Re: Volume usage discrepancies</title>
      <link>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985662#M955</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Jason is correct.&amp;nbsp; You will want to manage the space from the VMware logical perspective (200GB in Scott's scenario).&amp;nbsp; If you store 200GB in that VM datastore, VMware will stop writes to the volume, even though on the Nimble array the space consumed is perhaps only 50% of that as a result of compression.&amp;nbsp; Increase the volume size to ensure that there is enough logical space.&amp;nbsp; &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;An analogy could be a credit card with a $500 limit.&amp;nbsp; For every $100 you spend, the bank rebates $20 to your bank account (we are talking hypotheticals here!).&amp;nbsp; You spend up to the $500 limit and the credit card company stops further transactions, yet in your bank account you actually have an extra $100.&amp;nbsp; You can call the credit card company, tell them you have the money in the bank, and request a limit increase (a volume size increase, so you can handle more workload).&amp;nbsp; At the end of the day, I'd be pretty happy to have the $100 rebate that I can use for buying the latest gadget &lt;IMG src="https://community.hpe.com/legacyfs/online/emoticons/happy.png" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thin provisioning and compression techniques make for more efficient use of space, but you do need to manage it accordingly, both from the hypervisor and on the storage system.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;-Eddie&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 19 Sep 2013 18:19:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-performance-and-data/volume-usage-discrepancies/m-p/6985662#M955</guid>
      <dc:creator>etang40</dc:creator>
      <dc:date>2013-09-19T18:19:43Z</dc:date>
    </item>
  </channel>
</rss>

