<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Allocated snapshot space exceeded the configured limit in HPE MSA Storage</title>
    <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071200#M13372</link>
    <description>&lt;P&gt;The initial replication of the 4TB LUN completed successfully yesterday.&lt;/P&gt;&lt;P&gt;I ran a scheduled subsequent replication to catch up on all the changes.&lt;/P&gt;&lt;P&gt;After 20 mins it logged a warning message:&lt;/P&gt;&lt;P&gt;Allocated snapshot space exceeded the high threshold of 99%. (pool: A, SN: 00c0ff3cf170000017c7a15c01000000) (snapshot space used: 509347 of 514491 pages, or 99% of the snapshot space)&lt;/P&gt;&lt;P&gt;EVENT ID:#A4385&lt;/P&gt;&lt;P&gt;EVENT CODE:571&lt;/P&gt;&lt;P&gt;EVENT SEVERITY:Warning&lt;/P&gt;&lt;P&gt;EVENT TIME:2019-11-27 18:21:07&lt;/P&gt;&lt;P&gt;Then seconds later:&lt;/P&gt;&lt;P&gt;Allocated snapshot space exceeded the configured limit. (pool: A, SN: 00c0ff3cf170000017c7a15c01000000) (snapshot space used: 514491 of 514491 pages, or 100% of the snapshot space)&lt;/P&gt;&lt;P&gt;EVENT ID:#A4386&lt;/P&gt;&lt;P&gt;EVENT CODE:571&lt;/P&gt;&lt;P&gt;EVENT SEVERITY:Error&lt;/P&gt;&lt;P&gt;EVENT TIME:2019-11-27 18:21:07&lt;/P&gt;&lt;P&gt;Then 2 seconds later it logged event code 572 messages: snapshot space below threshold.&lt;/P&gt;&lt;P&gt;Should I be worried about this?&lt;/P&gt;&lt;P&gt;Many Thanks&lt;/P&gt;&lt;P&gt;Allan&lt;/P&gt;</description>
    <pubDate>Thu, 28 Nov 2019 08:33:31 GMT</pubDate>
    <dc:creator>AllanClark</dc:creator>
    <dc:date>2019-11-28T08:33:31Z</dc:date>
    <item>
      <title>Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7070939#M13357</link>
      <description>&lt;P&gt;Just started replication between the local &amp;amp; remote sites - both MSA2052s.&lt;/P&gt;&lt;P&gt;The remote site will be used 80% for DR purposes, so 80% of the space on the remote MSA2052 will be used for replication volumes.&lt;/P&gt;&lt;P&gt;The volumes are set to 4TB at the source MSA2052. I have been running the initial sync at the source end but have now received "Allocated snapshot space exceeded the configured limit" error messages - however, replication still seems to be running. At the remote site the settings are:&lt;/P&gt;&lt;P&gt;We have 1 pool, 21.5TB in size - 16.5TB free.&lt;/P&gt;&lt;P&gt;Pool overcommit is set to True&lt;/P&gt;&lt;P&gt;Low threshold - 50%, Medium threshold - 75%, High threshold - 99%&lt;/P&gt;&lt;P&gt;Allocated pages - 1154195&lt;/P&gt;&lt;P&gt;Snapshot pages - 576044&lt;/P&gt;&lt;P&gt;Available pages - 3990718&lt;/P&gt;&lt;P&gt;Do I need to modify settings, given that I will need to enable replication on another 2 * 4TB volumes at the source site soon?&lt;/P&gt;</description>
      <pubDate>Tue, 26 Nov 2019 08:04:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7070939#M13357</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-26T08:04:48Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071028#M13362</link>
      <description>&lt;P&gt;Hello Allan,&lt;/P&gt;&lt;P&gt;The information you have provided is not really detailed enough to determine where your problem lies, but let's start by looking at some basic numbers.&lt;/P&gt;&lt;P&gt;Your destination (remote) site has a 21.5TB pool with 16.5TB free. If that is all replication data, it indicates that 5TB has replicated from a 4TB volume on the source. It may be that you have more than one 4TB volume replicating from the source, which would explain this difference. Without more data from both systems and their volume usage stats it is hard to provide more information.&lt;/P&gt;&lt;P&gt;The other explanation could be that you have multiple snapshots you are attempting to replicate. Remember that for every volume you are replicating, there are internal snapshots also being replicated to maintain consistency. I suggest you review the SMU Guide starting around page 117.&amp;nbsp;&lt;A href="https://support.hpe.com/hpsc/doc/public/display?docId=a00017707en_us" target="_blank"&gt;https://support.hpe.com/hpsc/doc/public/display?docId=a00017707en_us&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Regardless, on the destination array you have the pool configured to overcommit (&lt;SPAN&gt;Pool overcommit is set to True). Please review what this feature provides you and understand some of the boundaries of having overcommit enabled. In the SMU Guide you can find this information starting on page 24.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Your error message might have occurred as you passed the Low Threshold. This is normal, and if you review the system space use and have enough space for all the volumes + internal snapshots, you should be good.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;BR /&gt;Shawn&lt;/P&gt;&lt;P&gt;I work for Hewlett Packard Enterprise. The comments in this post are my own and do not represent an official reply from HPE. No warranty or guarantees of any kind are expressed in my reply.&lt;/P&gt;</description>
      <pubDate>Tue, 26 Nov 2019 22:39:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071028#M13362</guid>
      <dc:creator>Shawn_K</dc:creator>
      <dc:date>2019-11-26T22:39:41Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071102#M13369</link>
      <description>&lt;P&gt;Shawn,&lt;/P&gt;&lt;P&gt;Thanks for your reply. This was exactly the document I was looking for; all the other guides I had found were the advanced ones.&lt;/P&gt;&lt;P&gt;If I am reading this correctly, then if we are replicating a 4TB volume we need 3 * the size of the primary volume, i.e. 12TB of free space, for the volume plus internal snapshots? Is this at both the primary &amp;amp; secondary sites?&lt;/P&gt;&lt;P&gt;The error message was for the High threshold being breached.&lt;/P&gt;&lt;P&gt;Many Thanks&lt;/P&gt;&lt;P&gt;Allan&lt;/P&gt;</description>
      <pubDate>Wed, 27 Nov 2019 11:30:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071102#M13369</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-27T11:30:13Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071200#M13372</link>
      <description>&lt;P&gt;The initial replication of the 4TB LUN completed successfully yesterday.&lt;/P&gt;&lt;P&gt;I ran a scheduled subsequent replication to catch up on all the changes.&lt;/P&gt;&lt;P&gt;After 20 mins it logged a warning message:&lt;/P&gt;&lt;P&gt;Allocated snapshot space exceeded the high threshold of 99%. (pool: A, SN: 00c0ff3cf170000017c7a15c01000000) (snapshot space used: 509347 of 514491 pages, or 99% of the snapshot space)&lt;/P&gt;&lt;P&gt;EVENT ID:#A4385&lt;/P&gt;&lt;P&gt;EVENT CODE:571&lt;/P&gt;&lt;P&gt;EVENT SEVERITY:Warning&lt;/P&gt;&lt;P&gt;EVENT TIME:2019-11-27 18:21:07&lt;/P&gt;&lt;P&gt;Then seconds later:&lt;/P&gt;&lt;P&gt;Allocated snapshot space exceeded the configured limit. (pool: A, SN: 00c0ff3cf170000017c7a15c01000000) (snapshot space used: 514491 of 514491 pages, or 100% of the snapshot space)&lt;/P&gt;&lt;P&gt;EVENT ID:#A4386&lt;/P&gt;&lt;P&gt;EVENT CODE:571&lt;/P&gt;&lt;P&gt;EVENT SEVERITY:Error&lt;/P&gt;&lt;P&gt;EVENT TIME:2019-11-27 18:21:07&lt;/P&gt;&lt;P&gt;Then 2 seconds later it logged event code 572 messages: snapshot space below threshold.&lt;/P&gt;&lt;P&gt;Should I be worried about this?&lt;/P&gt;&lt;P&gt;Many Thanks&lt;/P&gt;&lt;P&gt;Allan&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 08:33:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071200#M13372</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-28T08:33:31Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071216#M13373</link>
      <description>&lt;P&gt;It is the remote system that is showing these errors. From the CLI, show snapshot-space gives us:&lt;/P&gt;&lt;P&gt;login as: manage&lt;BR /&gt;Using keyboard-interactive authentication.&lt;BR /&gt;Password:&lt;/P&gt;&lt;P&gt;HPE MSA Storage MSA 2050 SAN&lt;BR /&gt;System Name: Santry-MSA2052&lt;BR /&gt;System Location: Santry&lt;BR /&gt;Version: VL270R001-01&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;# show snapshots&lt;BR /&gt;Pool Name Creation Date/Time&amp;nbsp; Status Status-Reason Parent Volume Base Vol Snaps TreeSnaps Snap-Pool Snap Data&amp;nbsp; Unique Data&amp;nbsp; Shared Data&amp;nbsp; Retention Priority&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;Success: Command completed successfully. (2019-11-28 11:15:27)&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;#&lt;BR /&gt;# show snapshot-space&lt;BR /&gt;Snapshot Space&lt;BR /&gt;--------------&lt;BR /&gt;Pool: A&lt;BR /&gt;Limit (%Pool): 10%&lt;BR /&gt;Limit Size: 2157.9GB&lt;BR /&gt;Allocated (%Pool): 0.1%&lt;BR /&gt;Allocated (%Snapshot Space): 0.9%&lt;BR /&gt;Allocated Size: 18.7GB&lt;BR /&gt;Low Threshold (%Snapshot Space): 75%&lt;BR /&gt;Middle Threshold (%Snapshot Space): 90%&lt;BR /&gt;High Threshold (%Snapshot Space): 99%&lt;BR /&gt;Limit Policy: Notify Only&lt;/P&gt;&lt;P&gt;Success: Command completed successfully. (2019-11-28 11:15:43)&lt;BR /&gt;#&lt;/P&gt;&lt;P&gt;Do we need to modify the pool sizes? Especially as I have a further 2 * 4TB volumes that will need to be replicated.&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 11:18:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071216#M13373</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-28T11:18:06Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071228#M13374</link>
      <description>&lt;P&gt;It's really difficult to answer your queries, as a lot of data is missing for both MSA systems: each array's pool size, how many volumes, each volume's size, how much space is set as the limit for snapshot space, etc.&lt;/P&gt;&lt;P&gt;Event ID 571 being logged while replication was running means the primary volume’s current snapshot data was being copied to the secondary volume’s current snapshot, and during this time the allocated snapshot space exceeded the configured percentage limit of the virtual pool. The moment replication completed, the secondary volume was rolled back to the secondary volume’s current snapshot, and event ID 572 was logged, which means the indicated virtual pool has dropped below one of its snapshot space thresholds.&lt;/P&gt;&lt;P&gt;I would suggest trying the below command:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;# show replication-snapshot-history&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;BR /&gt;Regards&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Subhajit&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;I am an HPE employee&lt;/P&gt;&lt;P&gt;If you feel this was helpful please click the &lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;KUDOS!&lt;/STRONG&gt;&lt;/FONT&gt; thumb below!&lt;/P&gt;&lt;P&gt;************************************************************************&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 13:21:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071228#M13374</guid>
      <dc:creator>SUBHAJIT KHANBARMAN_1</dc:creator>
      <dc:date>2019-11-28T13:21:58Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071241#M13376</link>
      <description>&lt;P&gt;Many Thanks for your help so far&lt;/P&gt;&lt;P&gt;The output from&lt;/P&gt;&lt;P&gt;# show replication-snapshot-history&lt;BR /&gt;Name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Snapshot History&amp;nbsp;&amp;nbsp; Count&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Snapshot Basename&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Retention Priority&lt;BR /&gt;------------------------------------------------------------------------------------------------------&lt;BR /&gt;repSet-vol004-dub-san&amp;nbsp; disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; never-delete&lt;BR /&gt;repSet0001&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; never-delete&lt;BR /&gt;------------------------------------------------------------------------------------------------------&lt;BR /&gt;Success: 
Command completed successfully. (2019-11-28 15:59:23)&lt;BR /&gt;#&lt;/P&gt;&lt;P&gt;The first one is the real replication set.&lt;/P&gt;&lt;P&gt;This is the 4TB LUN which we initially replicated successfully &amp;amp; then subsequently ran a second scheduled replication against as a catch-up.&lt;/P&gt;&lt;P&gt;We got the 571 errors &amp;amp; subsequently got a few more 571 event IDs in the same time frame (all logged at 18:21:07).&lt;/P&gt;&lt;P&gt;The pools at the remote site are:&lt;/P&gt;&lt;P&gt;show pools&lt;BR /&gt;Name Serial Number&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Blocksize Total Size Avail&amp;nbsp; Snap Size OverCommit&amp;nbsp; Disk Groups Volumes&amp;nbsp; Low Thresh&amp;nbsp; Mid Thresh&amp;nbsp; High Thresh&amp;nbsp; Sec Fmt&lt;BR /&gt;&amp;nbsp; Health&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Reason Action&lt;BR /&gt;------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;BR /&gt;A&amp;nbsp;&amp;nbsp;&amp;nbsp; 00c0ff3cf170000017c7a15c01000000 512&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 21.5TB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 15.8TB 18.7GB&amp;nbsp;&amp;nbsp;&amp;nbsp; Enabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 4&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 50.00 %&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 75.00 %&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99.00 %&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 512e&lt;BR /&gt;&amp;nbsp; OK&lt;BR /&gt;------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;BR 
/&gt;Success: Command completed successfully. (2019-11-28 16:08:35)&lt;/P&gt;&lt;P&gt;The volumes that are available are:&lt;/P&gt;&lt;P&gt;show volumes&lt;BR /&gt;Pool Name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Total Size Alloc Size Type Large Virtual Extents&amp;nbsp; Health Reason Action&lt;BR /&gt;-----------------------------------------------------------------------------------------------&lt;BR /&gt;A&amp;nbsp;&amp;nbsp;&amp;nbsp; Vol0001&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99.9GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 763.3MB&amp;nbsp;&amp;nbsp;&amp;nbsp; base Disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; OK&lt;BR /&gt;A&amp;nbsp;&amp;nbsp;&amp;nbsp; sa-vol0001&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 3999.9GB&amp;nbsp;&amp;nbsp; 2375.2GB&amp;nbsp;&amp;nbsp; base Disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; OK&lt;BR /&gt;A&amp;nbsp;&amp;nbsp;&amp;nbsp; santry-vol0001-dr&amp;nbsp; 99.9GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 49.5GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; base Disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; OK&lt;BR /&gt;A&amp;nbsp;&amp;nbsp;&amp;nbsp; santry-vol0004-dr&amp;nbsp; 4299.9GB&amp;nbsp;&amp;nbsp; 3271.7GB&amp;nbsp;&amp;nbsp; base Disabled&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; OK&lt;BR /&gt;-----------------------------------------------------------------------------------------------&lt;BR /&gt;Success: 
Command completed successfully. (2019-11-28 16:07:49)&lt;BR /&gt;#&lt;/P&gt;&lt;P&gt;The volume santry-vol0004-dr is the volume at the remote site that is being replicated.&lt;/P&gt;&lt;P&gt;The output from show snapshot-space is:&lt;/P&gt;&lt;P&gt;# show snapshot-space&lt;BR /&gt;Snapshot Space&lt;BR /&gt;--------------&lt;BR /&gt;Pool: A&lt;BR /&gt;Limit (%Pool): 10%&lt;BR /&gt;Limit Size: 2157.9GB&lt;BR /&gt;Allocated (%Pool): 0.1%&lt;BR /&gt;Allocated (%Snapshot Space): 0.9%&lt;BR /&gt;Allocated Size: 18.7GB&lt;BR /&gt;Low Threshold (%Snapshot Space): 75%&lt;BR /&gt;Middle Threshold (%Snapshot Space): 90%&lt;BR /&gt;High Threshold (%Snapshot Space): 99%&lt;BR /&gt;Limit Policy: Notify Only&lt;/P&gt;&lt;P&gt;Success: Command completed successfully. (2019-11-28 16:11:44)&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 16:09:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071241#M13376</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-28T16:09:47Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071246#M13378</link>
      <description>&lt;P&gt;It will be difficult to explain everything here, but let me try.&lt;/P&gt;&lt;P&gt;As per the output of the command "&lt;STRONG&gt;show replication-snapshot-history&lt;/STRONG&gt;", we see that there are two replication sets. Of these,&amp;nbsp;&lt;STRONG&gt;repSet-vol004-dub-san&lt;/STRONG&gt; is responsible for the volume&amp;nbsp;&lt;STRONG&gt;santry-vol0004-dr&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Here the volume &lt;STRONG&gt;santry-vol0004-dr&lt;/STRONG&gt; has a total size of &lt;STRONG&gt;4299.9GB&lt;/STRONG&gt; but an allocated size of&amp;nbsp;&lt;STRONG&gt;3271.7GB&lt;/STRONG&gt;,&amp;nbsp;which means the actual data size is&amp;nbsp;&lt;STRONG&gt;3271.7GB&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Now, in the output of "&lt;STRONG&gt;show snapshot-space&lt;/STRONG&gt;", the &lt;STRONG&gt;Snapshot&lt;/STRONG&gt;&amp;nbsp;&lt;STRONG&gt;Limit Size&lt;/STRONG&gt; shows&amp;nbsp;&lt;STRONG&gt;2157.9GB&lt;/STRONG&gt;, which is far less than the actual allocated size of the volume,&amp;nbsp;&lt;STRONG&gt;3271.7GB&lt;/STRONG&gt;. That is why, when you ran the 1st replication, the entire &lt;STRONG&gt;3271.7GB&lt;/STRONG&gt; of data got replicated, which is more than what is set as the &lt;STRONG&gt;snapshot limit&lt;/STRONG&gt;. This is the reason you got the &lt;STRONG&gt;571 event&lt;/STRONG&gt;, and the moment replication completed this space got cleaned up as the current snapshot data rolled back to the secondary volume, and the &lt;STRONG&gt;572 event&lt;/STRONG&gt; got logged. You can also customize this &lt;STRONG&gt;snapshot limit size&lt;/STRONG&gt; as per your requirements in order to avoid these types of alerts.&lt;/P&gt;&lt;P&gt;I would suggest going through commands like &lt;STRONG&gt;show snapshot-space, set snapshot-space,&amp;nbsp;show replication-snapshot-history, show replication-sets.&lt;/STRONG&gt; These will clear your doubts.&lt;/P&gt;&lt;P&gt;&lt;A href="https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00017709en_us" target="_blank"&gt;https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00017709en_us&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;BR /&gt;Regards&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Subhajit&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;I am an HPE employee&lt;/P&gt;&lt;P&gt;If you feel this was helpful please click the &lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;KUDOS!&lt;/STRONG&gt; &lt;/FONT&gt;thumb below!&lt;/P&gt;&lt;P&gt;*************************************************************************&lt;/P&gt;</description>
      <pubDate>Thu, 28 Nov 2019 17:22:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071246#M13378</guid>
      <dc:creator>SUBHAJIT KHANBARMAN_1</dc:creator>
      <dc:date>2019-11-28T17:22:44Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071298#M13380</link>
      <description>&lt;P&gt;Many thanks for this.&lt;/P&gt;&lt;P&gt;It is starting to make sense now. I am guessing that, as I will need to start replicating approx. another 5TB over an additional 2 * 4TB LUNs, we should adjust the limit setting to about 40% of the pool.&lt;/P&gt;&lt;P&gt;One other point: at the source MSA2052 I am surprised that we don't get any of the allocated snapshot space errors - I assume the same snapshotting is happening there?&lt;/P&gt;&lt;P&gt;If I adjust the snapshot space on the target side, do I then need to adjust it on the source side?&lt;/P&gt;&lt;P&gt;Many thanks again&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2019 08:48:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071298#M13380</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-29T08:48:16Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071299#M13381</link>
      <description>&lt;P&gt;On the source array, not much snapshot space is required initially for the 1st replication, because the volume data and the snapshot point to the same LBAs in the same pool of space. That's why no extra space is required on the source array to keep the snapshot. However, as a rule of thumb you should keep 3 times the size of the volume inside the pool for the future, just to be on the safe side.&lt;/P&gt;&lt;P&gt;When the 1st replication happens, the entire data set gets copied from the source to the destination array, and that's why you need more snapshot space dedicated on the secondary array to accommodate the source volume's replicated data.&lt;/P&gt;&lt;P&gt;In your case, the 1st replication copied 3271.7GB of source volume data, and that's why you need more snapshot space to accommodate this data on the destination array. After the replicated data is copied, it (the current snapshot) rolls back to the secondary volume.&lt;/P&gt;&lt;P&gt;Hope it's clear now why you get the 571 event on the destination array, and the 572 event as well.&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;BR /&gt;Regards&lt;BR /&gt;&lt;FONT color="#0000FF"&gt;&lt;STRONG&gt;Subhajit&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;I am an HPE employee&lt;/P&gt;&lt;P&gt;If you feel this was helpful please click the &lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;KUDOS!&lt;/STRONG&gt;&lt;/FONT&gt; thumb below!&lt;/P&gt;&lt;P&gt;**************************************************************************&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2019 09:02:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071299#M13381</guid>
      <dc:creator>SUBHAJIT KHANBARMAN_1</dc:creator>
      <dc:date>2019-11-29T09:02:30Z</dc:date>
    </item>
    <item>
      <title>Re: Allocated snapshot space exceeded the configured limit</title>
      <link>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071341#M13383</link>
      <description>&lt;P&gt;It is making sense now.&lt;/P&gt;&lt;P&gt;I have increased the snapshot limit setting to 30% for the moment.&lt;/P&gt;&lt;P&gt;We have scheduled replication set to start at 18:00 - I will monitor the situation.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2019 14:49:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-msa-storage/allocated-snapshot-space-exceeded-the-configured-limit/m-p/7071341#M13383</guid>
      <dc:creator>AllanClark</dc:creator>
      <dc:date>2019-11-29T14:49:19Z</dc:date>
    </item>
  </channel>
</rss>

