HPE 3PAR StoreServ Storage

Query on ransomware protection via snapshots in 3PAR

 
RohanS
Frequent Visitor

Query on ransomware protection via snapshots in 3PAR

Hello,

I have some queries about ransomware protection via snapshots.

* We do not have replication; however, we want to understand whether taking snapshots can be a viable ransomware protection at the array end [block level], or whether we should be looking toward a more concrete solution [e.g. backups / replication].
* If snapshots are a viable ransomware solution in 3PAR and we use the "no_stale_ss" policy for a volume, will this use more space? If yes, how much? I need some more clarification on the space consumption [in comparison with the "stale_ss" policy].
* When we set a retention policy for snapshots, how will it affect space reclamation at the volume and CPG level?
From what I know, all snapshots of a volume need to be deleted, then the freespace command has to be used to free the allocated space from the volume, and then compactcpg has to be run to reclaim space.
So how can we take care of space reclamation when we have snapshots with a retention policy?

Regards,
Rohan.

sbhat09
HPE Pro

Re: Query on ransomware protection via snapshots in 3PAR

Hello @RohanS,

Please find your queries and my responses inline:

* We do not have replication; however, we want to understand whether taking snapshots can be a viable ransomware protection at the array end [block level], or whether we should be looking toward a more concrete solution [e.g. backups / replication].

If ransomware is the concern, replication will not help: the corruption would simply be replicated to the other array. Well-planned backup/snapshot schedules will serve the purpose, and backups are much preferred. (Relying only on array snapshots in this case would be like: "My server has crashed, do you have a backup? Yes, I do. But it is on the server.")

* If snapshots are a viable ransomware solution in 3PAR and we use the "no_stale_ss" policy for a volume, will this use more space? If yes, how much? I need some more clarification on the space consumption [in comparison with the "stale_ss" policy].
A systematic snapshot schedule with a sufficient retention period is a viable solution as well. The 'no_stale_ss' policy will not impact space consumption; it simply does not permit invalid/failed (stale) snapshots, so it does not cause higher space consumption.
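For reference, a minimal sketch of switching the policy (the volume name "myvol" is a placeholder, the lines starting with # are annotations rather than commands, and the exact syntax should be verified against your InForm OS CLI reference):

# allow stale snapshots: if a snapshot update fails, the snapshot becomes invalid but the write to the base volume still succeeds
setvv -pol stale_ss myvol

# no stale snapshots: a failed snapshot update is treated as a failed write to the base volume
setvv -pol no_stale_ss myvol

Either way, the policy only governs what happens when a snapshot update fails; it does not change how much space a healthy snapshot consumes.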

* When we set a retention policy for snapshots, how will it affect space reclamation at the volume and CPG level?
From what I know, all snapshots of a volume need to be deleted, then the freespace command has to be used to free the allocated space from the volume, and then compactcpg has to be run to reclaim space.

Your concern here is legitimate. It is always a trade-off, with cost/efficiency on one side and capacity/performance/security on the other. To gain a better position on security, you will have to lose a bit of space efficiency. It is all about what is most important to you.

So how can we take care of space reclamation when we have snapshots with a retention policy?
To advise you on reclamation and retention policy, I should know which devtype the volume sits on and what other drive types you have in the system. In general, change the copy (snapshot) CPG to a CPG using low-cost NL drives, and thoughtfully schedule the retention policy per your requirement, so that you have enough snapshots to cover a sufficient amount of past time, but not so many that they consume a lot of space or harm compaction efficiency.
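If you do decide to land snapshot data on cheaper drives, a minimal sketch (hypothetical CPG name, placeholder volume name) assuming the standard setvv option for the copy/snapshot CPG would be:

setvv -snp_cpg NL_R6_CPG myvol

Here NL_R6_CPG is an illustrative NL-based CPG; snapshot space is drawn from the volume's copy (snapshot) CPG rather than its user CPG. On Rohan's all-flash 8440 there are no NL drives, so the snapshot CPG would remain on SSD.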

The best practice is to set the retention period to 2-7 days, so that nobody can delete the snapshots during that period even if they attempt to, and also to set auto-deletion (expiration) of the snapshots after 5-30 days, depending on your requirement. The snapshots are always protected during the retention period and continue to exist until the auto-deletion time. You still have the option to delete a snapshot after its retention ends and before it expires; either way, once the expiration time passes the snapshots are deleted automatically, so they do not accumulate.
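To make that concrete, here is a rough sketch only (the snapshot, volume and schedule names plus the times are placeholders, the @y@@m@@d@ name pattern and option syntax should be checked against your InForm OS CLI reference, and the # lines are annotations rather than commands):

# one-off read-only snapshot: protected from deletion for 7 days, auto-expired after 30 days
createsv -ro -retain 7d -exp 30d myvol.snap.@y@@m@@d@ myvol

# the same command wrapped in a daily schedule (crontab-style time spec, runs at 01:00)
createsched "createsv -ro -retain 7d -exp 30d myvol.snap.@y@@m@@d@ myvol" "0 1 * * *" myvol_daily_snap

With this pattern, -retain provides the "nobody can delete it" window and -exp keeps old snapshots from piling up.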

Regards,
Srinivas Bhat

If you feel this was helpful please click the KUDOS! thumb below!
Note: All of my comments are my own and are not any official representation of HPE.


I am an HPE Employee


RohanS
Frequent Visitor

Re: Query on ransomware protection via snapshots in 3PAR

Hello @sbhat09 ,

 

Thanks for your explanation.

Regarding the space reclamation query, I want to understand how we can plan space reclamation when we have snapshots with a retention policy.
We have a 3PAR 8440 all-flash array and one CPG [SSD_R6_10D2P] for all TPVV volumes.

What I am trying to understand is: if we have a retention policy in place and snapshots are being created and deleted accordingly, when are we supposed to use the freespace command on the base volume to release the space allocation at the volume level, and when can we run compactcpg?

Note: We have only 2 TPVV and 20 FPVV/CPVV volumes, so how will snapshot space consumption work based on the type of volume?

Below are the CPG & volume details for reference:

STGDCP3PAR01 cli% showcpg
----Volumes---- -Usage- ------------(MiB)-------------
Id Name Warn% VVs TPVVs TDVVs Usr Snp Base Snp Free Total
0 SSD_r1 - 0 0 0 0 0 0 0 0 0
1 SSD_r5 - 0 0 0 0 0 0 0 0 0
2 SSD_r6 - 0 0 0 0 0 0 0 0 0
3 SSD_R6_10D2P - 22 2 0 22 15 123760640 7680 88192 123856512
----------------------------------------------------------------------------
4 total 22 15 123760640 7680 88192 123856512


STGDCP3PAR01 cli% showcpg -sdg
-----(MiB)-----
Id Name Warn Limit Grow Args
0 SSD_r1 - - 8192 -ssz 2 -ha mag -t r1 -p -devtype SSD
1 SSD_r5 - - 8192 -t r5 -ha mag -ssz 8 -ss 64 -p -devtype SSD
2 SSD_r6 - - 8192 -ssz 8 -ha mag -t r6 -p -devtype SSD
3 SSD_R6_10D2P - - 8192 -ssz 12 -ha mag -t r6 -p -devtype SSD

STGDCP3PAR01 cli% showcpg -sag
-----(MiB)-----
Id Name Warn Limit Grow Args
0 SSD_r1 - - 8192 -p -devtype SSD -p -devtype SSD
1 SSD_r5 - - 8192 -ha mag -p -devtype SSD
2 SSD_r6 - - 8192 -p -devtype SSD -p -devtype SSD
3 SSD_R6_10D2P - - 8192 -p -devtype SSD

STGDCP3PAR01 cli% showspace -cpg \*
-------------------------(MiB)-------------------------
CPG --------EstFree--------- -----------Efficiency------------
Name RawFree LDFree OPFree Base Snp Free Total Compact Dedup Compress DataReduce Overprov
SSD_r1 96114688 48057344 - 0 0 0 0 - - - - 0.00
SSD_r5 96108544 84094976 - 0 0 0 0 - - - - 0.00
SSD_r6 96108544 72081408 - 0 0 0 0 - - - - 0.00
SSD_R6_10D2P 96068496 80057088 - 123760640 7680 88192 123856512 1.04 - - - 0.64

STGDCP3PAR01 cli% showvv -s
---------Snp---------- ---------------Usr--------------- ---------------Total----------------
--(MiB)-- -(% VSize)-- -------(MiB)------- --(% VSize)-- ---------------(MiB)---------------- ---Efficiency---
Id Name Prov Compr Dedup Type Rsvd Used Used Wrn Lim Rsvd Used Used Wrn Lim Rsvd Used HostWr VSize Compact Compress
1 .srdata full NA NA base 0 0 0.0 -- -- 81920 81920 100.0 -- -- 81920 81920 -- 81920 -- --
0 admin full NA NA base 0 0 0.0 -- -- 10240 10240 100.0 -- -- 10240 10240 -- 10240 -- --
20 Cluster1_Datastore_6T tpvv No No base 512 0 0.0 0 0 762496 759662 12.1 0 0 763008 759662 759661 6291456 8.28 --
21 Cluster1_Datastore_10T full NA NA base 0 0 0.0 -- -- 10485760 10485760 100.0 -- -- 10485760 10485760 -- 10485760 1.00 --
3 Cluster1_Datastore_12T full NA NA base 0 0 0.0 -- -- 12582912 12582912 100.0 -- -- 12582912 12582912 -- 12582912 1.00 --
17 Cluster1_Datastore_13T cpvv NA NA base 512 0 0.0 0 0 13631488 13631488 100.0 -- -- 13632000 13631488 -- 13631488 1.00 --
22 Cluster2_Datastore_11T full NA NA base 0 0 0.0 -- -- 11534336 11534336 100.0 -- -- 11534336 11534336 -- 11534336 1.00 --
4 Cluster2_Datastore_13T full NA NA base 0 0 0.0 -- -- 13631488 13631488 100.0 -- -- 13631488 13631488 -- 13631488 1.00 --
18 Cluster2_Datastore_15T cpvv NA NA base 512 0 0.0 0 0 15728640 15728640 100.0 -- -- 15729152 15728640 -- 15728640 1.00 --
7 Common_3PARDatastore full NA NA base 0 0 0.0 -- -- 4300800 4300800 100.0 -- -- 4300800 4300800 -- 4300800 1.00 --
29 Commvault_BKP_01 cpvv NA NA base 512 0 0.0 0 0 512000 512000 100.0 -- -- 512512 512000 -- 512000 1.00 --
30 Commvault_BKP_02 cpvv NA NA base 512 0 0.0 0 0 1048576 1048576 100.0 -- -- 1049088 1048576 -- 1048576 1.00 --
28 Commvault_BKP_L01 cpvv NA NA base 512 0 0.0 0 0 512000 512000 100.0 -- -- 512512 512000 -- 512000 1.00 --
13 ct-herofincor-01_500GB cpvv NA NA base 512 0 0.0 0 0 512000 512000 100.0 -- -- 512512 512000 -- 512000 1.00 --
27 SQLClusterDataDiskPROD cpvv NA NA base 512 0 0.0 0 0 512000 512000 100.0 -- -- 512512 512000 -- 512000 1.00 --
25 SQLClusterDataDiskUAT cpvv NA NA base 512 0 0.0 0 0 512000 512000 100.0 -- -- 512512 512000 -- 512000 1.00 --
26 SQLClusterQRMDiskPROD tpvv No No base 512 0 0.0 0 0 1024 17 0.3 0 0 1536 17 17 5120 >25 --
24 SQLClusterQRMDiskUAT cpvv NA NA base 512 0 0.0 0 0 10240 10240 100.0 -- -- 10752 10240 -- 10240 1.00 --
14 Syndcpvdatastore-1.5TB cpvv NA NA base 512 0 0.0 0 0 1572864 1572864 100.0 -- -- 1573376 1572864 -- 1572864 1.00 --
33 Syndcpvdatastore-cls02-4 cpvv NA NA base 512 0 0.0 0 0 5242880 5242880 100.0 -- -- 5243392 5242880 -- 5242880 1.00 --
34 Syndcpvdatastore-cls02-5_08T cpvv NA NA base 512 0 0.0 0 0 10485760 10485760 100.0 -- -- 10486272 10485760 -- 10485760 1.00 --
35 Syndcpvdatastore-cls02-6_08T cpvv NA NA base 512 0 0.0 0 0 8388608 8388608 100.0 -- -- 8389120 8388608 -- 8388608 1.00 --
6 syndcpvwbl08_Datastore full NA NA base 0 0 0.0 -- -- 6291456 6291456 100.0 -- -- 6291456 6291456 -- 6291456 1.00 --
23 syndcpvwbl08_Datastore_5T full NA NA base 0 0 0.0 -- -- 5242880 5242880 100.0 -- -- 5242880 5242880 -- 5242880 1.00 --
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
24 total 7680 0 123594368 123590527 123602048 123590527 759678 129127424

Regards,
Rohan.


RohanS
Frequent Visitor

Re: Query on ransomware protection via snapshots in 3PAR

Hello @sbhat09 

Thanks for your feedback.

Regarding space reclamation [in reference to snapshots], I need some more information if possible.

> We have an HPE 3PAR 8440 all-flash array.
> All volumes are created using one CPG.
> 20 volumes are fully provisioned and 2 volumes are thinly provisioned.
> We need to understand how snapshot space consumption will work on full and thin VVs with a retention policy of one week.
> How can we plan reclamation for these volumes, and how will it work in the background for FPVV and TPVV snapshots?
- I mean, if we plan to use snapshots on a TPVV volume with retention in place, when can we plan to run the "freespace" command on the volume and when should we plan compactcpg?
- And if we create snapshots on FPVV / CPVV volumes with a retention policy, how will space reclamation work?

Below are details from the array for reference:

STGDCP3PAR01 cli% showcpg
----Volumes---- -Usage- ------------(MiB)-------------
Id Name Warn% VVs TPVVs TDVVs Usr Snp Base Snp Free Total
0 SSD_r1 - 0 0 0 0 0 0 0 0 0
1 SSD_r5 - 0 0 0 0 0 0 0 0 0
2 SSD_r6 - 0 0 0 0 0 0 0 0 0
3 SSD_R6_10D2P - 22 2 0 22 15 123760640 7680 88192 123856512
----------------------------------------------------------------------------
4 total 22 15 123760640 7680 88192 123856512

STGDCP3PAR01 cli% showspace -cpg \*
-------------------------(MiB)-------------------------
CPG --------EstFree--------- -----------Efficiency------------
Name RawFree LDFree OPFree Base Snp Free Total Compact Dedup Compress DataReduce Overprov
SSD_r1 96114688 48057344 - 0 0 0 0 - - - - 0.00
SSD_r5 96108544 84094976 - 0 0 0 0 - - - - 0.00
SSD_r6 96108544 72081408 - 0 0 0 0 - - - - 0.00
SSD_R6_10D2P 96068496 80057088 - 123760640 7680 88192 123856512 1.04 - - - 0.64


Regards,
Rohan.

 

Mahesh202
HPE Pro

Re: Query on ransomware protection via snapshots in 3PAR

 
 

Hi Rohan

When it comes to space reclamation for snapshots on HPE 3PAR 8440, the process differs depending on whether the volume is fully provisioned (FPVV) or thinly provisioned (TPVV).

For FPVV volumes, when a snapshot is created with a retention policy, the space consumed by the snapshot is not immediately reclaimed when the snapshot is deleted. Instead, the space is freed up gradually over time as the original data blocks are no longer needed by any remaining snapshots or host writes. This process is known as the "background copy-on-write" mechanism, and it allows space to be reclaimed automatically without requiring any manual intervention.

For TPVV volumes, the process is slightly different. When a snapshot is created, the space consumed by the snapshot is initially reserved, but it is not actually allocated until a host write occurs in that space. When the snapshot is deleted, the space is freed up immediately, and the reserved space is returned to the free pool. However, the freed-up space is not automatically reclaimed by the system. To reclaim this space, you can use the "freespace" command to instruct the system to scan the volume for unallocated space and return it to the free pool. Additionally, you can use the "compactcpg" command to optimize the space utilization of the CPG as a whole.

In your case, as you have two TPVV volumes and 20 FPVV volumes all using the same CPG, you may want to plan for space reclamation accordingly. For the TPVV volumes, you can run the "freespace" command after deleting snapshots to reclaim the space immediately. For the FPVV volumes, you do not need to run any commands, as the space will be reclaimed gradually over time. However, if you want to optimize the space utilization of the CPG as a whole, you can run the "compactcpg" command periodically to ensure that the free space is being utilized efficiently.
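As a small illustration of that workflow (using the CPG name from the outputs above and one of the existing TPVV names; please verify the options against your CLI reference), once retention has lapsed and the expired snapshots are gone you could check and compact like this:

showvv -s Cluster1_Datastore_6T
compactcpg SSD_R6_10D2P

The Snp columns of showvv -s show how much snapshot space is still reserved/used against the base volume, and compactcpg returns unused logical-disk space in the CPG to the system's free space. Comparing showcpg and showspace -cpg SSD_R6_10D2P before and after will show whether the compaction actually released anything.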

Hope this helps!

Regards
Mahesh.

If you feel this was helpful please click the KUDOS! thumb below!

I work for HPE.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
