Around the Storage Block
StorageExperts

Deep dive: Zerto Sync Types and Long-Term Retention with Tiering to Cloud Storage

Take a deep dive to learn more about the main components of the Zerto solution plus details on Sync Types, Long-Term Repositories, and Cloud Tiering.

As you may already know, we have a new member in the HPE family. Welcome, Zerto!

Zerto's software platform delivers disaster recovery (DR), backup, and data mobility across on-premises and multicloud environments. This is true IT resilience.

About Zerto

Founded in 2009 and with more than 9500 customers, Zerto is an industry leader in the growing data replication and protection market with its journal-based continuous data protection (CDP) technology, which delivers the fastest recovery point objectives (RPOs) and recovery time objectives (RTOs).

At its foundation, Zerto combines continuous data protection with a powerful journal engine that lets you recover not only to the latest point in time but with a granularity of seconds. The outcome is the ability to safely rewind to any point in the past. It's a time machine. (Any resemblance to the Back to the Future movies is coincidental.)

Now let's dive deeper into the Zerto platform.

In this blog, I'll talk in more detail about two important aspects of the platform: Sync Types, and Long-Term Retention with Tiering.

First, let's keep in mind some important elements of Zerto: the VPG, VRA, Journal, and Long-Term Retention (LTR) Repositories.

VPG

Zerto protection, mobility, and backup are all configured through the creation of Virtual Protection Groups (VPGs). Most applications have multiple VM dependencies, and the traditional method of protecting VMs individually makes it hard to recover a complete application quickly. You might be able to recover individual VMs quickly, but they will all be recovered to different points in time, and it becomes challenging to get them consistent enough for the application to be in a usable state.

Zerto VPGs differ in an important way: they allow you to protect one or more VMs together in a consistent fashion, ensuring that every point in time inserted into the Zerto journal is the same point in time for all VMs within the VPG. This allows recovery of an entire application, with all its VM dependencies, to a consistent point in time.
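To make that consistency guarantee concrete, here is a minimal Python sketch (hypothetical names, not Zerto code) of the core idea: a checkpoint belongs to the group rather than to any single VM, so recovering the group always lands every VM on the same instant.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VirtualProtectionGroup:
    """Toy model of a VPG: one checkpoint covers every member VM."""
    name: str
    vms: list[str]
    checkpoints: list[datetime] = field(default_factory=list)

    def insert_checkpoint(self) -> datetime:
        # A single timestamp is recorded for the whole group, so all
        # member VMs can later be recovered to this same instant.
        ts = datetime.now(timezone.utc)
        self.checkpoints.append(ts)
        return ts

    def recover(self, checkpoint: datetime) -> dict[str, datetime]:
        # Every VM in the group rolls to the *same* point in time.
        assert checkpoint in self.checkpoints
        return {vm: checkpoint for vm in self.vms}

vpg = VirtualProtectionGroup("erp-app", ["web01", "app01", "db01"])
cp = vpg.insert_checkpoint()
print(vpg.recover(cp))   # all three VMs land on one consistent instant
```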

VRA

A Virtual Replication Appliance (VRA) is a purpose-built virtual appliance installed on each hypervisor host that VMs are protected from or recovered to. The VRA manages the replication of data from protected virtual machines to its local and/or remote target, where it stores the data in the journal. This same scale-out appliance handles copying data from the journal to a long-term retention repository.

Journal

Zerto's continuous data protection (CDP) stores all replicated data in the journal. The journal stores all changes for a user-defined period of up to 30 days and allows you to recover to any point in time within that window, ensuring your recovery point objective (RPO) is always as low as possible. Every write to a protected virtual machine is copied by Zerto. These writes are replicated locally and/or remotely and written to a journal managed by a Virtual Replication Appliance; each protected virtual machine has its own journal. Checkpoints are used to ensure write-order fidelity and crash consistency. Recovery can be performed to the last checkpoint or to a user-selected checkpoint.

This enables recovering files, VMs, applications, or entire sites, either to the most recent crash-consistent point in time or, for example, when a virtual machine is hit by a virus or ransomware, to a point in time before the attack.
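As an illustration of that "rewind" idea (the structures below are hypothetical, not Zerto's API), think of the journal as an ordered list of checkpoints and the recovery step as picking the newest checkpoint older than the attack:

```python
from bisect import bisect_left
from datetime import datetime

# Hypothetical journal: checkpoints sorted oldest -> newest.
checkpoints = [
    datetime(2021, 9, 1, 10, 0, 0),
    datetime(2021, 9, 1, 10, 0, 5),
    datetime(2021, 9, 1, 10, 0, 10),   # ...one every few seconds
]

def checkpoint_before(attack_time: datetime) -> datetime:
    """Newest checkpoint strictly before the given moment."""
    i = bisect_left(checkpoints, attack_time)
    if i == 0:
        raise ValueError("attack predates the journal retention window")
    return checkpoints[i - 1]

# Ransomware detected at 10:00:07 -> recover to the 10:00:05 checkpoint.
print(checkpoint_before(datetime(2021, 9, 1, 10, 0, 7)))
```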


Long-Term Retention (LTR) Repositories

In addition to flexible options for short-term recovery scenarios using the journal, most organizations that have compliance requirements need long-term retention as an integral part of their data protection platform. Compliance standards often require you to keep and recover data for longer than 30 days. Zerto supports the use of disk, object, and cloud storage. You can see a full list of supported repositories and their versions in the Zerto Interoperability Matrix.

Types of Sync

Zerto has four basic types of sync: Initial, Continuous, Bitmap, and Delta.

Initial Sync

After the creation of the VPG, the Initial Sync process starts: a replica disk for each Source Site volume is created at the DR Site. The existing data from each VM is copied by the VRA (Virtual Replication Appliance) at the Source Site to the VRA at the Target Site. The Target Site VRA then writes the information to the journal and commits it to the replica disk. While Initial Sync is taking place, Zerto continues sending real-time changes to the Target Site; it doesn't have to wait for Initial Sync to complete. All data residing on the protected VMs at the source site is copied to the recovery site in order to create the recovery volumes the VPG will be based upon.


Since this sync copies all data from site to site, it is expected to be the slowest of all sync types. Time may vary depending on site bandwidth and the total size to be migrated, and keep in mind that during Initial Sync, checkpoints are not generated and no DR activity is available.
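To put a rough number on it, here's a back-of-the-envelope estimate; the data size, link speed, and utilization figures are assumptions for illustration only:

```python
# Rough Initial Sync estimate: hours = data / effective bandwidth.
data_tb = 10                   # total protected data (assumed)
link_gbps = 1                  # site-to-site link (assumed)
utilization = 0.7              # realistic sustained share of the link

effective_gbps = link_gbps * utilization
hours = (data_tb * 8 * 1000) / (effective_gbps * 3600)
print(f"~{hours:.1f} hours to seed {data_tb} TB over {link_gbps} Gb/s")
# -> ~31.7 hours, before compression helps
```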

Continuous Sync

This is probably the most common type of sync in everyday operation. New writes to the Source Disk are tracked by Zerto as the VMs on the Source Site write to it. The VRA on the Source Site asynchronously captures each block change; the Source Disk then sends an ACK to the Source VM. This ACK is captured by the VRA at the Source Site, and Zerto streams the changes as they are ACK'd. The Source VRA compresses these writes and sends them to the Target VRA, which receives them and writes them to the journal, generating checkpoints. When the journal fills, the next checkpoint created causes the oldest checkpoint to be flushed to the Replica Disk.
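That journal-fill behavior can be pictured as a bounded queue; this toy Python sketch (not Zerto's implementation) shows how a full journal evicts its oldest checkpoint to the replica disk:

```python
from collections import deque

journal = deque(maxlen=5)      # bounded journal (5 checkpoints, for demo)
replica_disk = []              # oldest data is committed here

def write_checkpoint(cp: str) -> None:
    # When the journal is at capacity, the next insert evicts the
    # oldest checkpoint, which is flushed to the replica disk.
    if len(journal) == journal.maxlen:
        replica_disk.append(journal[0])
    journal.append(cp)

for n in range(8):
    write_checkpoint(f"cp-{n}")

print(list(journal))    # ['cp-3', ..., 'cp-7'] still recoverable
print(replica_disk)     # ['cp-0', 'cp-1', 'cp-2'] committed to replica
```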


Bitmap Sync

Bitmap Sync occurs when there are insufficient resources to maintain replication of the I/O load of the protected application, or in case of a communication failure between the sites. Several factors can contribute to a lack of resources (network bottlenecks, WAN failures, disconnections, exhausted VRA buffers, etc.). Note: the VRA buffer size can be set via the "Amount of VRA RAM" value, specified when the VRA is installed.

In this situation, Zerto starts to maintain a smart bitmap in host memory, tracking and recording the storage areas that changed. Because the bitmap is kept in memory, Zerto does not require any LUN or volume per VPG on the protected side. The bitmap is small and scales dynamically, but it does not contain any actual I/O data; it contains only references to the areas of the protected disk that have changed. The bitmap is stored locally in ESXi kernel memory using available resources and is handled by Zerto's zdriver.


When communication is re-established, the Source VRA sends the updates to the Target VRA; these changes are then written to the journal and committed to the Replica Disk. When this process is complete, Zerto resumes Continuous Sync.


In terms of transfer time, since no overall check is performed and only the relevant data is transferred between sites, this is expected to be the fastest sync. Time may vary depending on the amount of data generated while the sites were disconnected and on site bandwidth.

During Bitmap Sync, DR activity is available; however, checkpoints are not written to the journal. If a disaster occurs requiring a failover during a bitmap synchronization, the VPG status changes to "Recovery Possible" and you can recover to the last checkpoint written to the journal.
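To see why the bitmap stays so small, picture it as a set of dirty-block references with no payload; the sketch below (illustrative only, not Zerto's code) shows that repeated writes to the same block cost nothing extra and that resync ships only the changed regions:

```python
dirty_blocks: set[int] = set()   # references only -- no actual I/O data

def on_write(block_index: int) -> None:
    # During the outage, only *which* block changed is remembered.
    dirty_blocks.add(block_index)

def resync(read_block) -> int:
    # On reconnect, read and ship just the dirty blocks, then resume
    # continuous replication.
    sent = 0
    for idx in sorted(dirty_blocks):
        payload = read_block(idx)    # fetched from the protected disk now
        # ...send(payload) to the Target VRA...
        sent += 1
    dirty_blocks.clear()
    return sent

for idx in (7, 7, 42, 1003):     # repeated writes cost nothing extra
    on_write(idx)
print(resync(lambda i: b"\x00" * 4096))   # -> 3 blocks shipped, not the whole disk
```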

Delta Sync

Delta Sync occurs after a long interruption in replication. When communication is re-established, the Delta Sync process starts and Zerto performs a checksum analysis so that only data new to the protected disks is migrated. In Delta Sync, Zerto compares the source and target VMDK volumes by MD5 checksums to ensure there are no inconsistencies at the MD5 level of the disk.

CPU usage might be impacted at both Source and Target Sites, as Zerto determines what has changed and overwrites those blocks on the Target Site. Delta Sync operates only on the replica disk, not on the journal.
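The checksum comparison itself can be sketched in a few lines; this is illustrative only, using per-block MD5 digests as the text describes:

```python
import hashlib

BLOCK = 4096

def changed_blocks(source: bytes, target: bytes):
    """Yield indices of blocks whose MD5 digests differ."""
    for i in range(0, max(len(source), len(target)), BLOCK):
        src, tgt = source[i:i + BLOCK], target[i:i + BLOCK]
        if hashlib.md5(src).digest() != hashlib.md5(tgt).digest():
            yield i // BLOCK

src = bytearray(b"\x00" * BLOCK * 4)
tgt = bytes(src)
src[BLOCK * 2] = 0xFF                          # one block diverged
print(list(changed_blocks(bytes(src), tgt)))   # [2] -- only block 2 is resent
```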


During the synchronization, new checkpoints are not added to the journal, but recovery operations are still possible. If a disaster occurs requiring a failover during a delta synchronization, you can recover to the last valid checkpoint written to the journal.

Long-Term Retention and Tiering in the Cloud

When Long-term Retention runs for a VPG for the first time, all the data is read and written to the Repository (the Full Retention Set). Each subsequent incremental Retention process incorporates the changes since the previous run to provide a complete view of the data at that point in time. The incremental copies rely on the existing full Retention set for that VPG. (Note: data is compressed and transmitted on a user-defined schedule.)
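Conceptually (with hypothetical names), a restore from Long-term Retention layers the incrementals over the full set up to the requested run:

```python
def restore(full_set: dict, incrementals: list[dict], upto: int) -> dict:
    """Rebuild the point-in-time view by layering incrementals over the full set."""
    state = dict(full_set)
    for changes in incrementals[:upto]:
        state.update(changes)        # each incremental carries only the deltas
    return state

full = {"blockA": "v0", "blockB": "v0"}
incs = [{"blockA": "v1"}, {"blockB": "v2"}]
print(restore(full, incs, upto=2))   # {'blockA': 'v1', 'blockB': 'v2'}
```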

You can make the Retention settings immutable, so they cannot be modified after creation. This protects against ransomware trying to delete or change the Long-term data residing within the Repository.

For now, the Immutability feature is available for the following Repository types:

  • Amazon S3

The Immutability feature is based on two capabilities of the cloud storage service:

  • S3 Bucket versioning (storing multiple variants of an object).
  • S3 Object lock (storing objects using a write-once-read-many (WORM) model), using Compliance Retention mode.
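For context, both capabilities map to standard AWS APIs. A minimal boto3 sketch of preparing such a bucket might look like this; the bucket name is hypothetical, and note that S3 Object Lock can only be enabled when the bucket is created:

```python
import boto3

s3 = boto3.client("s3")
bucket = "zerto-ltr-demo"   # hypothetical bucket name

# Object Lock can only be turned on at creation time; this also enables
# versioning on the bucket automatically. (Region config omitted for brevity.)
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Default retention in Compliance mode: objects cannot be overwritten or
# deleted by any user until the retention period expires.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```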

Tiering

The tiering process is an asynchronous task initiated periodically. It is set at the Repository level and applies to all future Retention sets created on that Repository. The user can choose to tier the data after a specified period to an online tier and/or an offline tier. Only Full Retention sets without dependent incremental Retention sets can be tiered, and tiered Retention sets expire based on the Retention policy like any other Retention set.
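That eligibility rule can be expressed as a simple filter; the names below are illustrative, not Zerto's data model:

```python
from dataclasses import dataclass

@dataclass
class RetentionSet:
    set_id: str
    is_full: bool
    depends_on: str | None = None    # incrementals point at their full set

def tierable(sets: list[RetentionSet]) -> list[str]:
    """Full sets that no incremental still depends on may be tiered."""
    referenced = {s.depends_on for s in sets if s.depends_on}
    return [s.set_id for s in sets if s.is_full and s.set_id not in referenced]

sets = [
    RetentionSet("full-01", True),    # old full set, its chain has expired
    RetentionSet("full-02", True),
    RetentionSet("inc-02a", False, depends_on="full-02"),
]
print(tierable(sets))   # ['full-01'] -- full-02 still has a dependent incremental
```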


Once more: Welcome, Zerto!

Zerto is a powerful, simple-to-run platform that embodies the term IT resilience. We are happy to have Zerto as part of our HPE Storage family.


Meet HPE Storage Experts blogger Deivis Augusto Carobin

Deivis is an enthusiastic HPE Storage Ambassador and HPE Storage Solutions Architect, an HPE Master ASE (Advanced Server Solutions Architect), ZCP: Enterprise Engineer, and HPE Nimble Storage Instructor. For over twenty years he has been developing projects involving high-availability and performance solutions featuring a variety of HPE Storage technologies. He currently lives and works in southern Brazil.


Storage Experts
Hewlett Packard Enterprise

twitter.com/HPE_Storage
linkedin.com/showcase/hpestorage/
hpe.com/storage

 

 

About the Author

StorageExperts

Our team of Hewlett Packard Enterprise storage experts helps you dive deep into relevant data storage and data protection topics.