How to: Persistent Volumes with the HPE 3PAR Volume Plug-in for Docker - Snapshots/Clones: Part 2

How to: Persistent Volumes using the HPE 3PAR Volume Plug-in for Docker

A detailed guide – Part 2 – Snapshots and Clones

This is a continuation of my previous post on volume provisioning. If this is your first time using the HPE 3PAR Volume Plug-in for Docker, start there, because it covers all of the options for creating a volume within Docker and for using a StorageClass within Kubernetes.

In this post, we will cover examples of how to create snapshots and clones. This is pretty straightforward with a Docker command, so I won't go into much depth there and will focus primarily on creating snapshots/clones within Kubernetes. You can find all of the supported Docker commands on the official GitHub page.

https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/usage.md

Let’s jump right into this. One of the most important parts of DevOps is the ability to work with production data that is as current as possible, so being able to take snapshots of production application data is critical. Just as important is the ability to take that snapshot from within Docker and Kubernetes, which can then be integrated into continuous integration and deployment (CI/CD) pipelines.

Volume management (snapshots & clones)

We can use the HPE 3PAR Volume Plug-in to clone or take snapshots of persistent volumes in Docker and Kubernetes. This process copies the underlying 3PAR volumes, which can then be used in additional workflows. The process to create snapshots or clones with the HPE 3PAR Volume Plug-in for Docker is identical except for the option used:

  • virtualCopyOf—Takes a snapshot of a persistent volume (Docker or Kubernetes)
  • cloneOf—Creates a clone (full copy) of a persistent volume (Docker or Kubernetes)

Docker

Let’s look at a Docker example. In this scenario, we have a MongoDB database running with a persistent volume. We need to clone/snapshot this volume, then mount the new volume to a development instance to run tests against.

First let’s get a list of the Docker volumes that are available.

$ docker volume ls
DRIVER              VOLUME NAME
local               c10216b50bd8650b22da7d89ee19953a8706afca356e871bff3e3803898b560c
hpe                 mongo-prod1

We will make a copy of the parent volume using the virtualCopyOf (snapshot) or cloneOf (clone) option.

$ docker volume create -d hpe --name snap_mongo_prod1 -o virtualCopyOf=mongo-prod1

or

$ docker volume create -d hpe --name snap_mongo_prod1 -o cloneOf=mongo-prod1

If you look at this command, you will notice that we aren't specifying the 3PAR volume name; mongo-prod1 is the Docker volume name. The 3PAR Volume Plug-in maps the Docker volume name to the underlying 3PAR volume, creates the 3PAR clone/snapshot, and presents it back to Docker for use.

You can see this if you inspect the volume for the parent_volume parameter.

$ docker volume inspect snap_mongo_prod1
[
    {
        "Driver": "hpe",
        "Labels": {},
        "Mountpoint": "/",
        "Name": "snap_mongo_prod1",
        "Options": {
            "virtualCopyOf": "mongo-prod1"
        },
        "Scope": "global",
        "Status": {
            "snap_detail": {
                "3par_vol_name": "dcs-I11QgFZyRtGD6ntSKRZExg",
                "backend": "DEFAULT",
                "compression": null,
                "expiration_hours": null,
                "fsMode": null,
                "fsOwner": null,
                "is_snap": true,
                "mountConflictDelay": 30,
                "parent_id": "a162a318-aaf4-4f85-88ae-7e4cdb0c63ba",
                "parent_volume": "mongo-prod1",
                "provisioning": "thin",
                "retention_hours": null,
                "size": 100,
                "snap_cpg": "FC_r6"
            }
        }
    }
]

After the clone/snapshot has been created, you can mount and use it just like any other persistent volume.
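
For example, here is a minimal sketch of mounting the snapshot into a development MongoDB instance (the container name, image, and mount path are illustrative assumptions, not part of the original workflow):

$ docker run -d --name mongo-dev -v snap_mongo_prod1:/data/db mongo

Docker resolves snap_mongo_prod1 through the hpe volume driver, so the snapshot is mounted into the container just like the original volume.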

Snapshot scheduling

Another important feature of snapshots, especially within CI/CD or automation scenarios, is lifecycle management. When you create a snapshot, you have the option to specify the retention and expiration time.

Snapshot optional parameters are:

  • expirationHours—Specifies the expiration time for a snapshot in hours. The snapshot will be deleted automatically from the 3PAR array after the time defined in expirationHours has passed.
  • retentionHours—Specifies the retention time for a snapshot in hours. The snapshot cannot be deleted from the 3PAR array until the number of hours defined in retentionHours has expired (see the combined example below).

$ docker volume create -d hpe --name <snapshot_name> -o virtualCopyOf=<source_vol_name> -o expirationHours=3
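
You can also combine both options on a single snapshot. Here is a sketch (the snapshot name is illustrative; the retention period should not exceed the expiration period, since a snapshot under retention cannot be deleted):

$ docker volume create -d hpe --name snap_mongo_prod1_tmp -o virtualCopyOf=mongo-prod1 -o expirationHours=6 -o retentionHours=3

This snapshot cannot be deleted for the first three hours and is removed from the array automatically after six hours.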

For more information on snapshot schedules, go here:

https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/create_snapshot_schedule.md
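
As a sketch, a schedule can also be attached when creating a snapshot from Docker. The option names below mirror the schedule parameters used in the StorageClass example later in this post; the volume and schedule names are illustrative, and the linked documentation is the authoritative reference for the exact syntax:

$ docker volume create -d hpe --name sched_snap_mongo_prod1 -o virtualCopyOf=mongo-prod1 -o scheduleName=dailyOnceSchedule -o scheduleFrequency="10 2 * * *" -o snapshotPrefix=mongo-daily -o expHrs=5 -o retHrs=3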

Kubernetes

Now let’s take this same example and look at how to use snapshots within Kubernetes objects like StorageClasses and PVCs. There are a couple of ways you can create clones/snapshots, and which one you use will depend on your needs.

StorageClass method (snapshot & clone)—Multiple clones/snapshots

You can use a snapshot or clone within a StorageClass so that every PVC based on that StorageClass gets its own snapshot or copy of the parent Kubernetes Persistent Volume. You can use this method in development/testing scenarios when you need to spin up multiple instances of production data without impacting the parent volume. As you create new PVCs from the clone/snapshot StorageClass, a new PV is created from the parent volume for each one.

As with snapshots in Docker, in order to make a clone/snapshot within a StorageClass or PersistentVolume definition, you must reference an existing Persistent Volume (PV) in Kubernetes.

First, we need to find the PV name of the production MongoDB instance so we can use it within our StorageClass or PVC.

$ kubectl get pv

In our example, this command returns the PV name: sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c

NAME                                              CAPACITY   ACCESS MODES   
sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c   50Gi       RWO

Now that you have the name of the volume for the production MongoDB, let's create the clone/snapshot StorageClass object. Like you did with Docker, use the virtualCopyOf (snapshot) or cloneOf (clone) option and specify the PV name.

$ vi sc_snap_mongodb.yml

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-snap-mongo
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: "sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c"

Use the kubectl create command to import the StorageClass definition into the Kubernetes cluster.

$ kubectl create -f sc_snap_mongodb.yml

Next let's create a PVC that uses the snapshot-based StorageClass. You create the PVC for a snapshot/clone the same way as any other PVC, but there is one important difference. A typical PVC requests a certain amount of storage from a StorageClass. However, because we are using a StorageClass that creates a snapshot/clone, the newly created volume inherits the size of the parent volume. As a result, the plug-in ignores the storage size request in the PVC (even though the field is still required), no matter what value you specify.

$ vi pvc_snap_mongodb.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-snap-mongo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: sc-snap-mongo

Use the kubectl create command to import the PVC definition into the Kubernetes cluster.

$ kubectl create -f pvc_snap_mongodb.yml

You can then inspect the PVC and PV to verify that they were created successfully.

$ kubectl describe pvc pvc-snap-mongo
Name:          pvc-snap-mongo
Namespace:     default
StorageClass:  sc-snap-mongo
Status:        Bound
Volume:        sc-snap-mongo-05be459e-c5b2-11e9-b500-0050569bb07c
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=hpe.com/hpe
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWO
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  ExternalProvisioning  11s (x2 over 11s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "hpe.com/hpe" or manually created by system administrator

$ kubectl describe pv sc-snap-mongo-05be459e-c5b2-11e9-b500-0050569bb07c
Name:            sc-snap-mongo-05be459e-c5b2-11e9-b500-0050569bb07c
Labels:          <none>
Annotations:     hpe.com/docker-volume-name=sc-snap-mongo-05be459e-c5b2-11e9-b500-0050569bb07c
                 pv.kubernetes.io/provisioned-by=hpe.com/hpe
                 volume.beta.kubernetes.io/storage-class=sc-snap-mongo
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    sc-snap-mongo
Status:          Terminating (lasts 1m)
Claim:           default/pvc-snap-mongo
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     hpe.com/hpe
    FSType:
    SecretRef:  <nil>
    ReadOnly:   false
    Options:    map[name:sc-snap-mongo-05be459e-c5b2-11e9-b500-0050569bb07c virtualCopyOf:sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c]
Events:         <none>

As you can see, creating a PVC from a snapshot-based StorageClass is similar to creating any other PVC. Now that the snapshot-based StorageClass is available within Kubernetes, it can be used for any of your testing or development needs.
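
For example, here is a minimal sketch of a development pod that mounts the new claim (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: mongo-dev
spec:
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumes:
    - name: mongo-data
      persistentVolumeClaim:
        claimName: pvc-snap-mongo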

Before we move into creating one-off clones/snapshots, I want to make sure I cover setting up a snapshot schedule within a StorageClass.

Here is what a StorageClass looks like with a snapshot and its associated schedule:

$ vi sc_snapshot_schedule.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-snapshot-schedule
provisioner: hpe.com/hpe
parameters:
  virtualCopyOf: "sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c"
  scheduleFrequency: "10 2 * * *"
  scheduleName: "dailyOnceSchedule"
  snapshotPrefix: "mongo-daily"
  expHrs: "5"
  retHrs: "3"

Notes:

  1. This StorageClass creates a snapshot from the parent volume
    sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c.
  2. It creates a snapshot schedule on the HPE 3PAR array named dailyOnceSchedule.
  3. scheduleFrequency uses a cron-style string, so "10 2 * * *" creates a snapshot daily at 2:10 a.m.
  4. Each snapshot created by this schedule will have the prefix 'mongo-daily'. These snapshots will have a retention period of three hours and an expiration period of five hours.

After you have created your schedule, use the kubectl create command to create the StorageClass.
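
Using the file from the example above:

$ kubectl create -f sc_snapshot_schedule.yml

PVCs that reference sc-snapshot-schedule are then created exactly as shown earlier with sc-snap-mongo.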

Persistent Volume method (snapshot & clone)—Single clone/snapshot

In this scenario, we will cover how you can create a one-off clone/snapshot of a PersistentVolume (PV) where you don’t need to have multiple copies of the data available within the Kubernetes cluster.

You can do this by manually creating a PersistentVolume and specifying the name of the parent PV in the virtualCopyOf or cloneOf option, similar to the way you did within the StorageClass.

$ vi snapshot_pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: hpe.com/hpe
    options:
      name: pv-1
      virtualCopyOf: "sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c"
  storageClassName: manual

Use the kubectl create command to create the PV.

$ kubectl create -f snapshot_pv.yml

Let's describe the PV to see its details.

$ kubectl describe pv pv-1

Name:            pv-1
Labels:          <none>
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     hpe.com/hpe
    FSType:
    SecretRef:  <nil>
    ReadOnly:   false
    Options:    map[virtualCopyOf:sc-mongodb-633be241-c5a5-11e9-b500-0050569bb07c name:pv-1]
Events:         <none>

Now that the PV is available, you can create a PVC for it to bind to so that it can be used within your application.

$ vi pvc_pv1.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Use the kubectl create command to create the PVC.

$ kubectl create -f pvc_pv1.yml

You can see that the manually created snapshot/clone is now bound to that PVC and is ready to be mounted to any application or pod.

$ kubectl describe pvc pvc-pv1
Name:          pvc-pv1
Namespace:     default
StorageClass:  manual
Status:        Bound
Volume:        pv-1
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWO
Events:        <none>

This is a perfect example of my earlier point. In the PVC we requested 3 GiB of storage, but because the volume is a snapshot/clone, the storage request is ignored and the PVC reports the 20 GiB capacity defined on pv-1 instead.
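
As before, a minimal pod sketch (pod name and image are illustrative) shows the claim mounted like any other PVC:

apiVersion: v1
kind: Pod
metadata:
  name: mongo-test
spec:
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumes:
    - name: mongo-data
      persistentVolumeClaim:
        claimName: pvc-pv1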

Summary

We have covered a lot over these two posts, from volume provisioning to snapshots and cloning volumes. With this information, you should be able to quickly master the many options available in the HPE 3PAR Volume Plug-in for Docker for today's demanding containerized workloads.

Happy coding!
