HPE Nimble Storage Tech Blog
Dory: A FlexVolume Driver that speaks Whale!

mmattsson

I recently covered our plans to release an HPE Nimble Storage FlexVolume driver for the Kubernetes Persistent Storage FlexVolume plugin. While we're well on track to release it through InfoSight shortly, a chain of events has led us to open source the entire FlexVolume driver! Let's explore that a little further and understand what it means - this is exciting!

Our FlexVolume driver is merely a translation layer: it rewrites FlexVolume API calls into Docker Volume API calls. We originally intended to hardwire the driver to look only for Nimble Storage plugin sockets. Then our friends over at 3PAR got wind of what we were working on, and they too wanted to get on the Kubernetes bandwagon. It made sense. Lifting the hardwired code out into a JSON configuration file made it very easy to point the translator at any socket that speaks the Docker Volume API.
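To make the translation concrete, here is a rough sketch of what a rewritten call looks like. The volume name, size option, and socket path below are illustrative; /VolumeDriver.Create and /VolumeDriver.Mount are standard Docker Volume API endpoints:

```shell
# Illustrative sketch (names and paths are examples): the Docker Volume API
# is plain JSON POSTed to the plugin's Unix socket. A FlexVolume request
# carrying options such as name and size would be rewritten into a Create
# payload like this (validated here with Python's json module):
cat << 'EOF' | python3 -m json.tool
{
    "Name": "mydockervol100",
    "Opts": { "size": "20" }
}
EOF
# ...which could then be delivered to the plugin socket, e.g.:
# curl --unix-socket /run/docker/plugins/myvendor.sock \
#      -d @payload.json http://localhost/VolumeDriver.Create
```

The same pattern applies to the other endpoints (Mount, Unmount, Remove), which is why a single generic translator can front any plugin that speaks this API.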

Let me introduce Dory - she's the FlexVolume driver that speaks whale! Any legacy volume plugin that works with Docker (managed plugins work too, but Kubernetes generally does not recommend running anything other than Docker 1.12.x) may now be used with Kubernetes 1.5, 1.6, and 1.7 - and their OpenShift counterparts - to provision Persistent Storage for your Kubernetes pods. Dory is open source, released under the Apache 2.0 license, and available on GitHub.

If you have a solution today with a Docker Volume plugin, you may use that plugin with Dory to provide Persistent Storage for Kubernetes.

Building and Enabling Dory

Let’s assume we have Kubernetes installed and a vendor’s Docker Volume plugin installed, configured and working. These are the simple steps to build, install and use Dory.

Building Dory requires Go and make to be installed on your host OS; please follow your Linux distribution's instructions for installing those tools before proceeding. There are also more detailed build instructions here.

Note: Substitute any reference to 'myvendor' with the name of the actual Docker Volume plugin you want to use. If you're using Nimble Storage, 'nimble' is the correct string.

$ git clone https://github.com/hpe-storage/dory
$ cd dory
$ make gettools
$ make dory
$ sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor
$ sudo cp src/nimblestorage/cmd/dory/dory.json /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor.json
$ sudo cp bin/dory /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor

We now have the driver installed. Let's walk through the basic configuration (annotated here for clarity):

# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dory~myvendor/myvendor.json
{
    # Where to log API calls
    "logFilePath": "/var/log/myvendor.log",

    # Be very verbose about the API calls?
    "logDebug": false,

    # Does the underlying driver understand kubernetes.io/<string> calls?
    "stripK8sFromOptions": true,

    # This is where our plugin API socket resides
    "dockerVolumePluginSocketPath": "/run/docker/plugins/myvendor.sock",

    # Does the Docker Volume plugin support creation of volumes?
    "createVolumes": true
}
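Since standard JSON parsers reject comments, the deployed file should contain only the bare keys. A quick way to sanity-check a comment-free copy before restarting anything (a sketch; the temp path is arbitrary):

```shell
# Write the comment-free config and verify it parses as valid JSON.
# (The real file lives under the dory~myvendor directory created above.)
cat > /tmp/myvendor.json << 'EOF'
{
    "logFilePath": "/var/log/myvendor.log",
    "logDebug": false,
    "stripK8sFromOptions": true,
    "dockerVolumePluginSocketPath": "/run/docker/plugins/myvendor.sock",
    "createVolumes": true
}
EOF
python3 -m json.tool < /tmp/myvendor.json > /dev/null && echo "valid JSON"
```

A malformed file fails fast here instead of surfacing later as a confusing kubelet error.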

With the driver configured and ready to provision and mount volumes, we need to restart the kubelet node service so it picks up the new plugin.

If running Kubernetes:
$ sudo systemctl restart kubelet

If running OpenShift:
$ sudo systemctl restart atomic-openshift-node

If everything checks out, you should be able to inspect your log file for successful initialization:
Info : 2017/09/18 16:37:40 dory.go:52: [127775] entry  : Driver=myvendor Version=1.0.0-ae48ca4c Socket=/run/docker/plugins/myvendor.sock Overridden=true
Info : 2017/09/18 16:37:40 dory.go:55: [127775] request: init []
Info : 2017/09/18 16:37:40 dory.go:58: [127775] reply  : init []: {"status":"Success"}

Hello World from Dory

Now, let’s create some resources on our Kubernetes cluster. First, we need a Persistent Volume:
$ kubectl create -f - << EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv100
spec:
  capacity:
    storage: 20Gi          # This is the capacity we'll claim against
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: dory/myvendor  # This maps to the dory~myvendor directory created above
    options:               # All options are vendor dependent
      name: mydockervol100 # This is the actual Docker volume name
      size: "20"           # This is also vendor dependent!
EOF

If you're paying attention, you'll notice that no actual volume has been created at this point. The FlexVolume plugin is very basic, and we call the Docker Volume Create API during the FlexVolume mount phase instead.

Now, let’s create a claim against the above volume:
$ kubectl create -f - << EOF
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc100
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
EOF

For a very basic application that requires some persistent storage and is easy to demo:
$ kubectl create -f - <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: minio
    image: minio/minio:latest
    args:
    - server
    - /export
    env:
    - name: MINIO_ACCESS_KEY
      value: minio
    - name: MINIO_SECRET_KEY
      value: doryspeakswhale
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: export
      mountPath: /export
  volumes:
    - name: export
      persistentVolumeClaim:
        claimName: pvc100
EOF

When the pod gets created and a mount request comes in, you should see the actual volume created:
$ docker volume ls
DRIVER              VOLUME NAME
nimble              mydockervol100

On the Kubernetes side it should now look something like this:
$ kubectl get pv,pvc,pod -o wide
NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM            STORAGECLASS   REASON    AGE
pv/pv100   20Gi       RWO           Retain          Bound     default/pvc100                            11m

NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/pvc100   Bound     pv100     20Gi       RWO                          11m

NAME                          READY     STATUS    RESTARTS   AGE       IP             NODE
po/mypod                      1/1       Running   0          11m       10.128.1.53    tme-lnx1-rhel7-stage.lab.nimblestorage.com

Within the cluster, you should now be able to reach Minio at http://10.128.1.53:9000 (the pod IP from the output above).

Summary

What we can witness here is just the tip of the iceberg. We're currently building a Kubernetes out-of-tree StorageClass provisioner to accompany the FlexVolume driver and enable true dynamic provisioning. In a production scenario, users are typically not allowed to create Persistent Volume resources directly. Overlaying the Persistent Volume (PV) and Persistent Volume Claim (PVC) process with a StorageClass allows the end user to reference the StorageClass directly from the PVC, which in turn creates the PV. The StorageClass itself is defined by the cluster admin.
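To give a flavor of where this is headed, here is a hypothetical sketch of what dynamic provisioning could look like on Kubernetes 1.6+ once such a provisioner exists. The provisioner name and parameters below are invented for illustration, not a shipping product:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myvendor-standard
provisioner: dory.myvendor.com/provisioner  # hypothetical provisioner name
parameters:
  size: "20"                                # vendor dependent, as above
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-pvc100
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: myvendor-standard       # the claim creates the PV for you
  resources:
    requests:
      storage: 20Gi
```

Note that no PersistentVolume object is declared by the user at all - that is the whole point of dynamic provisioning.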

It's also worth mentioning that we are active collaborators on the Container Storage Interface (CSI) specification. CSI is an orchestrator-agnostic storage management framework for containers that will eventually mature into "one interface to rule them all". In the meantime, we see Dory and the proven Docker Volume API as a perfectly good example of how to deploy persistent storage for containers into production today.

While we encourage everyone to kick the tires on Dory, this project is supported only through GitHub on a best effort basis. Nimble Support is not able to assist with any issues. A fully supported version of the FlexVolume driver for HPE Nimble Storage and 3PAR will be released through the official channels later this year.

Don't forget to check out the code at https://github.com/hpe-storage/dory

About the Author

mmattsson
