Around the Storage Block
MichaelMattsson

Container shows post-mortem: Red Hat Summit, DockerCon and HPE Discover

In preparation for a trade show there's a flurry of activity getting software releases ready to either ship or announce, with a chance for customers and prospects to come visit our booths for a demo or a deep-dive discussion. We often screencast and record demos that get played at the show, and post-show the assets get abandoned while we roll up our sleeves for the next activity on the calendar. Given the fair amount of work that goes into creating these assets, we've come around to tidying them up and publishing them on YouTube for broader consumption.

Red Hat Summit
If you were interested in persistent storage for containers from HPE, we had a ton of demos in our booth at Red Hat Summit, including an in-booth presentation where we walked through importing a VMware VMDK file into a native Persistent Volume using Red Hat OpenShift Container Platform and the HPE Nimble Kube Storage Controller. In a recent press release we announced our strengthened partnership with Red Hat by highlighting our end-to-end HPE solution with OpenShift, including HPE Nimble Storage and the HPE Synergy Composable Infrastructure platform, complete with services from HPE PointNext. An elaborate reference architecture paper is slated for a September release. With this announcement we also got a chance to present at Red Hat Summit and deliver our key message: Going from OpenShift POC to production: How to accelerate this path with HPE

For the demos at the show we created two completely new assets to highlight some of the most prominent use cases for persistent storage for OpenShift.

Boost developer productivity for mission-critical databases
In this use case demo we walk through the scenario of running an 800GB MariaDB database on a production OpenShift cluster. With the help of a simple HPE Nimble Storage Protection Template we replicate the database to a downstream HPE Nimble Storage array. Once the data has been replicated, a separate dev/test OpenShift cluster can use a clone of the replica for DevOps purposes and completely offload the production environment.

MariaDB is used in the above example, but the same principle can be applied to any database capable of running in a container.
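To make the workflow concrete, here's a minimal sketch of how the dev/test cluster could request a writable clone of the replicated volume. The provisioner name and the `cloneOf` parameter are illustrative assumptions for this sketch, not the documented interface; consult the HPE Nimble Storage Integration Guide for the exact supported options.

```yaml
# StorageClass on the dev/test cluster that clones the replicated volume.
# NOTE: the provisioner and parameter names are assumptions for illustration.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mariadb-devtest-clone
provisioner: hpe.com/nimble
parameters:
  cloneOf: "mariadb-prod-replica"   # downstream replica volume (assumed name)
---
# PVC a developer would submit to get a private, writable clone.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-clone-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Gi
  storageClassName: mariadb-devtest-clone
```

Because the clone is array-side, the developer gets a full copy of the 800GB database in seconds while consuming only delta capacity.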

Deliver enterprise-grade persistent volumes for multi-tenant CaaS
Another use case demo shows how we can help IT Ops and service providers partition storage resources such as capacity, IOPS and throughput on an HPE Nimble Storage array. It enables end users to be completely self-service on shared infrastructure resources for a cloud-like experience while still being in control of their own destiny. In the example we use two OpenShift clusters designated "prod" and "test", each capped at certain limits to ensure SLOs and SLAs can be met. Users can also specify their own QoS limits and capacity within their allotted budget.
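A tenant-scoped tier like the "test" cluster's could be expressed as a StorageClass along these lines. The `folder`, `limitIOPS` and `limitMBPS` parameters mirror options exposed by the HPE Nimble Storage Docker Volume plugin, but treat the exact names and units here as assumptions and verify them against the integration guide.

```yaml
# StorageClass for the "test" cluster with per-volume QoS caps.
# Parameter names are assumed for illustration; verify against the
# HPE Nimble Storage Integration Guide.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: test-tier
provisioner: hpe.com/nimble
parameters:
  folder: "openshift-test"   # array folder carrying this tenant's capacity cap
  limitIOPS: "1000"          # per-volume IOPS ceiling
  limitMBPS: "50"            # per-volume throughput ceiling
```

Any PVC referencing `test-tier` then lands inside the tenant's budget automatically, with no ticket to IT Ops.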

HPE Nimble Storage is part of the OpenShift Primed program and just before the show we published an HPE Nimble Storage Integration Guide for Red Hat OpenShift and OpenShift Origin.

DockerCon and HPE Discover
June brought two intense weeks where DockerCon and HPE Discover were held one after the other. Looking ahead, the same will go down in Europe with both shows' EU counterparts. Stay tuned for more content after those shows! From a persistent storage for containers perspective, all the excitement was centered on the tech preview of the HPE Cloud Volumes integration. I wrote a blog post prior to DockerCon that talks at length about the integration and what we've managed to accomplish with this new and exciting feature for HPE Cloud Volumes. The two demos below will spark an idea of what we'll be able to do in the public cloud by the end of the year.

HPE Nimble Storage Container Provider for HPE Cloud Volumes: Basic provisioning
This tech preview walks through a few basic examples of how to use the HPE Nimble Storage Container Provider (full product name TBD) for HPE Cloud Volumes. In the examples we're using Docker Enterprise Edition 2.0 on AWS and provision Docker Volumes for both legacy Docker Swarm and Kubernetes.

Note: There's also a narrated version of this demo available here!
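For the Swarm side of the demo, a stack file declaring a volume backed by the plugin might look like the following. The driver name `cv` and the option names are assumptions for this sketch; the tech preview's actual names may differ.

```yaml
# Docker stack file with a volume provisioned through the HPE Cloud
# Volumes plugin. Driver name and options are illustrative assumptions.
version: "3.4"
services:
  db:
    image: mariadb:10.3
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:
    driver: cv
    driver_opts:
      size: "100"   # requested volume size in GiB (assumed unit)
```

On the Kubernetes side of Docker EE 2.0, the equivalent would be a StorageClass and PVC pointed at the same provider.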

HPE Nimble Storage Container Provider for HPE Cloud Volumes: CI/CD pipelines
What is unique about HPE Cloud Volumes compared to traditional EBS volumes is the ability to instantaneously snapshot, clone and restore a volume very efficiently, without involving S3 or any other copy offload. By leveraging this technology, large amounts of data, used for a transactional workload in the example below, lose gravity. All of a sudden it's not really important whether the database is a few megabytes or a few hundred terabytes. Snapshotting and cloning for auxiliary purposes such as CI/CD and ETL pipelines is downright trivial and done in seconds. The demo integrates snapshots and clones into a Jenkins pipeline to build, ship and test an API gateway sitting in front of an 800GB MariaDB database. What is relevant to understand here is that this database could've been replicated from an HPE Nimble Storage array on-premises to HPE Cloud Volumes and cloned at will into a new volume to be used on AWS for cloud bursting or cloud on-ramp use cases.
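The shape of such a pipeline can be sketched as a declarative Jenkinsfile. Everything below is illustrative: the manifest file names (`cv-clone.yaml`, `test-stack.yaml`) and the test script are assumed helpers, not part of the actual demo's sources.

```groovy
// Sketch of a Jenkins pipeline that tests against a clone of the
// production database, then discards it. All file names are assumptions.
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'docker build -t api-gateway:${BUILD_NUMBER} .' }
    }
    stage('Clone database') {
      // Request a writable clone of the 800GB MariaDB volume; with
      // array-side cloning this completes in seconds regardless of size.
      steps { sh 'kubectl apply -f cv-clone.yaml' }
    }
    stage('Test') {
      steps { sh 'kubectl apply -f test-stack.yaml && ./run-integration-tests.sh' }
    }
  }
  post {
    always {
      // Tear down the clone; only delta blocks were ever consumed.
      sh 'kubectl delete -f test-stack.yaml -f cv-clone.yaml'
    }
  }
}
```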

There are so many awesome use cases that containers and our integration enable, with both HPE Cloud Volumes and traditional HPE Nimble Storage arrays. What is bogging down your project when it comes to persistent storage for containers? Challenge us in the comments below!

About the Author

MichaelMattsson

Data & Storage Nerd, Containers, DevOps, IT Automation