HPE Storage Tech Insiders

The container paradigm shift


Operating-system-level virtualization has been around for over 30 years, from the venerable UNIX 'chroot' command to FreeBSD Jails, Solaris Zones and LXC. They all exercise the same core principles and similar system abstractions. I see two very similar patterns when comparing virtual machines, or virtual hardware (which has been around since the dawn of computing), to OS-level virtualization. Underneath both of these virtualization techniques lies a fairly complex interface that is not very operator friendly. What made both succeed was the ability to manage resources in terms more appealing to the end user. For virtual machines, that user was the server admin, who could point and click to provision a new server instead of procuring, racking and imaging one. With OS-level virtualization, the application runtime environment becomes immutable, portable and easily distributed. Developers and operations teams are raving daily about the common ground they've now established as the shared interface for building, shipping and running applications. Developers get the agility they need to innovate, and operations teams can focus on architecting a more secure, reliable and available data center for the software supply chain, which is the very essence of the business.

The software supply chain for a container has multiple facets. We’ve made sure that Nimble is capable of integrating across the entire lifecycle of the application and surrounding tools used to build, ship and run your application anywhere.

We announced a Docker Volume plug-in on the Nimble corporate blog for a wide audience. This blog post elaborates on the intersection and impact of our Docker Volume plug-in. For a comprehensive preview of the plug-in itself, please see the Tech Preview of Nimble Linux Toolkit 2.0: Docker plug-in.

Problem Statement

When a developer runs one brand of a type 2 hypervisor in a certain flavor on a laptop, it’s likely that it differs from the version running in the production environment, and it most certainly will differ from the hypervisor being used for the company's public cloud initiative. Running a copy of the same VM in all three environments is next to impossible. Still, developers are tasked with ensuring portability of an app across all of them. Configuration management tools have helped over the years to treat application infrastructure as code. They’ve been evolving to abstract diverse environments into a homogeneous entity, but there will always be a risk of some drift that creates uncertainty. Hence, efforts to make the infrastructure immutable have become critical. This is where containers are today: a binary image that interacts with known interfaces abstracting every key component of the infrastructure, enabling an app to run anywhere, whether on a laptop, on-prem or in a public cloud.

The Developers

It’s not difficult to be energized and engaged when, as a developer, you see how straightforward it is to architect and build applications to run in containers. The application is described in a readable recipe, and its dependencies on other dockerized applications may be described in a human-readable YAML file. These recipes are version controlled, peer reviewed and kept in central repositories to describe an app and its dependent services.

For example, here is a simple dockerized fileserver designed to serve data anonymously:
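A minimal sketch of such a Dockerfile, assuming Alpine's `samba` package and an `smb.conf` defining an anonymous share baked into the image (the exact recipe shown in the original figure may differ):

```dockerfile
# Illustrative sketch: minimal anonymous Samba file server on Alpine Linux.
FROM alpine:3.4

# Install the Samba server package
RUN apk add --no-cache samba

# Copy a prebaked smb.conf that defines an anonymous share (assumed to
# exist alongside this Dockerfile)
COPY smb.conf /etc/samba/smb.conf

# Expose the standard SMB/CIFS ports
EXPOSE 139 445

# Run smbd in the foreground so it remains the container's main process
CMD ["smbd", "-F", "--no-process-group"]
```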


Fig. Dockerfile used to describe a rudimentary file server

The above snippet is called a Dockerfile and is the base recipe for all Docker images. At a high level, it uses the popular mini-Linux distribution Alpine Linux, installs Samba into the image, exposes the required ports and starts the ‘smbd’ process when run. This is all that’s needed to build a Docker image that is ready to be pushed to Docker Hub or to an internal Registry and referenced by any Docker host that wants to start sharing files using that image.

Anyone who wants to run the image may inspect it with the native Docker tools to figure out how it’s used and what parameters are needed. If this application were to be deployed and used in a larger scheme, the intent may be defined more explicitly with the Docker Compose tool. Docker Compose is used to define application stacks with multiple Docker images, volumes and networks. Below is a basic example of how we would compose a simple file server based on the image above. In a microservice architecture, you would define a more elaborate architecture with databases, web servers and load balancers.
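A compose file along these lines, assuming the plug-in registers itself under the volume driver name `nimble` and that the performance profile and size are passed as driver options (the image name and the exact option keys here are assumptions; consult the Nimble Linux Toolkit documentation for the precise syntax), might look like:

```yaml
# Illustrative docker-compose.yml for the file server built above.
version: '2'

services:
  fileserver:
    image: example/samba-fileserver:latest   # image built from the Dockerfile
    ports:
      - "139:139"
      - "445:445"
    volumes:
      - fileshare:/data

volumes:
  fileshare:
    driver: nimble                 # Nimble Storage Docker Volume plug-in
    driver_opts:
      perfPolicy: "Windows File Server"   # assumed option key
      size: "100"                         # assumed option key, GiB
```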


Fig. Docker Compose file used to describe environment to run your application

Now it’s getting very interesting, as the volume specification outlines the use of a “nimble” driver with a “Windows File Server” performance profile. Normally, developers don’t have access to their own arrays to sandbox their code in. That all changes with the Nimble Virtual Array, which is available through Nimble partner and ISV enablement programs. Developers may run their own private array on their laptops or on a centralized virtual infrastructure for dev and test. It’s important that the composition of the app that runs on the developer’s laptop is carried verbatim through the software supply chain. Having different volume compositions for an application throughout the lifecycle introduces risk and uncertainty, as the compose file won’t be the same as the one the developer used to build the application. Docker Compose has the ability to layer compose files for different environments, but it’s not very practical. Portability is a key principle for the entire application stack, volumes included.

The Operations Team

Containers do not remove the basic need for compute, network and storage. However, they do abstract it in such a way that operations teams may manage, secure and improve infrastructures at their own pace while not worrying about application dependencies other than the container interface. The operations team will still be instrumental in defining and architecting robust CI/CD (Continuous Integration/Continuous Deployment) workflows. CI/CD workflows ensure that the software supply chain does not hinder innovation. It’s crucial that they are able to respond to rapid changes and scale infrastructure services according to demand changes from both consumers and producers.

Nimble Storage arrays play an integral role in the modern data center, including container-centric deployments. Defining and deploying stateful applications with strict storage requirements, whether for performance, capacity or availability, is no longer a challenge. The very same Docker Compose file defined by the developer may be deployed straight into Docker UCP (Universal Control Plane), which is part of the Docker Enterprise offering Docker Datacenter. The Nimble Storage Docker plug-in leverages the underlying Docker Swarm clustering technology and mounts the Docker Volume on demand where the container is scheduled to run.
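The plug-in can also be exercised directly from the Docker CLI. A hedged sketch, where the option names (`size`, `perfPolicy`) and the image name are assumptions based on the tech preview rather than confirmed syntax:

```shell
# Create a Docker Volume backed by a Nimble array (option names assumed
# from the Nimble Linux Toolkit 2.0 tech preview; verify against its docs)
docker volume create -d nimble \
  -o size=100 \
  -o perfPolicy="Windows File Server" \
  fileshare

# Inspect the volume to see the driver and options in effect
docker volume inspect fileshare

# Run the file server with the volume mounted at /data; in a Swarm
# cluster the plug-in mounts the volume on whichever node the
# container is scheduled to run on
docker run -d -p 139:139 -p 445:445 -v fileshare:/data example/samba-fileserver
```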


Fig. Create a new application with Docker UCP, part of Docker Datacenter

Container-as-a-Service (CaaS) is a key focus area for many enterprises aspiring to adopt an agile software supply chain, as well as for cloud-native companies that want to take control of their costs. Nimble Storage arrays are native to these environments with a comprehensive REST API and industry-leading analytics through InfoSight, which are key capabilities for any CaaS or PaaS deployment.

Public Cloud

Today, the public cloud is at the forefront of innovation. Developers may start small for pocket change and grow application and storage needs as their organizations grow. Success always comes with a price, and running all storage-centric applications on a $/GB tab might not pan out as expected. Businesses are faced with the challenge of migrating data and applications back on-prem to tackle the rampant cloud utility bill. Cost is not the only challenge. Reliability, availability and the security of business data may also play a huge part in the decision-making process.

Public cloud native developers have no issues swiping their credit card to get their projects off the ground. Before you know it, your data is at risk while the IT department is trying to make heads or tails of its own PaaS offering.


Fig. Deploying Nimble Storage with Direct Connect to Public Cloud

Most public cloud providers today offer a “Direct Connect” service, where a company’s infrastructure can be co-located in close proximity to the provider, or connectivity can be established back to the company’s own data center. The latter option may not be practical for storage-centric applications because of the extra latency. There are huge advantages to leveraging a BYOS (Bring Your Own Storage) approach that keeps storage close to your applications, for both cost savings and control.

Running containers in the public cloud on fewer, larger instances instead of many small ones has a proven track record of cost savings. The Nimble Storage Docker plug-in is easily installed onto a cloud instance and can provision Docker Volumes as if they were coming off an on-prem array or a virtual array on a developer’s laptop.

Portability of storage-centric applications is as important as the data itself within stateful containers and solutions from Nimble Storage make this portability possible.

About the Author


Data & Storage Nerd, Containers, DevOps, IT Automation