Around the Storage Block

HPE CSI Driver for Kubernetes 1.2.0 available now!

In the fast-paced and explosively growing Kubernetes community, it has become important to keep up with the release cadence. Customers and partners alike bring their applications to life on Kubernetes, and persistent storage for both modern and legacy applications is key to driving adoption of cloud-native patterns in business operations.

Recently, Hewlett Packard Enterprise (HPE) released the HPE CSI Driver for Kubernetes 1.2.0 with new features and broader partner ecosystem support. The HPE CSI Driver for Kubernetes is a multi-platform and multi-vendor CSI driver that currently supports HPE Nimble Storage and, as recently announced, HPE Primera and HPE 3PAR. The CSI driver's simple architecture allows integration of block storage systems into Kubernetes without requiring the user to have any prior knowledge of Kubernetes. More information about the fundamental architecture may be found in this blog post on HPE DEV.

Keeping up with the community

Upstream CSI features mature at different paces, and HPE keeps a close watch on capabilities that help unlock use cases for customers. Kubernetes 1.18 marked raw block volume support as generally available (GA). The HPE CSI Driver for Kubernetes 1.2.0 now includes full support for raw block volumes, along with support for the beta functionality of ephemeral inline volumes.

HPE CSI Driver for Kubernetes. New features highlighted!


For the latest and most up to date information, make sure to visit the HPE storage container orchestrator documentation (SCOD) portal.

Raw block volumes

Prior to the introduction of raw block volumes in Kubernetes, every Persistent Volume Claim assumed a filesystem mounted inside the Pod, which in turn assumes a traditional POSIX interface for file IO. This is still the default behavior. With the raw block volume capability, an end-user may request storage with the attribute "volumeMode" set to "Block" in the Persistent Volume Claim. Instead of a filesystem, the Pod is presented with a raw block device, which in turn yields better performance for applications that can take advantage of it.
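As a sketch, a raw block request might look like the following. The StorageClass name `hpe-standard` and the device path are assumptions for illustration:

```yaml
# Hypothetical example: the StorageClass name "hpe-standard" is assumed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # expose a raw block device instead of a filesystem
  resources:
    requests:
      storage: 32Gi
  storageClassName: hpe-standard
---
# The Pod references the claim with volumeDevices (devicePath) rather than
# volumeMounts (mountPath), so the container sees /dev/xvda as a block device.
apiVersion: v1
kind: Pod
metadata:
  name: my-block-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-block-pvc
```

Note that the Pod consumes the claim through `volumeDevices` instead of `volumeMounts`; no filesystem is created or mounted by Kubernetes.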

Applications that are capable of maintaining consistency at the application layer may share the same raw block volume across multiple Pods in what Kubernetes terms ReadWriteMany, or RWX. This provides the means to build and run traditional HA applications on Kubernetes where device IO is coordinated. While raw block volumes are a fairly new concept in Kubernetes, we may see an uptick in adoption for databases capable of addressing raw block devices, such as Oracle, DB2, and MySQL/MariaDB.
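Sharing a single raw block device between Pods combines `volumeMode: Block` with the RWX access mode in the claim. A minimal sketch, again assuming an illustrative StorageClass name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-block-pvc
spec:
  accessModes:
    - ReadWriteMany        # RWX: multiple Pods may attach the same device
  volumeMode: Block        # IO coordination is the application's responsibility
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-standard   # assumed StorageClass name
```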

Ephemeral inline volumes

The concept of a "scratch disk" is nothing new. Inside a container, everything is essentially a scratch disk. An ephemeral inline volume allows users to create a temporary volume with predetermined characteristics that gets deleted when the Pod that requested the storage goes away. The "inline" part means that the storage resource is declared with the Pod itself and does not follow the traditional StorageClass, Persistent Volume Claim, and Persistent Volume paradigm.
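An inline declaration places a `csi:` stanza directly in the Pod's volumes list, with no StorageClass, Persistent Volume Claim, or Persistent Volume objects involved. A hedged sketch follows; the `volumeAttributes` shown (such as `size` and `accessProtocol`) are assumptions for illustration, so consult SCOD for the exact attributes the driver expects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-job
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:                      # inline: declared with the Pod itself
        driver: csi.hpe.com
        fsType: ext4
        volumeAttributes:       # attribute names are illustrative; see SCOD
          size: "16Gi"
          accessProtocol: "iscsi"
```

The volume is created when the Pod is scheduled and deleted when the Pod goes away.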

The need for this functionality is multi-faceted. Traditional compute jobs that run inside containers have no boundaries or constraints on how much space they may consume; scratch space is traditionally shared with the host and the other workloads running on that particular node. Declaring an ephemeral inline volume for the job not only allocates the storage outside the node itself, as in the case of using the HPE CSI Driver for Kubernetes, but also establishes a volume boundary that affects only that one container if it's oversubscribed.

Another use case is when a Pod requires a pre-determined dataset, populated from a different volume, to run its compute job. This is where the unique differentiation of the Container Storage Providers (CSPs) supported by the HPE CSI Driver for Kubernetes steps in, as the parameters supported in the platform StorageClass may be called upon in the declaration. For example, a user may clone an existing Persistent Volume and cap it at 10,000 IOPS with the HPE Nimble Storage CSP. A pre-determined dataset with pre-determined boundaries not only enables capacity planning, but also runs compute jobs very efficiently, as an HPE Nimble Storage array is able to handle hundreds of clones for a moderately sized Kubernetes cluster.
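As a hedged sketch of that example, CSP-specific parameters could be supplied in the inline declaration. The parameter names `cloneOf` and `limitIops`, the source volume name, and the rest of the spec are assumptions drawn from the HPE Nimble Storage StorageClass parameters; verify the exact names against SCOD:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: analytics-job
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: dataset
          mountPath: /dataset
  volumes:
    - name: dataset
      csi:
        driver: csi.hpe.com
        fsType: xfs
        volumeAttributes:                # attribute names are illustrative
          cloneOf: "source-dataset-vol"  # clone an existing volume (assumed name)
          limitIops: "10000"             # cap the clone at 10,000 IOPS
```

When the job finishes and the Pod is deleted, the clone is deleted with it, so hundreds of such short-lived clones can come and go without manual cleanup.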

For practical use of raw block volumes and ephemeral inline volumes, see Using Raw Block and Ephemeral Inline Volumes on Kubernetes on the HPE Developer Community (HPE DEV).

Expanded ecosystem support

The full support matrix for the release of the HPE CSI Driver for Kubernetes is hosted on SCOD. Highlights for the release include support for Kubernetes 1.18 and the HPE CSI Operator for Kubernetes, which is now certified for Red Hat OpenShift.

HPE CSI Driver for Kubernetes Support Matrix

As HPE continues to bring innovative integrations to Kubernetes, we can expect the footprint to increase over time. There is huge interest in our customer and channel partner community in supporting a broader set of technology partners and platforms, and in coupling more HPE products and services together on Kubernetes, with applications and persistent storage as the focal point.

Get started

The HPE CSI Driver for Kubernetes 1.2.0 is available immediately for HPE Nimble Storage, HPE Primera, and HPE 3PAR. It's installable either with Helm or as an Operator. Release notes are available on GitHub, and as always, keep SCOD bookmarked for the latest updates on everything related to HPE storage for Kubernetes.

As always, thanks for following my blog articles. If you have questions, you can leave comments below, and I promise to get back to you. Stay tuned for the next installment in this series, “Tech Preview: HPE CSI Driver for Kubernetes NFS Server Provisioner”.

About the Author


Data & Storage Nerd, Containers, DevOps, IT Automation