Start Your Secure Multitenant Containers Journey
As interest in containers and Kubernetes grows, so do the misconceptions around multitenancy. Containers can be secure if you know how to deploy them correctly; the challenge is that a lack of knowledge can result in insecure container deployments.
Multitenancy is used to keep activities, and the applications that support them, apart. For example, different parts of a business may share a compute resource. To avoid errors, keep data secure, and protect applications, multitenancy controls are commonly used in large enterprises with multiple business units. Service providers likewise use multitenancy to keep their customers' workloads and data apart. This practice is a fundamental part of enterprise security.
The importance of separation to improve security
Since the early days of cloud, cloud security has come a long way. The public cloud vendors are motivated by the nature of their business to set the gold standard in multitenancy. Indeed, if security and multitenancy controls fail spectacularly in a public cloud service, these vendors might well be out of business. Their gold standard works well for applications that are VM-sized, do not require dedicated hosts, will readily scale out, and support multiple users on one server.
Just as public clouds need multitenancy, so do enterprises. Separating major systems and functions will help prevent accidental exposure or deliberate attacks compromising data and applications. Separation may be between business units, major application clusters, and geographies, as well as across functions from development and testing to production. Unfortunately, separation will also limit the sharing of data across an enterprise; hence, part of the enterprise multitenancy story is the need to enable trusted data sharing with a common data platform.
Containers open up a new set of issues because they can result in more microservices being deployed, along with new control planes and new middleware stacks. Secure container multitenancy brings a new challenge, with both similarities to cloud multitenancy and key differences from it. The public cloud vendors strive to build a gold standard for containers, too. However, as I will show later in this blog, this comes at a cost: enterprise customers see a multitenancy master controller hosted in the public cloud as an undesirable competitive control point.
Multitenancy for containers should be part of the overall secure system design, and it relies on some concepts from multitenancy in the cloud. Above all, multitenancy depends on the system security working properly to deliver secure containers, storage, and networking for the enterprise. If system security fails, then multitenancy will be compromised, as they are tightly interconnected.
Multitenancy – more than just privacy
Beyond the simple requirement for privacy, a wide range of requirements is placed on multitenancy design. An extensive risk/threat analysis shows there is no one right answer to container multitenancy, because risk and remediation depend on the enterprise use case. Suggesting that multitenancy is simply about Kubernetes namespaces, cgroups, and resource allocations is dangerously naive. That assumption omits both the wider system issues and the finer controls available in Kubernetes, which builds on historically good practices for containers and improves on them. For example, a secure identity framework is needed to work with Kubernetes role-based access control, and there are economic reasons to use QoS controls such as resource quotas.
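As a minimal sketch of the kind of declarative controls just mentioned, the snippet below builds three Kubernetes manifests for a single tenant: a namespace, a ResourceQuota (the QoS/economic control), and an RBAC RoleBinding. The manifests are expressed as Python dictionaries purely for illustration (they correspond to the YAML you would apply with kubectl), and the tenant and group names are hypothetical examples, not part of any HPE product.

```python
# Illustrative only: per-tenant Kubernetes isolation expressed as
# Python dicts equivalent to YAML manifests. "team-a" is hypothetical.

def tenant_namespace(tenant: str) -> dict:
    """Namespace giving the tenant its own scope for names and policy."""
    return {"apiVersion": "v1", "kind": "Namespace",
            "metadata": {"name": tenant}}

def tenant_quota(tenant: str, cpu: str, memory: str) -> dict:
    """ResourceQuota: caps the tenant's aggregate resource requests."""
    return {"apiVersion": "v1", "kind": "ResourceQuota",
            "metadata": {"name": f"{tenant}-quota", "namespace": tenant},
            "spec": {"hard": {"requests.cpu": cpu,
                              "requests.memory": memory}}}

def tenant_rolebinding(tenant: str, group: str) -> dict:
    """RoleBinding granting a group edit rights in one namespace only."""
    return {"apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{tenant}-edit", "namespace": tenant},
            "subjects": [{"kind": "Group", "name": group,
                          "apiGroup": "rbac.authorization.k8s.io"}],
            "roleRef": {"kind": "ClusterRole", "name": "edit",
                        "apiGroup": "rbac.authorization.k8s.io"}}

manifests = [tenant_namespace("team-a"),
             tenant_quota("team-a", cpu="8", memory="32Gi"),
             tenant_rolebinding("team-a", group="team-a-developers")]
```

Note how every control is scoped to the tenant's namespace: the identity (group) is bound to rights only inside it, which is exactly why an identity framework and RBAC have to work together.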
At one extreme, enterprise use cases demand hard multitenancy with physical separation or air gaps; at the other extreme, they need only soft multitenancy, often seen in development and education, implemented with the minimum of constraints to permit maximum flexibility. The majority of deployments fall between the two extremes, which is subtly different from cloud, where a more generic approach to multitenancy is maintained.
One misunderstanding exposed by enterprise use cases is the suggestion that containers must always be placed in a virtual machine to be secure. This is misleading. Yes, in some enterprise use cases (very small systems, or where exacting isolation is required), it is good practice. But in general, the virtual machine is not required. Containers do carry the risk of kernel escapes and panics, but virtual machines likewise carry the risk of escapes and hypervisor failure.
Simple observation suggests that with fewer moving parts, the threat surface for containers is smaller. While NIST’s Application Container Security Guide highlights the risk of a shared kernel, David Lawrence, the head of security at Docker, suggests, “…containers done right are much more secure than VMs”. Looking at multiple enterprise use cases, these seemingly opposing views illustrate the need to define the levels of risk and trust for each enterprise use case. For many, fewer moving parts and the chance to avoid a per-container VM tax are a big win.
The role of Kubernetes in the multitenancy process
While containers have existed for over 20 years, the recent emergence of Kubernetes as the dominant control plane for clusters of containers is now accelerating adoption. Kubernetes brings its own set of challenges: multitenancy must be correctly declared rather than implemented as a set of procedures, and specifying the desired state requires knowledge of Kubernetes. While some concepts can be carried over from virtual machine-based multitenancy, understanding how multitenancy works for containers needs a working understanding of the Kubernetes API. Kubernetes multitenancy design is specific to Kubernetes.
The new thinking can be illustrated by looking at VM and container application dependencies. With VMs, OS patches can still be deployed underneath a running application. With containers sharing a host OS kernel, this is not possible; remediation of the container image is delegated back to the container developer for attention.
The richness of the Kubernetes API allows multi-strength multitenancy container clusters, where different isolation requirements are met by physical or virtual means. While physical and virtual isolation can also be achieved with virtual machines, current practice in public cloud trends toward homogeneity, which leads to the pragmatic constraints seen in T-shirt sizing. In hybrid cloud, multi-strength multitenancy means there is even less reason to move masses of data to centralized cloud-based applications when container-based applications can instead go to the data, running in a local edge physical cluster. A centralized control point, a multi-cluster controller of multiple Kubernetes clusters, enables the distribution of applications across edge, core, and cloud. That same controller can manage multiple strengths of multitenancy and security across an enterprise.
With a centralized control point/controller of multiple Kubernetes clusters, it is important to step back and look at the system-wide security issues. If system security fails, the multitenancy may also fail, so how can excellence be achieved? The answer requires additional components.
Actively reducing threats
In an enterprise, the following will all reduce threats to an environment: a multi-cluster controller, an identity framework to identify and secure workloads, a secure enough repository for credentials, registry management, a common data platform, and trustable compute and storage nodes with connectivity. Enterprises may already have existing capabilities for the first two that can be reused; other capabilities may be new or require special process actions to create trust.
- Multi-Kubernetes cluster controller/control plane:
An example is the HPE Container Platform. This multitenant solution can manage multiple Kubernetes clusters in enterprise-scale container deployments across public cloud and private cloud/on-premises Kubernetes instances, a concept with parallels to public cloud availability zones. The HPE Container Platform can also work with a variety of Kubernetes releases, which will be vital to managing new and older developments in CI/CD. Public cloud providers also have multi-cluster controller capabilities, but these can put control of an enterprise’s container management outside the enterprise. Public cloud may also propagate data lock-in.
- Identity framework for workloads:
Secure Production Identity Framework For Everyone (SPIFFE) provides and validates the workload identities essential to system security; SPIFFE defines how to establish trust between workloads in a distributed software system at scale. The SPIFFE runtime environment exposes an API through which a workload asks, in effect, “Who am I?” After checks to attest the workload, the runtime returns the workload’s identity and provides keys it can use to prove itself to others.
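The identity a workload receives back is a SPIFFE ID, a URI of the form `spiffe://<trust-domain>/<workload-path>`. As a minimal sketch (a format check only, not the SPIFFE runtime or its attestation logic), a consumer of such identities might validate their shape like this:

```python
# Illustrative sketch: validating the shape of a SPIFFE ID, the URI a
# workload receives from the Workload API after attestation. This checks
# format only; real trust comes from the attested delivery of the ID.
from urllib.parse import urlparse

def is_valid_spiffe_id(spiffe_id: str) -> bool:
    """A SPIFFE ID looks like spiffe://<trust-domain>/<workload-path>."""
    parsed = urlparse(spiffe_id)
    return (parsed.scheme == "spiffe"
            and bool(parsed.netloc)          # trust domain must be present
            and parsed.path.startswith("/")
            and len(parsed.path) > 1)        # workload path must be non-empty
```

For example, `is_valid_spiffe_id("spiffe://example.org/payments/api")` holds, while a bare trust domain with no workload path does not.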
- Secure credentials repository:
An example is HashiCorp Vault, a secure credentials repository used to provide current secrets throughout the end-to-end solution. While holding them all in one place might sound risky, this is far better than multiple mechanisms scattered throughout a system, which are difficult to maintain and risk exposing secret information by error. Centralizing the information also means that credentials can be rotated regularly and revoked quickly and effectively.
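The operational argument for centralization can be sketched in a few lines. The toy store below is not Vault’s actual API; it only illustrates why a single repository makes rotation and revocation one-line operations instead of a system-wide hunt through scattered config files.

```python
# Illustrative sketch (NOT HashiCorp Vault's API): a toy centralized
# secrets store. Because every consumer reads through one place,
# rotation and revocation become single, auditable operations.
import secrets

class SecretsStore:
    def __init__(self):
        self._store = {}

    def issue(self, name: str) -> str:
        """Create (or replace) a credential and return its current value."""
        token = secrets.token_hex(16)
        self._store[name] = token
        return token

    def read(self, name: str) -> str:
        if name not in self._store:
            raise KeyError(f"secret {name!r} revoked or never issued")
        return self._store[name]

    def rotate(self, name: str) -> str:
        """One call changes the credential everywhere consumers read it."""
        return self.issue(name)

    def revoke(self, name: str) -> None:
        """After this, every read fails fast instead of using stale secrets."""
        self._store.pop(name, None)
```

With secrets embedded in many separate mechanisms, the equivalent of `rotate` or `revoke` would require finding and updating every copy, which is exactly the maintenance risk described above.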
- Registry management:
Importing or using a corrupt container image is dangerous. The design must consider which registries are trusted, how images are scanned, how images are updated, and what can be placed in a local repository.
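A first line of defense is an admission check that refuses image references from untrusted registries before import. The sketch below is a simplified policy check, not a product feature; the registry names are hypothetical examples, and a real deployment would pair this with image scanning and signature verification.

```python
# Illustrative sketch: enforce a trusted-registry policy on image
# references before import. Registry names here are hypothetical.

TRUSTED_REGISTRIES = {"registry.internal.example.com", "quay.io"}

def registry_of(image_ref: str) -> str:
    """Return the registry host of an image reference. A leading path
    component counts as a registry only when it contains a dot or a
    colon (or is 'localhost'); bare names default to Docker Hub."""
    if "/" not in image_ref:
        return "docker.io"
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"

def is_trusted(image_ref: str) -> bool:
    return registry_of(image_ref) in TRUSTED_REGISTRIES
```

Under this policy, `quay.io/org/app:1.2` would be admitted, while a bare `nginx:latest` (implicitly Docker Hub) would be rejected until Docker Hub is explicitly added to the trusted set.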
- Common data platform:
An example is the HPE Data Fabric (previously known as MapR Data Platform). Quality data access across an enterprise is key to both agility and velocity, but enterprise data is typically held in different forms and places. Making enterprise data readily available to all enterprise applications also permits the establishment of consistent data access control, which is critical to security and multitenancy.
- Secure compute and storage:
Besides trust in the software environment, the infrastructure in the nodes must be secure. The familiar advice to minimize the size and harden the OS (kernel) remains critical. A compromised node can fundamentally break system security and multitenancy. Therefore, IT infrastructure and operations must ensure a safe and reliable foundation including the ability to attest the compute node infrastructure. Key foundations for this lie in Trusted Platform Modules (TPMs), which can be used to verify the integrity of a platform. Examples of this capability include HPE iLO, silicon root of trust, and AWS server attestation.
- Trustable connectivity:
Perimeter security is no longer sufficient to protect applications and data from attackers and misuse. As a result, zero trust networks have developed over the last 10 years, in which every host must assume it is on the public internet and secure all connectivity appropriately. Operating with zero trust means mutual Transport Layer Security (mTLS), which implies strong authentication of both parties and encryption of all traffic for every network connection. To minimize the risk of error, this requires a control plane; in Kubernetes, for example, a service mesh such as Istio fills that role. (Perimeter security, while flawed, still has a role in minimizing external traffic into a Kubernetes cluster, reducing the risk of attack and denial of service.)
Getting help from the experts
With many moving parts (especially when container multitenancy relies on system security), the design challenge is significant. Using external advisory services (such as HPE Pointnext Services) is advisable during fundamental design. Small teams will struggle with the complexity of the system design and the different thinking required by zero trust and declarative approaches, leading to risk to the enterprise. A small cost at the beginning of the process will undoubtedly accelerate success. Equally, the deployment of a multi-cluster controller in an enterprise may need support, especially with managing container and Kubernetes sprawl.
Multitenant and secure container deployments are happening, but they need to be designed correctly for each enterprise use case. Keep in mind that one design for all deployments is not possible: overdesign costs too much, and underdesign introduces risk. The enterprise design must also consider all parts of the system and may draw in components beyond a Kubernetes control plane, such as components that create trustable connectivity.
Today, while the component pieces exist, secure multitenant system design is challenging. Using good separation principles, users can safely share a common platform, but this requires care in both design and operations.
For more information, visit the HPE Container Platform web page. You can also learn about best practices in a new report on enterprise container adoption, challenges, and opportunities: Expert advice on containerization in enterprise IT.
Colin I’Anson is a highly experienced researcher in big system design. Founded on understanding enterprise architecture principles, system security and creating end-to-end system design, his expertise ensures HPE solutions deliver valuable customer outcomes, integrate partners, and align product and service capabilities. Central to this goal is an understanding of applications and data in the main industry verticals.