Containers and coded infrastructure: Friends or foes?


By: Chris Riley.

Just when you put the ribbon on top of your latest software delivery optimization, a new tool or process comes along that can either do it better or replace it altogether. The latest such tool is containerization. Container technology seeks to streamline many aspects of the software delivery chain: infrastructure, code packaging, and the provisioning of environments of all types. It seems to fulfill the dream of many developers in which infrastructure, and IT along with it, gets out of the way. But do containers actually displace existing infrastructure automation tools?

Coded infrastructure is a mature practice, and the scripting tools used to do it are well known. The power of coded infrastructure is undeniable. It not only saves time but also builds in consistency, supports change management initiatives, and makes it easier to know what you have out in the wild. However, in most organizations, developers don't own or control the scripts used to provision infrastructure, and often they don't even have the ability to utilize them.

In other organizations, developers do have access to orchestration tools: they can grab scripts and provision their own virtual machines (VMs) based on them, but the process is too slow and the VMs too heavy. With the rise of container technologies like Docker and CoreOS, a new option has made this even easier. By simply pulling an image from a public or private registry and modifying it, a developer can have a perfect development environment provisioned in minutes. Images can be used, thrown away, and used again.
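The pull-and-modify workflow described above can be sketched as a minimal Dockerfile. The base image, package, and file names here are illustrative assumptions, not taken from the article:

```dockerfile
# Start from an image pulled from a public registry (illustrative tag)
FROM python:3.12-slim

# Layer a project-specific change on top of the base image
RUN pip install --no-cache-dir flask

# Add the application code as the final layer
COPY . /app
WORKDIR /app

CMD ["python", "app.py"]
```

Building and running this takes minutes (`docker build -t myapp .`, then `docker run myapp`), and the resulting image can be thrown away and rebuilt identically from the same file.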

New challenges

One might think that containers would replace VMs and infrastructure scripting, but this isn't the case. The combination of coded infrastructure and containers is a powerful one. Combining the two can enhance the communication between dev and IT and solve some existing challenges, but there are also challenges that containers introduce, especially when they lack good orchestration principles:

  1. Visibility: Two elements of container technology make it hard to know what's going on within individual images, and even more so in the larger population. First, they're so easy to provision that the sheer frequency fosters human error, and human error introduces random variables at a rapid pace. Second, while it's easy to manage configurations once images are provisioned, images often start from the public hub and are adapted many times over, layer by layer. The result is that a small change on a developer's local machine, such as testing a beta framework, can inadvertently make it into production images, leaving teams unaware of what's out in the wild and exposed to unknown risks.
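One way to claw back some of that layer-by-layer visibility is Docker's own inspection commands. This is an illustrative CLI sketch (it assumes a local Docker daemon and an image named `myapp:latest`):

```shell
# List an image's layers and the commands that created them,
# oldest at the bottom -- useful for spotting an unexpected change
docker history myapp:latest

# Show the image's full metadata (env vars, exposed ports, labels)
docker image inspect myapp:latest

# Compare a running container's filesystem against its image
docker diff <container-id>
```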

  2. Smaller footprints: Container technology today is best suited for application front ends rather than back ends, which means the larger components of an application still need to live on a VM. As a result, provisioning those VMs is the slowest part of application releases. Container technology will likely become more robust for back ends as well, but for now infrastructure as code is still the best way to provision them.

  3. Host machines: Container technology has to run somewhere, typically on host machines, which can be VMs or even bare metal. This can seem trivial to developers because the host machine is their local machine. However, in production and the environments leading up to it, how host machines are provisioned and connected is critical to the success of the containers running on them. For containers to expand to production use cases, being smart about the orchestration of host machines is essential.
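Treating the host itself as coded infrastructure can be as simple as a versioned bootstrap script. A minimal sketch, assuming a systemd-based Linux host (the convenience-script install shown here is real, though pinning an engine version through your distribution's packages is more reproducible):

```shell
#!/bin/sh
# Host-bootstrap sketch: provision a container host from a script
# kept under version control, like any other coded infrastructure.
set -eu

# Install the Docker engine
curl -fsSL https://get.docker.com | sh

# Ensure the daemon starts with the host
systemctl enable --now docker
```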

  4. Consistency: Similar to visibility, the consistency of what's on the containers is limited today and is often a guessing game. With scripting, you can at least build consistency in the host machines. This can include some components of the application and activities like logging. It's also possible to use infrastructure as code to do the actual provisioning of the base container images, which makes them as consistent as any other scripted machine.
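Using infrastructure as code to provision the base container images, as the last point suggests, can start as a versioned build script checked in next to its Dockerfile. A minimal sketch; the registry, image name, and tag are assumptions:

```shell
#!/bin/sh
set -eu

# Pin the base image to an explicit version so every environment
# provisions from the same known layer stack.
TAG="registry.example.com/base/app-runtime:1.4.2"

docker build -t "$TAG" -f base.Dockerfile .
docker push "$TAG"
```

Kept in source control alongside the other provisioning scripts, this makes a base image as auditable and repeatable as any other scripted machine.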

The best of both worlds

It's not likely that infrastructure as code will be used exclusively to create the images themselves, as that defeats some of the speed and simplicity that make containers valuable. Image-level consistency will more likely be addressed with other common approaches, such as component monitoring solutions and better private registries.

For now, infrastructure as code is a necessary part of building consistency and visibility into container technology, which today is very good at moving forward but not so good at creating a sustainable environment.

It's infrastructure as code's ability to impose consistency with versioned scripts and allow teams to know what's on images before they're provisioned that complements containerization and addresses all the challenges above. This is arguably the only way to make container technology a production-ready solution.

Related content:

The 2017 state of containers: A planning checksheet for enterprise IT

