HPE Ezmeral: Uncut

Why containers for deep learning?


With every digital transformation comes an opportunity to start fresh and adopt new technology. In the case of deep learning (advanced AI) projects, organizations are presented with a familiar choice: continue investing in stale technology or pay the up-front cost of learning something new. In this article, I propose three reasons to adopt containers for deep learning projects and discuss the benefits in detail.

What are the benefits of containers, in the context of AI?

Clearly, containers are here to stay. As discussed previously in this blog, organizations are now more concerned with adoption at scale than with exploration when it comes to container technology. But are containers a good fit for AI projects? And when exploring a new technology like deep learning, is it also a good time to learn about containers? I believe the answer is yes. Let me explain.

Fundamentally, containers provide three high-level benefits to application developers. These benefits have been discussed on many blogs, but they’re worth repeating:

  1. Simplicity – containerized applications are portable and can run on almost any infrastructure.
  2. Scalability – containerized applications can scale up and down easily and change over time.
  3. Performance – containerized applications can be distributed and use infrastructure efficiently.
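To make the portability point concrete, a deep learning environment can be captured in a short Dockerfile. This is a hypothetical sketch, not taken from a specific project: the base image tag, `train.py`, and `requirements.txt` are illustrative stand-ins.

```dockerfile
# Start from a publicly available deep learning base image (tag is illustrative)
FROM pytorch/pytorch:latest

# Copy project dependencies and training code into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY train.py .

# The same image now runs unchanged on a laptop, an on-prem GPU node, or in the cloud
CMD ["python", "train.py"]
```

Once built (for example with `docker build -t dl-train .`), the identical image can be scheduled on any host with a container runtime, which is the "runs on almost any infrastructure" benefit in practice.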

In a traditional environment, such as when building a new mobile application, these three benefits can be read in priority order: developers want to build new applications quickly, scale capacity up and down, and keep everything stable and efficient. However, when we think about a new deep learning project, the priorities change.

Effectively, I would propose that data scientists, analysts, and engineers have a different view when working on a new deep learning project. In order of priority, data teams care about:

  1. Performance – getting the most out of an infrastructure investment, promoting creativity
  2. Evolvability – testing and adopting new models, datasets, and frameworks over time
  3. Simplicity – making sure systems aren’t idle and users can collaborate effectively

While the result is the same (a big thumbs up for container adoption), the reasoning is slightly different. And of course, the underlying features of a containers-for-AI product should look quite different from those of a containers-for-mobile-app-development product. Dejay Noy discusses the topic further here, if you would like to read more. For the scope of this blog, I will move on to a new question.

Are containers worth the effort?

My proposed answer to the question, “Are containers worth the effort?” is probably obvious – YES. Let’s discuss this in detail.

It’s important to adopt containers for the right reasons and to know what those reasons are. As proposed above, containers are worth the effort if they improve the performance, evolvability, and simplicity of the development environment. (Note: adopting containers as a fashion statement is not a success pattern we have seen in deep learning projects.) So how do containers do this?

First: Performance

New technology environments are being built in organizations around the world to solve complex technical challenges. They use many compute nodes to work on a relatively small number of problems, distributing the processing and data across those nodes in parallel. They typically use a high-speed interconnect to pass data between nodes and attempt to make efficient use of hardware (usually running a Linux operating system). So, what’s the workload? Are we talking about machine learning? HPC modelling and simulation? Big data analytics? The answer is yes.

Clearly, all three classes of workloads (and the users that run them) are very sensitive to performance – specifically performance-per-dollar, as we often see it characterized. By getting the most performance out of their technology investment, users can improve creativity and collaboration and reduce wasteful idle time. Containers can help with that.

Second: Evolvability

Many people characterize this feature of containers as scalability, since it implies that containers can quickly be deployed on more resources to increase or decrease capacity on demand. This is a great feature (and many developers care about it), but it’s not the whole story. When thinking about a deep learning project, the ability to adopt new models, datasets, and frameworks, and then test their effectiveness, is key. Importantly, the same feature of containerization is being exercised – the ability to deploy quickly. But if there is a hard resource constraint in the environment, it’s not about scaling the same service up and down on demand; it’s about scaling an old service down and a new service up. Containers can help with that.
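This "old service down, new service up" pattern maps naturally onto a container orchestrator such as Kubernetes. The sketch below is hypothetical: the deployment names, image registry, and replica counts are illustrative, and it assumes GPUs are exposed to the scheduler via the standard `nvidia.com/gpu` resource.

```yaml
# Hypothetical manifest: bring up a new experiment (say, a new framework or model)
# on the GPUs released by the previous one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: experiment-v2          # the new experiment under test (name is illustrative)
spec:
  replicas: 4                  # takes over the capacity freed by experiment-v1
  selector:
    matchLabels:
      app: experiment-v2
  template:
    metadata:
      labels:
        app: experiment-v2
    spec:
      containers:
      - name: trainer
        image: registry.example.com/dl/experiment:v2   # illustrative image
        resources:
          limits:
            nvidia.com/gpu: 1  # one GPU per replica
```

The outgoing experiment is then retired with something like `kubectl scale deployment experiment-v1 --replicas=0`: the same fast-deployment machinery that scales one service up and down is reused to swap one workload for another under a fixed resource budget.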

Third: Simplicity

Echoes of the last point are included here since simplicity is all about making it easier for users to interact with infrastructure. Helping users move quickly, avoid headaches, and reduce idle time can not only make the use of infrastructure more efficient, but actually make users more creative. And in turn, it can make the project team as a whole more productive. Containers can help with that.

Conclusion: How to get started?

As a closing thought: many organizations end up looking at the containers-for-AI choice from a purely financial perspective, effectively treating the new vs. old technology decision discussed above as a CapEx vs. OpEx decision. At HPE, we see the reality quite differently.

Case in point: we recently announced HPE GreenLake for MLOps – an enterprise-grade cloud service for machine learning that is fully managed for you on premises. Since the solution is consumed on a pay-as-you-go basis, it helps alleviate concerns around up-front cost, while still providing long-term operational benefits. And since it is delivered on premises, organizations are able to reuse existing data lake investments, avoiding costly (and risky!) migrations.

Overall, HPE is excited to help customers adopt the necessary technology to make deep learning projects successful – from application containers to pay-as-you-go financial models and beyond. The journey to industrialize ML projects is just beginning!

Interested in exploring more about how containers can help with your digital transformation? Reach out to me by leaving a comment below, or take a test drive of the HPE Ezmeral Container Platform here: http://www.hpe.com/demos/ezmeral



Jordan Nanos

For the last 5 years, Jordan has been helping HPE customers design and implement systems to support their most complex workloads – from IoT & Big Data to HPC & AI. Currently, as a Sales Engineer for HPE Ezmeral software, he helps customers across Canada get the most out of their investments in specialized technology. Jordan is most excited about the opportunity to help users adopt Deep Learning across their organization and unlock the power of their data assets.



