The Cloud Experience Everywhere

The ML path from Day Zero to Production – and how HPE services can help

What’s involved in taking a machine learning project from starting point to value delivery, under the ML practitioner’s lens? Here’s a quick overview.

By Raffaele Tarantino, GTM Lead for AI & Data practice, HPE GreenLake, and

Christian Temporale, Senior Architect, HPE AI and Data Transformation Services

Organizations go through different phases during the machine learning (ML) development lifecycle, which aims to generate insights (value) from the available data; that data is central to all of these activities. Two macro development cycles can be identified: Experiment and Production.

When the focus is on Experiment, teams spend their effort on data exploration, data preparation, model building, model training, model evaluation, piloting, and model optimization (e.g. fine-tuning hyperparameters).
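To make the optimization step concrete, here is a minimal sketch of hyperparameter fine-tuning using scikit-learn and a synthetic dataset; the library, model and parameter grid are illustrative assumptions, not a prescribed toolchain.

# Experiment cycle: fine-tune hyperparameters on prepared data (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data standing in for the prepared feature set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Candidate hyperparameters explored during model optimization.
param_grid = {"n_estimators": [100, 200], "max_depth": [5, 10, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("Best hyperparameters:", search.best_params_)
print("Held-out accuracy:", search.best_estimator_.score(X_test, y_test))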

When the focus shifts to Production, the same models trained in the Experiment cycle are packaged and deployed into the production systems for serving and handling inference requests. The models’ performance needs to be monitored, and where it shows degradation, the models may need an update.
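In its simplest form, “packaged and deployed for serving” can look like the sketch below, which loads a previously saved model artifact and exposes it behind an HTTP inference endpoint; Flask and the file name model.joblib are illustrative assumptions, not a specific HPE or Kubeflow serving stack.

# Production cycle: serve a packaged model behind an HTTP endpoint (illustrative).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # artifact produced during the Experiment cycle

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]      # e.g. [[0.1, 0.2, ...]]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)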

Depending on performance and business expectations, the use case may undergo a complete re-iteration of the Experiment cycle, e.g. by introducing new ML algorithms or leveraging additional data sources.

Finally, you’ll want to make ethical use of AI, support trustworthy AI, and adopt end-to-end secure designs, from the very first bit of data generated to application access. Understanding the sources of bias is the starting point for moving in this direction.

MLOps: an end-to-end lifecycle

As this process requires contributions from multiple teams, it is essential that people in different roles collaborate in a disciplined manner, using the right tools. It’s equally important that the various ML components integrate seamlessly into the MLOps platform.

The first step, as part of the Experiment cycle, is ingesting data, ensuring the right quality and integrity, and providing secure access to it. Every model development effort involves a considerable investment of time in data preparation. Once the appropriate ML techniques are chosen for the specific use case, different ML models are built by leveraging an ever-changing ecosystem of tools spanning open source projects and selected ISVs.
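A minimal sketch of ingestion-time quality and integrity checks is shown below, assuming pandas and placeholder file and column names.

# Experiment cycle: basic data-quality and integrity checks at ingestion (illustrative).
import pandas as pd

df = pd.read_csv("raw_events.csv")              # ingest from the source system

# Integrity checks before the data enters model development.
assert not df.duplicated().any(), "duplicate rows detected"
missing_ratio = df.isna().mean()                # share of missing values per column
print(missing_ratio[missing_ratio > 0])

# Simple preparation: drop incomplete rows and enforce expected types.
clean = df.dropna().astype({"customer_id": "int64"})
clean.to_parquet("prepared_events.parquet")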

Next comes ML model training, with the relevant tuning and optimization, making the most of distributed, scalable computing resources. Model evaluation is key to assessing models against agreed performance metrics (e.g. accuracy) and business goals; the top-performing models become candidates for Production.
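The evaluation step might look like the minimal sketch below, where candidate models are scored by cross-validation against agreed metrics (accuracy and F1 here, purely as examples) to select a Production candidate.

# Experiment cycle: compare candidate models across agreed metrics (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1"])
    print(name,
          "accuracy=%.3f" % scores["test_accuracy"].mean(),
          "f1=%.3f" % scores["test_f1"].mean())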

In the Production cycle, pipelines are leveraged to distribute packages across integrated platforms. The objective here is to make the solution available to the business, testing its serving capabilities and seamlessly maintaining a variety of models and versions. Different deployment models are possible, from operated cloud solutions to edge AI; the goal is to serve optimized models in end-user environments, monitor for drift, and track performance changes in general. When anomalies appear, fast detection and update cycles give a competitive advantage to organizations that act quickly on their data.
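Drift monitoring can start as simply as comparing a live feature distribution against the training baseline. The sketch below uses a two-sample Kolmogorov-Smirnov test; the data, feature and significance threshold are illustrative assumptions.

# Production cycle: detect feature drift against the training baseline (illustrative).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # training baseline
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)       # recent inference traffic

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); consider retraining or updating the model.")
else:
    print("No significant drift detected.")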

Kubeflow – a widely adopted open source project

To manage all the steps of the model development lifecycle in a systematic manner, an MLOps framework is recommended, or even required.

At the moment, Kubeflow is the de facto standard for running ML workflows on Kubernetes. In addition, it’s the most popular open source framework, providing MLOps capabilities and leveraging an ecosystem of open source tools to manage all the steps of the model development lifecycle.

Notably, Kubeflow allows users to build an integrated end-to-end pipeline connecting all the functional components of the MLOps process. Kubeflow pipelines are portable and can run on heterogeneously-sized Kubernetes clusters: therefore, pipelines can be developed locally and migrated to Production when ready.
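For illustration, the sketch below uses the Kubeflow Pipelines SDK (KFP v2) to chain two lightweight components into an end-to-end pipeline and compile it to a portable package; the component logic is a placeholder, not a complete MLOps workflow.

# Kubeflow Pipelines (KFP v2): a two-step pipeline compiled to a portable package (illustrative).
from kfp import compiler, dsl

@dsl.component
def prepare_data() -> str:
    # Placeholder for ingestion and preparation; returns a dataset reference.
    return "prepared-dataset-v1"

@dsl.component
def train_model(dataset: str) -> str:
    # Placeholder for training; returns a model reference.
    return "model-trained-on-" + dataset

@dsl.pipeline(name="experiment-to-production")
def ml_pipeline():
    data_task = prepare_data()
    train_model(dataset=data_task.output)

if __name__ == "__main__":
    compiler.Compiler().compile(ml_pipeline, package_path="ml_pipeline.yaml")

Because the compiled package is a declarative definition, the same pipeline can be developed on a small local cluster and later submitted to a Production-grade Kubeflow deployment.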

Kubeflow runs on any Kubernetes environment, whether it’s deployed on-premises or in the cloud.

Build value from Day Zero to Production – and beyond – with HPE services

As a strategic partner to our customers in their digital journey, HPE offers more than great technology. Our portfolio of products and services aligns with the major digital transformation initiatives around edge, data, cloud and security.

Digital transformation demands the right expertise and an understanding of how technology can deliver business outcomes – the kind of expertise we have across our services business. HPE can advise customers on the next steps of their transformation journey and map out the priority initiatives. We can implement technologies from HPE and our ecosystem, while addressing the people-and-process implications. And we can operate this technology footprint in hybrid cloud with the HPE GreenLake edge-to-cloud platform, to support, manage and improve the digital capabilities that power your business. (Read more about HPE GreenLake MLOps.)

HPE Advisory and Professional Services for Artificial Intelligence and Data can help accelerate your move from pilot to production, from edge to cloud, at scale. Per IDC analysis and customer feedback, we are positioned as a leader in the 2021 IDC MarketScape for Worldwide AI IT Services.

Read more about the new HPE Machine Learning Development Services and how they help smooth the transition of ML pilots to production and value delivery.

Click below for a video that explains how HPE services help you unlock the value of data from your connected world.

 

Raffaele Tarantino is the GTM Lead for HPE GreenLake’s AI and Data practice. Raffaele is responsible for the go-to-market strategy of artificial intelligence and data transformation services at HPE, helping businesses unlock the value of data by democratizing the use of AI across organizations.

 

Christian Temporale is a Senior Architect of AI and Data Transformation Services at HPE. An experienced system architect and consultant, Christian works on projects and initiatives focused on AI and data analytics.

 


Services Experts
Hewlett Packard Enterprise

twitter.com/HPE_Pointnext
linkedin.com/showcase/hpe-pointnext-services/
hpe.com/pointnext

About the Author

ServicesExperts

HPE Services Team experts share their insights on the topics and technologies that matter most for your business.