More choices to simplify the AI maze: Machine learning inference at the edge

Introducing Qualcomm Cloud AI 100 Accelerator for the HPE Edgeline EL8000 platform

Learn how HPE and Qualcomm are collaborating to tightly integrate and optimize the performance of HPE Edgeline systems with the new Qualcomm Cloud AI 100 accelerator. The accelerator is specialized for AI inference workloads, so customers can derive insights quickly, right where their data is produced at the edge.

By John Schmitz, product manager, AI business unit, HPE

Thinking about artificial intelligence (AI) infrastructure can feel a bit like finding your way through a maze – winding from data collection to solution development to creating value through new insights, equipped with the right servers, compute, and storage to help you along the way.

When designing infrastructure for your AI solutions, a helpful way to navigate the maze of choices is to begin with your end-to-end workflow – from data generation to solution deployment and value creation. Considering your requirements at each stage can make it easier to select the right infrastructure for your needs.

[Figure: AI workflow]

We at HPE can help along this journey by offering a broad portfolio of choices in servers and storage so that you can optimize your deployments – from data center to edge. The latest choice comes in the form of the first HPE server based on a specialized AI processor: the HPE Edgeline EL8000 platform with the Qualcomm® Cloud AI 100 accelerator.

This new solution from HPE marks the next step in our efforts to help you succeed with AI-optimized infrastructure for end-to-end and edge-to-core success. Last year, with the acquisition of Determined AI, HPE moved to help our customers accelerate AI innovation with fast and simple machine learning (ML) model development and training. Earlier this year, we released the HPE Machine Learning Development System, based on the Determined platform and offering a fully integrated and supported solution for scaling model training and development needs.

While growth in AI development and training remains robust, industry projections show that the AI inference market will grow rapidly to tens of billions of dollars by the mid-2020s.[1] The release of the HPE Edgeline EL8000 platform with the Qualcomm Cloud AI 100 accelerator marks our increased focus on model deployment and inference.

Bringing performant, energy-efficient AI inference to the edge

Once you’ve trained your model, you’re ready to deploy it where you can use it to make predictions on new data. Inference workloads are often larger in scale than training workloads and frequently need to meet specialized requirements such as low latency and high throughput to enable real-time results. That’s why the best infrastructure for inference often differs from what’s needed for development and training. Increasingly, customers are also considering a variety of processor options to help lower energy costs while maintaining performance.
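To make the training-versus-inference distinction concrete, here is a minimal sketch of a low-latency inference loop – not HPE's or Qualcomm's software stack, just an illustration using NumPy and a hypothetical tiny classifier whose weights (`W`, `b`) stand in for a trained model artifact loaded on an edge device:

```python
import numpy as np

def load_model():
    # Stand-in for loading an exported, optimized model artifact
    # on the edge device; weights here are random for illustration.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 3))  # 4 input features -> 3 classes
    b = np.zeros(3)
    return W, b

def infer(x, W, b):
    # One low-latency forward pass: logits -> softmax probabilities.
    logits = x @ W + b
    e = np.exp(logits - logits.max())  # subtract max for stability
    return e / e.sum()

if __name__ == "__main__":
    W, b = load_model()
    sample = np.array([0.5, -1.2, 0.3, 0.9])  # e.g. one sensor reading
    probs = infer(sample, W, b)
    print("predicted class:", int(probs.argmax()))
```

Unlike training, there is no backward pass or optimizer state here – each request is a single forward pass, which is why inference platforms are tuned for per-request latency and sustained throughput rather than large-batch compute.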

To that end, we’ve collaborated with Qualcomm Technologies to bring you an inference solution that delivers insights with power-performant infrastructure that can also meet the unique challenges presented by edge deployments.

The edge is where it’s at

AI inferencing at the edge refers to deploying trained AI models outside the data center and cloud – at the point where data is created and can be acted upon quickly to generate business value. These edge AI solutions place the compute infrastructure closer to the source of the incoming data and closer to the systems and people who need to make data-driven decisions in real time.

We know you are looking for platforms that allow you to deliver value through applications utilizing sophisticated ML algorithms at the edge. Paired with the HPE Edgeline platform, the Qualcomm Cloud AI 100 enables compute-intensive AI at a significant reduction in total cost of ownership and in the compute density required for successful edge deployment, not to mention a boost in energy efficiency.

Qualcomm is a trademark or registered trademark of Qualcomm Incorporated. Qualcomm Cloud AI is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.

[1] IDC, WW Artificial Intelligence Forecast, 2021


Meet blogger John Schmitz, product manager, AI business unit, HPE

John has more than two decades of experience with the company, leading product strategy, planning, and marketing across infrastructure, software, and services portfolios. He is a graduate of Miami University of Ohio. Connect with John on LinkedIn: https://www.linkedin.com/in/john-a-schmitz/

Insights Experts
Hewlett Packard Enterprise

twitter.com/HPE_AI
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/artificial-intelligence.html

 

About the Author

TechExperts

Our team of HPE and other technology experts shares insights about relevant topics related to artificial intelligence, data analytics, IoT, and telco.