More choices to simplify the AI maze: Machine learning inference at the edge
Introducing Qualcomm Cloud AI 100 Accelerator for the HPE Edgeline EL8000 platform
Learn how HPE and Qualcomm are collaborating to tightly integrate and optimize the performance of HPE Edgeline systems with the new Qualcomm Cloud AI 100 accelerator. The accelerator is specialized for AI inference workloads to ensure customers can achieve insights quickly where their data is produced at the edge.
By John Schmitz, product manager, AI business unit, HPE
Thinking about artificial intelligence (AI) infrastructure can feel a bit like navigating a maze – winding from data collection to solution development to creating value through new insights, equipped with the right servers, compute, and storage to help you along the way.
When designing infrastructure for your AI solutions, a helpful way to navigate the maze of choices is to begin with your end-to-end workflow – from data generation to solution deployment and value creation. Considering your requirements at each stage can make it easier to select the right infrastructure for your needs.
AI workflow
We at HPE can help along this journey by offering a broad portfolio of server and storage choices so that you can optimize your deployments – from data center to edge. The latest of these choices is the first HPE server based on a specialized AI processor: the HPE Edgeline EL8000 platform with the Qualcomm® Cloud AI 100 accelerator.
This new solution from HPE marks the next step in our efforts to help you succeed with AI-optimized infrastructure for end-to-end and edge-to-core success. Last year, with the acquisition of Determined AI, HPE moved to help our customers accelerate AI innovation with fast and simple machine learning (ML) model development and training. Earlier this year, we released the HPE Machine Learning Development System, based on the Determined platform and offering a fully integrated and supported solution for scaling model development and training.
While growth in AI development and training remains robust, industry projections show that the AI inference market will grow rapidly to tens of billions of dollars by the mid-2020s.[1] The release of the HPE Edgeline EL8000 platform with the Qualcomm Cloud AI 100 accelerator marks our increased focus on model deployment and inference.
Bringing performant, energy-efficient AI inference to the edge
Once you’ve trained your model, you’re ready to deploy it where you can use it to make predictions on new data. Inference workloads are often larger in scale than training workloads and frequently need to meet specialized requirements such as low latency and high throughput to enable real-time results. That’s why the best infrastructure often differs from what’s needed for development and training. Increasingly, customers are also considering a variety of processor options to help lower energy costs while maintaining performance.
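To make the latency and throughput requirements concrete, here is a minimal sketch in plain Python of how the two metrics are typically measured for an inference service. The `infer` function is a hypothetical stand-in (simulated with a 2 ms delay), not any HPE or Qualcomm API:

```python
import time
import statistics

def infer(sample):
    # Hypothetical stand-in for a real model call; simulate ~2 ms of work.
    time.sleep(0.002)
    return sample * 2

def benchmark(samples):
    """Measure per-request latency percentiles and overall throughput."""
    latencies = []
    start = time.perf_counter()
    for s in samples:
        t0 = time.perf_counter()
        infer(s)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": sorted(latencies)[int(0.99 * (len(latencies) - 1))] * 1000,
        "throughput_rps": len(samples) / elapsed,
    }

stats = benchmark(list(range(200)))
print(stats)
```

Real-time edge applications usually set a hard budget on the tail latency (p99) while batch-style inference optimizes for throughput; the right hardware choice depends on which of the two dominates your workload.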
To that end, we’ve collaborated with Qualcomm Technologies to bring you an inference solution that delivers insights with power-performant infrastructure that can also meet the unique challenges presented by edge deployments.
The edge is where it’s at
AI inferencing at the edge refers to deploying trained AI models outside the data center and cloud – at the point where data is created and can be acted upon quickly to generate business value. These edge AI solutions place the compute infrastructure closer to the source of the incoming data and closer to the systems and people who need to make data-driven decisions in real time.
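The pattern described above – sense, infer, and act locally without a round trip to the data center – can be sketched in a few lines. Everything here is a hypothetical placeholder (the sensor reader, the fixed linear scorer, and the threshold), meant only to show the shape of an edge inference loop:

```python
import random

def read_sensor():
    # Hypothetical stand-in for a camera or sensor at the edge.
    return [random.random() for _ in range(4)]

def infer(features):
    # Stand-in for a trained model deployed at the edge;
    # a fixed linear scorer here, not a real accelerator call.
    weights = [0.4, 0.3, 0.2, 0.1]
    return sum(w * x for w, x in zip(weights, features))

def act(score, threshold=0.5):
    # Decide locally, in real time, instead of shipping raw data upstream.
    return "alert" if score > threshold else "ok"

for _ in range(3):
    print(act(infer(read_sensor())))
```

Only the compact decision (and perhaps periodic summaries) needs to leave the site, which is what makes edge inference attractive for bandwidth- and latency-constrained deployments.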
We know you are looking for platforms that allow you to deliver value through applications utilizing sophisticated ML algorithms at the edge. Paired with the HPE Edgeline platform, the Qualcomm Cloud AI 100 enables compute-intensive AI with a significant reduction in total cost of ownership and in the compute density required for successful edge deployment, not to mention a boost in energy efficiency.
Learn more with these resources
- Web page: Qualcomm Cloud AI 100
- White paper: HPE Edgeline inference platform with Qualcomm Cloud AI 100 accelerators
- Solution brief: Speed AI results with HPE Edgeline and Qualcomm Cloud AI 100
- Blog: Ready to navigate the AI accelerator landscape?
Qualcomm is a trademark or registered trademark of Qualcomm Incorporated. Qualcomm Cloud AI is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.
[1] IDC, WW Artificial Intelligence Forecast, 2021
Meet blogger John Schmitz, product manager, AI business unit, HPE
John has more than two decades of experience with the company, leading product strategy, planning and marketing across infrastructure, software, and services portfolios. He is a graduate of Miami University of Ohio. Connect with John on LinkedIn: https://www.linkedin.com/in/john-a-schmitz/