End-to-end inferencing with HPE and NVIDIA
Announcing availability of the NVIDIA A2 GPU on a wide range of HPE platforms. Combining the power of HPE servers and NVIDIA A2 GPUs will speed up inference at the edge without spiking power demands.
If you are attending NVIDIA GTC this year, you’ll be privy to insightful conversations around artificial intelligence and machine learning. We’re excited about the advancements around end-to-end inference that Hewlett Packard Enterprise and NVIDIA are delivering to enterprise customers.
As businesses explore and stake out competitive advantages in artificial intelligence, AI inferencing has grown explosively. Trillions of inferences each day require servers equipped with accelerators for applications such as recommendation engines, natural language processing for sentiment analysis and speech recognition, and intelligent video analytics (IVA) for image classification, object detection, and facial recognition at scale and at the edge.
The edge is driving a great expansion in AI inference. According to IDC, 55 billion devices will be connected worldwide by 2025. Gartner notes that 50% of data will be created and processed outside of the traditional data center or cloud.
To help meet this growing demand, we’re announcing availability of a wide range of HPE platforms, powered by the recently announced NVIDIA A2 Tensor Core GPU, beginning in the first half of 2022.* Combining the power of HPE servers and the performance of NVIDIA A2 GPUs will speed up inference at the edge without spiking power demands.
HPE servers accelerated with NVIDIA A2 GPUs will deliver up to 37x higher inference performance versus CPU-only solutions and 30% higher video decode performance for IVA than previous GPU generations — all at an entry-level price point.**
Inference on the factory floor
The average vehicle has 35,000 parts. If any component fails, it can spark a nationwide recall. At least that’s what kept Kemal Levi up at night. Founder and CEO of Relimetrics, an HPE partner based in Germany, Kemal needed intelligent video analytics (IVA) on-site to identify and eliminate product defects. HPE created a solution stack capable of deploying IVA at the factory floor, enabling analytics at the edge to help reduce human error and increase production performance.
“By powering an HPE server with NVIDIA GPUs, as well as further optimizing network performance with NVIDIA TensorRT software,” Kemal was excited to announce, “inference latency decreased from 4,165 milliseconds to 3 milliseconds. This translates to almost 1,400 times better performance.”
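To make the TensorRT step in that result more concrete, here is a minimal sketch of compiling an ONNX model into an optimized TensorRT inference engine. It is not the Relimetrics pipeline; the model file name, the FP16 flag, and the output path are illustrative placeholders.

```python
# Minimal sketch: build a TensorRT inference engine from an ONNX model (TensorRT 8.x Python API).
# "model.onnx", the FP16 flag, and "model.plan" are illustrative placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision typically lowers latency on Tensor Core GPUs

serialized_engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized_engine)  # the engine is then loaded by the inference runtime at the edge
```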
Build your end-to-end inference stack
Balancing power, security, and performance requirements in an accelerated inference system requires tested and proven configurations. HPE systems that are NVIDIA-Certified bring together HPE servers and NVIDIA GPUs in optimized configurations that are validated for performance, manageability, security, and scalability, and are backed by enterprise-grade support from HPE and NVIDIA.
HPE ProLiant Gen10 Plus servers address security for data in use starting at the factory with silicon root of trust, then add encryption at the memory layer, a security configuration lock at the firmware level, and a distributed services platform at the edge. The NVIDIA A2 GPU delivers secure boot through trusted code authentication and hardened rollback protections against malware attacks, helping prevent operational losses.
Because the NVIDIA A2 is a low-profile PCIe Gen4 card with an estimated power draw of 50W, it can be tested and qualified across HPE ProLiant, Edgeline, and Synergy servers.
Scalability is essential to an optimized inference stack. HPE can right-size the platform for any workload in your environment. For local edge deployments, the HPE ProLiant DL360 Gen10 server combined with the NVIDIA A2 GPU drives inferencing with minimal power requirements. In the data center, you can maximize inferencing power with the HPE ProLiant DL380 Gen10 Plus coupled with NVIDIA A30 GPUs, a configuration that optimizes AI workloads at scale.
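As a quick way to see the power-versus-performance balance described above on a deployed system, a GPU's draw and utilization can be spot-checked with nvidia-smi. The sketch below simply wraps a standard nvidia-smi query; it is not an HPE-specific tool.

```python
# Minimal sketch: spot-check GPU model, power draw, power cap, and utilization on a node.
# Assumes the NVIDIA driver (and thus nvidia-smi) is installed on the server.
import subprocess

query = "name,power.draw,power.limit,utilization.gpu"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={query}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for row in result.stdout.strip().splitlines():
    print(row)  # one row per GPU: model, current draw, power cap, utilization
```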
Tip the Iceberg at GTC
Explore the technologies and use cases that lie beneath the surface of AI inferencing at GTC, including the recently announced NVIDIA Triton Inference Server updates for model analysis and multi-node support that help optimize inference deployments. HPE shares NVIDIA's vision of making AI accessible to every enterprise. HPE Apollo and ProLiant platforms are NVIDIA-Certified and support NVIDIA AI Enterprise (NVAIE), an end-to-end software suite for developing and deploying AI applications on VMware vSphere that can be run through HPE GreenLake services, in hybrid environments, or on-premises.
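For a concrete feel for how an application talks to Triton Inference Server once a model is deployed, here is a minimal client-side sketch; the server URL, model name, and tensor names and shapes are placeholders, not details from this article.

```python
# Minimal sketch: send one HTTP inference request to a running Triton Inference Server.
# The URL, model name ("classifier"), and tensor names/shapes are illustrative placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image-shaped input
infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="classifier", inputs=[infer_input])
print(response.as_numpy("output"))  # fetch the (placeholder-named) output tensor
```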
Learn more about AI at these GTC sessions offered by HPE:
- How to Train and Deploy 1 Million+ Parameter AI Models: Learnings from a Hybrid Cloud Deployment
- Building Intelligent Data Pipelines for Predictive Analytics with HPE Ezmeral
*The NVIDIA A2 GPU will be available in HPE servers in the first half of 2022. **NVIDIA A2 GPU performance is estimated.
Meet our Tech Insights bloggers
Kathy Carlson, GPU/Accelerator Options Product Manager, HPE
Kathy leads product management for GPU and accelerator options at Hewlett Packard Enterprise for HPE ProLiant and Synergy servers. She delivers solutions for strategic workloads in artificial intelligence inferencing, machine learning, graphics and visual computing, and virtual desktop to help customers improve business outcomes. Kathy received her B.S. in Electrical Engineering from Utah State University.
Tobore Imarah, Compute AI Solutions Product Manager, HPE
Tobore is a product manager in the mainstream compute workload solutions team focusing on artificial intelligence. His charter is to bring solutions to market that are optimized for performance and efficiency. Tobore received his B.S. in Industrial and Systems Engineering from Kennesaw State University and an MBA from Western Governors University.
Insights Experts
Hewlett Packard Enterprise
twitter.com/HPE_AI
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/artificial-intelligence.html