How HPE is powering AI and machine learning with NVIDIA
Learn how HPE is using NVIDIA-powered AI solutions to address the challenges enterprises face in building, deploying, and operationalizing AI in hybrid cloud and at the edge.
Our partner NVIDIA recently introduced several new technologies, all designed to broaden the use of GPU-accelerated computing. And, as has been the case for several years now, artificial intelligence (AI) use cases will continue to fuel the widespread adoption of these new technologies.
Last August, HPE captured a few of the numbers that reflect AI's momentum. Since then, in a 2020 report, Cognilytica released survey findings showing that over 34% of respondents plan to implement their first AI application within the next 24 months. And KPMG found that AI has moved from "Technology to Watch" to "Technology to Adopt." In concurrence, 451 Research noted that, "Budgets will likely expand as enterprises look to procure products and services that turbocharge their IT environments dedicated to AI."
In my previous role at Cray, before we joined HPE, I spearheaded a project to survey enterprises about their perceptions of and intentions around AI for 2020 and beyond. In our AI in Enterprise survey, we found that the top three adoption challenges were cost, skills, and time to value.
So, what do all of these surveys show? They show that AI adoption continues to be top of mind for enterprises and that challenges remain. And in fact, some analysts believe that the economic realities of a post-COVID-19 world will likely spur governments and enterprises to deepen their adoption of AI.
The process of building and deploying AI solutions in cloud, multi-cloud, or edge computing environments can be daunting. So can operationalizing these AI solutions as they move from pilot to production at scale. And doing this quickly and consistently can be even more challenging. Each use case has its own specific AI applications (e.g. image recognition vs. speech analytics), solution components, and partner ecosystem with different technology, data, security, and operational requirements.
We’ve made a number of additions to our portfolio, all designed to reduce the time it takes for you to move proof-of-concepts into production.
Availability of the HPE Container Platform for enterprise AI at scale
This past March, we announced the general availability of the HPE Container Platform, the industry's first enterprise-grade container platform designed for both cloud-native applications and non-cloud-native monolithic applications. It uses pure open-source Kubernetes and runs on bare-metal servers or virtual machines (VMs), in the data center, on any public cloud, or at the edge.
The HPE Container Platform provides significant advantages for running containers on bare metal, ensuring optimal performance for AI, machine learning (ML), and deep learning (DL) applications. It also reduces overall TCO. And for GPU-accelerated workloads, customers can improve GPU utilization and efficiency with our GPU-as-a-Service solution. Now you can develop and deploy your AI workloads on containers to dramatically reduce cost and complexity, while exploiting the full value and performance of GPUs to deliver the fastest results and highest throughput.
Additionally, as enterprises move beyond experimentation to operationalize their ML models, the HPE ML Ops software solution can address the last mile problems related to model deployment and management. HPE ML Ops is a solution based on the HPE Container Platform, and it can bring DevOps-like speed and agility to the end-to-end ML lifecycle—across data preparation, model building, model training, model deployment, collaboration, and monitoring.
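The lifecycle stages listed above can be sketched in miniature. The following is a toy illustration of that end-to-end flow, not the HPE ML Ops API: a model is trained (here, a one-variable least-squares fit), "deployed" by serializing it as an artifact, then loaded back for serving. All names and data are illustrative.

```python
import pickle
import statistics

# --- data preparation: a tiny illustrative dataset ---
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]

# --- model building/training: fit y = slope*x + intercept by least squares ---
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx
model = {"slope": slope, "intercept": intercept}

# --- deployment: persist the trained model as a versionable artifact ---
blob = pickle.dumps(model)

# --- serving/monitoring: load the artifact and score a new input ---
served = pickle.loads(blob)
pred = served["slope"] * 5.0 + served["intercept"]
```

A real pipeline would replace each stage with production tooling (a training framework, a model registry, a serving endpoint), but the handoffs between stages are the "last mile" problems the text describes.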
Using the HPE Container Platform, AI and ML applications can be developed and deployed on bare-metal systems, exploiting the performance of GPUs.
Along with the general availability announcement for the HPE Container Platform, HPE introduced new reference configurations with best-practice blueprints for workload-optimized configurations on HPE infrastructure including the HPE Container Platform running on HPE Apollo servers (HPE Apollo 6500 for Compute).
Broader support for NVIDIA GPU Cloud (NGC) with new certifications and services
Our partner NVIDIA has become an acknowledged leader in the advancement of AI technologies, specifically ML and DL, through its rapidly expanding ecosystem of hardware, software, and services. Thanks to the RAPIDS Open GPU data science community, there now exists a rich set of APIs and libraries for high-performance data science on GPU-accelerated systems. Likewise, the NVIDIA GPU Cloud (NGC) software hub and service offerings are simplifying access to and use of container-based instances of AI and high performance computing (HPC) software.
To help you rapidly take advantage of NGC on HPE GPU-accelerated servers, HPE and NVIDIA have extended NGC support to more HPE server platforms. HPE now offers a full range of edge-to-core servers with NVIDIA GPU accelerators that are certified NGC-Ready. NGC is now supported on Apollo 2000/6500 Gen10 servers, ProLiant DL380 servers, and Edgeline EL4000/EL8000 servers. And to help our customers get up and running faster, we’ve introduced new HPE Pointnext deployment and integration services for NGC.
NGC containers can also run on our HPE Container Platform, as outlined in this new step-by-step tutorial. Using the HPE Container Platform, you can now quickly deploy an unmodified NGC container image (e.g. TensorFlow) on Kubernetes—up and running for your data science teams within minutes, with easy access to GPU acceleration and the data sources they need.
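At its core, deploying an unmodified NGC image on Kubernetes means submitting a Pod manifest that pulls the image from the NGC registry (nvcr.io) and requests a GPU through the NVIDIA device-plugin resource. The sketch below builds such a manifest as plain JSON; the pod name and image tag are illustrative assumptions, not values prescribed by HPE or NVIDIA.

```python
import json

# Minimal Pod manifest for running an NGC TensorFlow image with one GPU.
# The metadata name and image tag are hypothetical examples.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ngc-tensorflow"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "tensorflow",
                # NGC registry path; the tag is an assumed example
                "image": "nvcr.io/nvidia/tensorflow:20.03-tf2-py3",
                # Request one NVIDIA GPU via the device-plugin resource
                "resources": {"limits": {"nvidia.com/gpu": 1}},
            }
        ],
    },
}

# JSON is valid input to `kubectl apply -f`, so this file could be
# submitted to any Kubernetes cluster with the NVIDIA device plugin.
manifest_text = json.dumps(pod_manifest, indent=2)
```

The key line is the `nvidia.com/gpu` resource limit: Kubernetes schedules the pod onto a GPU-equipped node and exposes the device to the container, so the NGC image runs unmodified.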
Our new HPE Pointnext services coordinate the installation, configuration, and validation of NGC-Ready HPE infrastructure. With these new services, AI and ML teams can augment IT teams to reduce the time needed to reach production. Systems certified for NVIDIA NGC become fully commissioned and operational quickly, enabling a faster return on investment.
Easier access to AI Ready platforms
For customers in the U.S., Canada, the United Kingdom, Ireland, and Germany, we’ve also created AI Ready configurations to make it easier for HPE customers to choose GPU-accelerated systems for AI and ML. They come preconfigured to meet the needs of IT organizations supporting early-stage AI proof-of-concept (PoC) work, heavy-duty model training, inferencing in the data center, or inferencing applications deployed at the edge of a network.
Each configuration is ready to be used with NGC containers, allowing data scientists, developers, and researchers to focus on building solutions, gathering insights, and delivering business value. While targeted at AI and ML, these configurations are also robust platforms for use with the NVIDIA vCompute Server, where virtualization rather than containerization is the workload management choice. Check with your local HPE sales representative or authorized partner for availability in your country.
Hewlett Packard Enterprise