
Solving the most complex challenges of managing AI applications with HPE and NVIDIA


Ka Wai Leung—HPE GreenLake Alliance Manager, Hewlett Packard Enterprise
Priya Tikoo—Senior Technical Product Manager, NVIDIA

Every business function can be infused with artificial intelligence (AI) to boost productivity, from smart factories in manufacturing to recommendation engines in retail to fraud detection in financial services. IDC projects that 60% of the Global 2000 will have AI in production by 2024 and will use AI and machine learning (ML) across all business-critical horizontal functions. Yet even with these benefits within reach, for most enterprises and customers building an AI-enabled application and then taking it from prototype to production is still extremely difficult.

At NVIDIA GTC, a global AI conference running online March 20–23, Hewlett Packard Enterprise and NVIDIA will present a session on solving the most complex challenges of building, deploying, and managing AI applications. Here’s a sneak peek at what you can expect.

Top challenges of building an end-to-end AI platform

As state-of-the-art AI models continue to evolve rapidly and grow in size, complexity, and diversity, an AI platform able to support diverse model architectures is critical. HPE and NVIDIA will address the common barriers related to performance, scalability, and production-level deployment, as well as best practices for leveraging a versatile AI platform to build enterprise AI applications.

An operating system for enterprise AI

NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software that serves as the operating system making your infrastructure AI-ready. The software suite accelerates data science pipelines and streamlines the development and deployment of predictive AI models to automate essential processes and rapidly gain insights from data. This session will include a deep dive on NVIDIA AI Enterprise and its integration with HPE platforms.
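
As a rough illustration of the kind of accelerated data science pipeline this enables, the sketch below uses the RAPIDS cuDF library, one of the frameworks shipped in NVIDIA AI Enterprise containers, to run a pandas-style aggregation on the GPU. The file name and column names are hypothetical placeholders.

```python
# Minimal sketch: GPU-accelerated data preparation with RAPIDS cuDF,
# one of the frameworks packaged with NVIDIA AI Enterprise.
# The CSV path and column names are hypothetical placeholders.
import cudf

# Load transaction data directly into GPU memory
transactions = cudf.read_csv("transactions.csv")

# Same API shape as pandas, but the group-by/aggregation runs on the GPU
summary = (
    transactions
    .groupby("customer_id")
    .agg({"amount": ["sum", "mean"], "is_fraud": "max"})
)

print(summary.head())
```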

An AI platform from HPE and NVIDIA

HPE collaborates with NVIDIA to deliver AI/ML solutions, including one built on a validated technology stack comprising NVIDIA AI Enterprise, HPE GreenLake, HPE Ezmeral software, HPE ProLiant servers with NVIDIA A100 Tensor Core GPUs, Red Hat Enterprise Linux (RHEL), and VMware vSphere.

[Figure: NVIDIA-provided graphic of the solution stack]
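
As a quick sanity check when standing up this stack, the hedged sketch below uses the pynvml bindings (the nvidia-ml-py package) to confirm that the GPUs are visible from inside a VM or container; it is illustrative only, not part of the validated stack itself.

```python
# Illustrative sanity check (not part of the validated stack): confirm that
# GPUs are visible from inside a VM or container using the pynvml bindings
# from the nvidia-ml-py package.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        # Older pynvml versions return bytes; newer ones return str
        if isinstance(name, bytes):
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```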

For customers who have adopted a hybrid cloud strategy to meet specific requirements for AI workloads, an HPE GreenLake deployment is a strong fit: it addresses latency and performance sensitivity, strict data governance and data gravity requirements, and the high cost of GPU consumption in the public cloud. Private cloud and on-premises setups are best used during the model-building phase, when ML and deep learning models are trained before going into production. This phase can require compute- and GPU-intensive processing, tuning, and testing of large numbers of parameters or combinations of different model types and inputs using terabytes or petabytes of data. Performing this training in a public cloud can consume very expensive GPU and data ingress/egress resources.
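
To make the model-building workload above concrete, here is a deliberately small PyTorch sketch of the kind of GPU-bound training loop involved; the model, data, and hyperparameters are hypothetical stand-ins rather than part of the HPE and NVIDIA reference design.

```python
# Hypothetical sketch of the GPU-bound model-building work described above.
# Model, data, and hyperparameters are placeholders, not a reference design.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model and synthetic data standing in for a real training set
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(10_000, 128, device=device)
labels = torch.randint(0, 2, (10_000,), device=device)

# Each pass of this loop is the kind of compute that consumes GPU hours;
# real projects repeat it across many parameter combinations and model types.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```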

HPE GreenLake is the HPE private cloud for our customers. It is the cloud that comes to you and deploys where your data lives. HPE GreenLake allows customers to deploy NVIDIA-accelerated AI/ML workloads on-premises using an infrastructure-as-a-service approach to take advantage of cloud-like experiences such as scaling on demand, management through a single portal, rapid infrastructure deployment, and a cost-effective OPEX model.

An entry-level proof of concept or a high-availability, production-grade solution with NVIDIA AI Enterprise can be set up on HPE GreenLake following the example from this on-demand session at NVIDIA GTC. HPE Ezmeral software provides the core Kubernetes and data management platforms. Customers who haven’t standardized on a Kubernetes platform can use HPE Ezmeral software to host the NVIDIA AI Enterprise containers. HPE Ezmeral software is built on standard CNCF Kubernetes technology and provides strong multitenancy, security, access control, and monitoring capabilities.
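
Because HPE Ezmeral exposes standard CNCF Kubernetes APIs, scheduling a GPU-backed NVIDIA AI Enterprise container looks like any other Kubernetes workload. The sketch below, written with the official Kubernetes Python client, is hypothetical: the namespace, image tag, and GPU count are placeholders, and the nvidia.com/gpu resource name assumes the NVIDIA device plugin is installed on the cluster.

```python
# Hypothetical sketch: requesting a GPU-backed pod on a CNCF-conformant
# Kubernetes cluster (such as one managed by HPE Ezmeral) with the official
# Python client. Namespace, image tag, and GPU count are placeholders; the
# "nvidia.com/gpu" resource name assumes the NVIDIA device plugin is deployed.
from kubernetes import client, config

config.load_kube_config()  # use the kubeconfig exported for the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="triton-demo", namespace="ai-team"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="triton",
                image="nvcr.io/nvidia/tritonserver:23.02-py3",  # example NGC image tag
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU (or MIG slice)
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-team", body=pod)
```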

Data is the foundation for AI/ML workloads. Production-grade AI deployments require strong data management, protection, governance, and analysis: a pipeline to ingest, process, store, access, analyze, and present huge volumes of data securely. Our solution is HPE Ezmeral Data Fabric, a data management solution for NVIDIA AI Enterprise workloads. HPE Ezmeral Data Fabric is a data platform optimized for hybrid analytics. Its native data plane combines files, objects, tables, and streaming data to provide at-a-glance visibility and direct data access no matter where the data is located. Designed to be protocol-agnostic, the fabric lets data written through one protocol be read by data scientists, developers, and IT through another. HPE Ezmeral Data Fabric has been validated with NVIDIA GPUDirect Storage technology to further enhance data I/O throughput between the GPUs and the storage tier.
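
To illustrate that protocol-agnostic access, the hedged sketch below writes a file through a POSIX-style mount and reads the same data back through an S3-compatible endpoint with boto3. The mount path, bucket name, endpoint, and credentials are hypothetical placeholders, not documented HPE Ezmeral Data Fabric values.

```python
# Hypothetical illustration of multi-protocol access: write a file through a
# POSIX-style mount, then read the same data back through an S3-compatible
# endpoint. All paths, bucket names, endpoints, and credentials below are
# placeholders, not documented HPE Ezmeral Data Fabric values.
import boto3

# 1) A data engineer writes features through the POSIX mount point
posix_path = "/mnt/datafabric/projects/fraud/features.csv"
with open(posix_path, "w") as f:
    f.write("customer_id,amount,is_fraud\n1001,42.50,0\n")

# 2) A data scientist reads the same data through the S3 API
s3 = boto3.client(
    "s3",
    endpoint_url="https://datafabric.example.com:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
obj = s3.get_object(Bucket="projects", Key="fraud/features.csv")
print(obj["Body"].read().decode())
```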

Join this GTC session to learn more about the NVIDIA and HPE solution, AI infrastructure recommendations for on-premises and private cloud, and how to accelerate building and deployment of highly scalable AI workloads using a full suite of software, hardware, and cloud technology.

Join HPE, along with other AI developers and innovators, at GTC, March 20–23, 2023. Register free for NVIDIA GTC today.

Check out all HPE sessions at GTC:

 
For more information about HPE and NVIDIA solutions for AI, please visit this page on our collaboration.
