Solving the most complex challenges of managing AI applications with HPE and NVIDIA
Ka Wai Leung—HPE GreenLake Alliance Manager, Hewlett Packard Enterprise
Priya Tikoo—Senior Technical Product Manager, NVIDIA
Every business function can be infused with artificial intelligence (AI) to boost productivity: smart factories in manufacturing, recommendation engines in retail, fraud detection in financial services. IDC projects that 60% of the Global 2000 will have AI in production by 2024 and will use AI/machine learning (ML) across all business-critical horizontal functions. Yet even with these benefits within reach, building an AI-enabled application and taking it from prototype to production remains extremely difficult for most enterprises.
At NVIDIA GTC, a global AI conference running online March 20–23, Hewlett Packard Enterprise and NVIDIA will present a session on solving the most complex challenges of building, deploying, and managing AI applications. Here’s a sneak peek of what you can expect.
Top challenges of building an end-to-end AI platform
As state-of-the-art AI models continue to rapidly evolve and expand in size, complexity, and diversity, an AI platform with the ability to support diverse AI model architectures is critical. HPE and NVIDIA will address the common barriers related to performance, scalability, and production-level deployment, as well as best practices for leveraging a versatile AI platform to build enterprise AI applications.
An operating system for enterprise AI
NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software that serves as the operating system that enables your infrastructure to be AI-ready. The software suite accelerates data science pipelines and streamlines the development and deployment of predictive AI models to automate essential processes and rapidly gain insights from data. This session will include a deep dive on NVIDIA AI Enterprise and its integration with HPE platforms.
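To give a rough sense of what an accelerated data science pipeline looks like in practice, the sketch below uses RAPIDS cuDF, one of the GPU-accelerated libraries included with NVIDIA AI Enterprise, to run a pandas-style aggregation on the GPU. The CSV path and column names are hypothetical placeholders, not details from the session.

```python
# Minimal sketch: a pandas-style aggregation executed on the GPU with RAPIDS cuDF
# (one of the accelerated data science libraries shipped with NVIDIA AI Enterprise).
# The CSV path and column names are hypothetical placeholders.
import cudf

# Load transaction data directly into GPU memory.
transactions = cudf.read_csv("transactions.csv")

# Group and aggregate on the GPU; the API mirrors pandas, so existing
# CPU-based pipelines can often be ported with minimal code changes.
fraud_rate_by_region = (
    transactions.groupby("region")["is_fraud"]
    .mean()
    .sort_values(ascending=False)
)

print(fraud_rate_by_region.head())
```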
An AI platform from HPE and NVIDIA
HPE has collaborated with NVIDIA to deliver AI/ML solutions. This collaboration includes a solution that is based on a validated technology stack including NVIDIA AI Enterprise, HPE GreenLake, HPE Ezmeral software, HPE ProLiant servers with NVIDIA A100 Tensor Core GPUs, RHEL OS, and VMware vSphere.
[NVIDIA-provided graphic]
For customers who have adopted a hybrid cloud strategy for AI workloads, HPE GreenLake deployment is a strong fit where there are specific requirements around latency and performance sensitivity, strong data governance and data gravity, and the high cost of GPU consumption in the public cloud. Private cloud and on-premises setups are especially well suited to the model building phase, when ML and deep learning models are trained before going into production. Training can require compute- and GPU-intensive processing, tuning, and testing of large numbers of parameters or combinations of different model types and inputs against terabytes or petabytes of data. Performing this training on a public cloud can consume very expensive GPU and data ingress/egress resources.
HPE GreenLake is the HPE private cloud for our customers. It is the cloud that comes to you and deploys where your data lives. HPE GreenLake allows customers to deploy NVIDIA-accelerated AI/ML workloads on-premises using an infrastructure-as-a-service approach to take advantage of cloud-like experiences such as scaling on demand, management through a single portal, rapid infrastructure deployment, and a cost-effective OPEX model.
An entry-level proof of concept or a high-availability, production-grade solution with NVIDIA AI Enterprise can be set up on HPE GreenLake, following the example from this on-demand session at NVIDIA GTC. HPE Ezmeral software provides the core Kubernetes and data management platforms. Customers who haven’t standardized on a Kubernetes platform can use HPE Ezmeral software to host the NVIDIA AI Enterprise containers, as sketched below. HPE Ezmeral software is built on standard CNCF Kubernetes technology and provides strong multitenancy, security, access control, and monitoring capabilities.
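As a concrete illustration of hosting an NVIDIA AI Enterprise container on a CNCF-conformant Kubernetes cluster such as one managed by HPE Ezmeral, here is a minimal sketch using the official Kubernetes Python client to request a GPU-backed deployment of Triton Inference Server. The namespace, image tag, replica count, and resource values are illustrative assumptions, not values from the session.

```python
# Minimal sketch: scheduling an NVIDIA Triton Inference Server container
# (part of NVIDIA AI Enterprise) on a CNCF-conformant Kubernetes cluster,
# using the official Kubernetes Python client.
# The namespace, image tag, and resource values are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the target cluster

container = client.V1Container(
    name="triton",
    image="nvcr.io/nvidia/tritonserver:23.02-py3",  # example NGC image tag
    ports=[client.V1ContainerPort(container_port=8000)],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"}  # request one GPU via the NVIDIA device plugin
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="triton-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "triton"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "triton"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="ai-inference", body=deployment
)
```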
Data is the foundation for AI/ML workloads. Production-grade AI deployments require strong data management, protection, governance, and analysis: a pipeline to ingest, process, store, access, analyze, and present huge volumes of data securely. Our solution is HPE Ezmeral Data Fabric, a data management solution for NVIDIA AI Enterprise workloads that is optimized for hybrid analytics environments. Its native data plane combines files, objects, tables, and streaming data to provide at-a-glance visibility and direct data access no matter where the data is located. Designed to be target-agnostic, the data fabric lets data written through one protocol be read by data scientists, developers, and IT through another (see the sketch below). HPE Ezmeral Data Fabric has also been validated with NVIDIA GPUDirect Storage to further enhance data I/O throughput between the GPUs and the storage tier.
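As a small illustration of the multi-protocol idea, the sketch below writes a file through a POSIX-style mount of a data fabric volume and then reads the same data back through an S3-compatible endpoint with boto3. The mount path, endpoint URL, bucket name, credentials, and the path-to-object mapping are all hypothetical and will depend on how the data fabric is configured.

```python
# Minimal sketch of multi-protocol data access: write a file via a POSIX-style
# mount point, then read the same data back over an S3-compatible API.
# The mount path, endpoint URL, bucket, credentials, and path-to-object mapping
# are hypothetical placeholders.
import boto3

feature_bytes = b"example serialized feature data"  # stand-in for pipeline output

# 1. A training job writes a feature file through the POSIX interface
#    (e.g., a data fabric volume mounted on the compute node).
with open("/mnt/datafabric/features/train.parquet", "wb") as f:
    f.write(feature_bytes)

# 2. A downstream consumer reads the same data through the object interface.
s3 = boto3.client(
    "s3",
    endpoint_url="https://datafabric.example.internal:9000",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
response = s3.get_object(Bucket="features", Key="train.parquet")
parquet_bytes = response["Body"].read()
print(len(parquet_bytes), "bytes read back over the S3 interface")
```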
Join this GTC session to learn more about the NVIDIA and HPE solution, AI infrastructure recommendations for on-premises and private cloud, and how to accelerate building and deployment of highly scalable AI workloads using a full suite of software, hardware, and cloud technology.
Join HPE, along with other AI developers and innovators, at GTC, March 20–23, 2023. Register free for NVIDIA GTC today.
Check out all HPE sessions at GTC:
- S52292—HPE Machine Learning Development Environment and the Open Source ML Advantage (SimuLIVE)
- S52331—Solving the Most Complex Challenges of Building, Deploying, and Managing AI Applications with HPE and NVIDIA (On Demand)
- SE51125—Hybrid-Cloud Speech AI Solution (Special Event Session)
For more information about HPE and NVIDIA solutions for AI, please visit this page on our collaboration.