Join HPE and WEKA at NVIDIA GTC21 to learn about next-generation IT data storage architecture for production-scale AI.
Adopting artificial intelligence (AI) and analytics is challenging, as these workloads have significantly greater data storage and compute needs than traditional applications. AI and analytics data pipelines are inherently different from those of traditional applications, with distinct storage requirements at each stage.
As shown here, each stage of an AI and analytics pipeline has distinct storage requirements: data ingestion requires large capacity and fast writes; AI training on GPU-based servers requires high throughput and low latency; ETL (extract, transform, load) processes require mixed read/write handling; and inference requires low latency and high throughput. Moreover, the entire data pipeline must use a single namespace to avoid creating silos and to make all data visible everywhere, from edge to cloud.
This means that new data pipelines must efficiently support different IO patterns, multiple parallel process executions, edge-to-cloud-to-core strategies, and datasets that grow in size and complexity. Cost is likely to be a key consideration, because AI, machine learning (ML), and deep learning (DL) algorithms and GPU-accelerated computing require huge capacity and high throughput. Indeed, training a complex neural network may require a petabyte of data, underscoring the need for the parallel processing provided by NVIDIA GPUs. The data platform should be cost-effective in meeting these requirements.
GTC session (ID: S31953): HPE and WekaIO (Weka)
In this session, we'll discuss a real-world AI use case showing an ultra-high-performance, cost-effective storage solution for AI workloads that combines Weka's high-throughput, low-latency file system with modern object storage.
We'll review the Weka AI™ Reference Architecture with NVIDIA DGX A100 and HPE DL325 servers. You will also learn about HPE's latest solution for Weka, with a detailed review of the solution architecture so you can understand how HPE/Weka solutions have delivered up to a 50x improvement in application run time, driving quicker time to insight. The session will also explain how this architecture can leverage emerging technologies, such as NVIDIA® GPUDirect® Storage, NVIDIA 200 Gb Ethernet and NVIDIA Mellanox® InfiniBand networking solutions, and object storage, for key AI use cases such as conversational AI and deep learning.
Learn more about next-generation IT data storage architecture for production-scale AI at GTC21
Join HPE at NVIDIA GTC for a transformative global event that brings together brilliant, creative minds looking to ignite ideas, build new skills, and forge new connections to take on our biggest challenges. It all comes together online April 12–16, and registration is free. For more information on registering for this and other GTC sessions with HPE, please visit hpe.com/events/gtc.