Servers & Systems: The Right Compute

Accelerating AI innovation with high performance computing from HPE and Intel

From the manufacturing floor to the life sciences research lab, AI and HPC are providing new ways to tackle our toughest challenges. Here's how the newest technologies from Intel and HPE are powering the AI revolution.

By Advantage EX guest blogger Nash Palaniswamy, Vice President & General Manager AI, HPC, and Data Center Accelerator Solutions & Sales, Intel Corporation

As organizations seek to unleash the full potential of data, artificial intelligence (AI) has become a business and scientific imperative, and high performance computing (HPC) a critical enabler. HPC platforms provide the performance and scale to train deep learning models and to perform inference for operationalizing those models. HPC applications increasingly incorporate AI into their modeling and simulation programs, using deep learning to identify patterns in multidimensional data sets and gain data-driven insights that optimize their traditional methods.

Intel and HPE have a 40-year record of collaborating to envision, engineer, and apply innovative products. In this blog, I'd like to show some of the ways our companies are working together to facilitate the convergence of AI and HPC and provide our customers with a practical, high-performance foundation for AI innovation.

Processors built for AI and HPC

Intel's innovation for AI and HPC starts with the 3rd Gen Intel® Xeon® Scalable processor, the first Intel Xeon Scalable processor to take advantage of our 10nm process technology. We designed the processor to handle the demands of converged workloads, so it offers as many as 40 cores per processor rather than the prior generation's 28. Equally important for data-intensive computing, it provides increased memory bandwidth: eight channels rather than six, running at faster speeds.
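
As a rough illustration of what those eight channels mean, here is the back-of-the-envelope peak-bandwidth arithmetic for one socket. DDR4-3200 is an assumed memory speed for this sketch; actual supported speeds depend on the platform configuration.

```python
# Peak theoretical memory bandwidth per socket, assuming eight DDR4-3200
# channels (an assumption for illustration; check your platform's specs).
channels = 8                 # up from six in the prior generation
transfers_per_sec = 3.2e9    # DDR4-3200 runs at 3.2 GT/s
bytes_per_transfer = 8       # each channel is 64 bits wide

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # prints 204.8 GB/s
```

Real applications see a fraction of this theoretical peak, but the extra channels translate directly into headroom for data-intensive workloads.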

The Intel Xeon Scalable processor line is also the only mainstream CPU family with built-in acceleration for AI: Intel® Deep Learning Boost (Intel DL Boost). In the latest Intel DL Boost version, we have added support for a new instruction set, Intel® Vector Neural Network Instructions (Intel VNNI), which combines three instructions into one. This helps maximize the use of compute resources, improve cache utilization, and avoid potential bandwidth bottlenecks in deep learning workloads. In addition, Intel® AVX-512 acceleration increases performance for diverse workloads.
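
To make the fused operation concrete, here is a minimal NumPy sketch of the arithmetic a VNNI dot-product instruction performs in one step: multiplying unsigned 8-bit values by signed 8-bit values, summing each group of four adjacent products, and accumulating into 32-bit lanes. The function name and tiny vector sizes are mine, for illustration only; the hardware does this across full 512-bit registers.

```python
import numpy as np

def vnni_dot_accumulate(acc, a_u8, b_s8):
    """Emulate the arithmetic of a VNNI fused dot-product step:
    u8 x s8 multiplies, groups of four adjacent products summed,
    result accumulated into 32-bit lanes. Previously this took a
    sequence of three separate instructions."""
    prods = a_u8.astype(np.int32) * b_s8.astype(np.int32)
    return acc + prods.reshape(-1, 4).sum(axis=1)

a = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.uint8)
b = np.array([1, -1, 1, -1, 2, 2, -2, -2], dtype=np.int8)
acc = np.zeros(2, dtype=np.int32)
out = vnni_dot_accumulate(acc, a, b)
print(out)  # two 32-bit accumulators: [-2, -8]
```

Because one instruction now does the work of three, quantized INT8 inference keeps more of the core's compute and cache bandwidth busy with useful math.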

These enhancements produce dramatic performance improvements for a variety of AI and HPC workloads. One of my favorite examples showed that customers who use Intel optimizations for TensorFlow and Intel DL Boost with the latest 3rd Gen Intel Xeon Scalable processors can expect more than 11x higher batch inference performance for convolutional neural net-based image classification, as measured by ResNet50, compared to a standard Cascade Lake FP32 configuration.[1] Intel DL Boost's performance increases are particularly exciting for customers working in computer vision, and have shown improved results for everything from cloud-based videoconferencing to diagnostic imaging. Organizations have also used DL Boost to reduce the amount of computing resources needed for Monte Carlo simulations, which are widely used in financial, scientific, and engineering applications.
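
To see why reduced-precision hardware helps Monte Carlo work, consider that many such simulations tolerate lower precision well: the sampling error dominates the arithmetic error. Here is a minimal single-precision Monte Carlo estimate of pi in NumPy, an illustrative sketch of that idea rather than any Intel benchmark code.

```python
import numpy as np

# Monte Carlo estimate of pi using single-precision samples.
# The statistical error (~1/sqrt(n)) dwarfs the float32 rounding error,
# which is why trading precision for throughput often costs nothing
# in accuracy for this class of workload.
rng = np.random.default_rng(42)
n = 1_000_000
x = rng.random(n, dtype=np.float32)
y = rng.random(n, dtype=np.float32)
inside = np.count_nonzero(x * x + y * y <= 1.0)
pi_est = 4.0 * inside / n
print(f"pi estimate: {pi_est:.4f}")
```

The same precision-for-throughput trade is what lets lower-precision accelerated math on the CPU cut the compute these simulations need.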

Beyond the processor

HPE adopts many Intel technologies to solve problems and provide new capabilities for its customers. For example, HPE's Apollo™ 2000 Gen10 Plus System uses the newest 3rd Gen Intel Xeon Scalable processors in high-performance infrastructure that lets customers simplify their move into AI and keep AI affordable, using the only mainstream AI-enabled CPU and the same familiar Intel technology-based infrastructure that drives their other workloads. In addition to the one- and two-socket 3rd Gen Intel Xeon Scalable processors in the Apollo 2000 Gen10 Plus system, HPE uses four- and eight-socket 3rd Gen Intel Xeon Scalable processors for larger platforms such as its Superdome™ Flex 280 system.

HPE uses the hardware-enabled security capabilities of Intel® Software Guard Extensions (Intel SGX) to implement confidential computing. This solution helps protect data while it is in use, an important requirement for AI and analytics programs that seek to share data among multiple parties while protecting user privacy and meeting privacy laws.

The Intel® oneAPI HPC toolkit and Intel oneAPI AI Analytics toolkit are standards-based solutions that provide compilers, performance libraries, and parallel programming models to help AI innovators build, test, and optimize applications for diverse HPC and AI architectures. Using Intel oneAPI, developers can write code once and run it on CPUs, GPUs, FPGAs, and other accelerators. HPE offers Intel oneAPI as a complimentary download and sells priority support for the solution.

HPE also supports Distributed Asynchronous Object Storage (DAOS), Intelโ€™s open source software-defined object store designed for workflows that combine simulation, AI, and data analytics.   

Leadership for exascale

Intel and HPE are also leaders in the move to exascale computing. Partnering with the US Department of Energy's Argonne National Laboratory, we're creating Aurora, an exascale-capable supercomputer that will dramatically advance the ability to integrate AI, data analytics, modeling, and simulation. The machine is scheduled to arrive in 2022 and will give scientists unprecedented abilities to tackle complex challenges in new ways. The machine is considered to be of strategic importance, crucial to the nation's economic productivity and competitiveness. Many of its technological innovations will also make their way into enterprise systems and open source software, broadening the system's economic impact.

The framework for the Aurora system is a next-generation HPE Cray EX supercomputer platform based on a future generation of Intel Xeon Scalable processors, accelerated by GPUs based on Intel Xe architecture. The backbone is the HPE Slingshot™ interconnect.

Aurora will be highly optimized across multiple dimensions that are key to success in AI, simulation, and data applications. That includes DAOS to support new types of workloads, and Intel oneAPI to facilitate accessing the system's CPU and GPU resources. HPE and Intel engineers are working closely to design, integrate, and validate the mammoth system and ensure quick productivity once it is installed. Aurora collaborators are also developing software solutions, such as fabric software, developer tools, and job control utilities, to optimize performance and throughput and help users make the most of the system's resources.

Speed and scale for innovation and discovery  

The collaboration between Intel and HPE means enterprises and research teams can accelerate AI innovation and run diverse high-performance workloads on proven platforms designed for the convergence of AI and HPC. I'm excited to continue Intel's collaboration with HPE, and eager to see how customers will use our technologies to create a new era of innovation and discovery in AI and HPC.

Learn more. Come see us at ISC.

To hear more about the Intel and HPE collaboration for HPC, visit the virtual Intel booth at ISC and catch a virtual Fireside Chat between Bill Mannel, HPE's VP and general manager for HPC, and Trish Damkroger, VP and general manager for HPC at Intel. Their chat is titled "A Look Inside the Powerful HPC Partnership between HPE and Intel and How Joint Solutions Overcome the Computing Challenge of Today's Digital World."

Also look for technical sessions covering a wide range of topics and technologies relating to HPC and AI. I hope to see you there!


Meet our guest blogger Nash Palaniswamy, Vice President & General Manager AI, HPC, and Data Center Accelerator Solutions & Sales, Intel Corporation

Dr. Nash Palaniswamy is responsible for winning Intel AI, HPC, and data center accelerator solutions at Intel's data center customers and for delivering the overall AI and HPC revenue for the Intel Datacenter Group. Previously, he was strategic lead for Intel® QuickAssist Technology-based accelerators for servers, led the Throughput Computing Task Force, which ushered in the creation of a heterogeneous product roadmap, co-led the creation of the Early Ship Program for HPC and cloud, co-led strategy for PCIe Gen3 and the licensing of FSB and QPI to FPGA vendors, and represented Intel as lead for the WWW Consortium Advisory Committee. Prior to Intel, he held several senior and executive roles, including Director of System Architecture at Conformative Systems, CTO/VP of Engineering at MSU Devices, and Director of the Java Program Office and Wireless Software Strategy in the Digital Experience Group of Motorola, Inc. Nash also developed a virtual prototyping EDA software tool (SimEZ) at Motorola, which was sold to Cadence.

Dr. Palaniswamy holds four patents, has authored numerous technical publications, and was honored with an Intel Achievement Award for his contribution to the #1 system in the Top 500 list. He holds a B.S. in Electronics and Communications Engineering from Anna University (Chennai, India) and an M.S. and Ph.D. from the University of Cincinnati in Electrical and Computer Engineering.


Advantage EX Experts
Hewlett Packard Enterprise

twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/info/hpc


[1] 11x higher batch AI inference performance with Intel-optimized TensorFlow vs. stock Cascade Lake FP32 configuration. See [118] at https://edc.intel.com/content/www/us/en/products/performance/benchmarks/3rd-generation-intel-xeon-scalable-processors/?r=621120647. Results may vary.

 

About the Author

AdvEXperts

Our team of Hewlett Packard Enterprise Advantage EX experts helps you dive deep into high performance computing and supercomputing topics.