
Growing HPC and AI Convergence is Transforming Data Analytics

Vineeth_Ram

 

HPE and NVIDIA have joined forces to deliver cutting-edge deep learning solutions, converging HPC and AI to execute workloads with superhuman speed and precision. Learn what’s to come at the HPC User Forum.

Data analytics and insights are fueling innovation across scientific research, product and service design, customer experience management, and process optimization. Real-time analytics with in-memory computing, big data analytics, insights from simulation and modeling fueled by high performance computing (HPC), and predictive analytics with artificial intelligence (AI) are core capabilities required by data-driven organizations looking to gain competitive advantage with their digital transformation initiatives.

HPC enables complex modeling and simulation to accelerate innovation in diverse areas—ranging from molecular chemistry to genome sequencing, energy exploration, and financial trading. AI is the foundation for cognitive computing, an approach that enables machines to mimic the neural pathways of the human brain to analyze vast datasets, make decisions in real time, and even predict future outcomes. To succeed, organizations need advanced technology solutions that support their HPC, accelerated analytics, and AI applications as they execute increasingly difficult tasks and forecast evolving trends, equipping them to solve some of the world’s biggest scientific, engineering, and technological problems.

HPC is a driving force behind business growth and innovation, empowering users to execute compute- and data-intensive workloads quickly and accurately. Purpose-built HPC solutions are accelerating performance and increasing operational efficiencies like never before, allowing organizations to continuously scale and adopt cutting-edge tools to take on the next great challenge. HPC environments are also increasingly recognized as a strong foundation for AI and deep learning, as they provide the extreme levels of scalability, performance, and efficiency these complex applications require. With compute innovations such as these, IT departments can confidently implement and accelerate both HPC and AI applications and put their explosive data volumes to work.

ACCELERATING DEEP INSIGHTS

HPC and deep learning are starting to converge as organizations seek a comprehensive infrastructure solution to address evolving industry demands. Deep learning, a form of AI, uses multi-layered neural network models to complete supervised or unsupervised tasks (that is, learning to map inputs to known targets, or discovering structure in unlabeled inputs). This technique utilizes training algorithms and pattern recognition to process video, text, image, and audio files—and it requires HPC levels of performance and efficiency to do it. Deep learning systems then observe, test, and refine insight into actionable intelligence. According to a research report by Markets and Markets, the deep learning sector is expected to reach $1,772.9 million by 2022, rising at a CAGR of 65.2% as enterprises invest heavily in AI capabilities.
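The supervised/unsupervised distinction above can be sketched in a few lines of plain Python. This is a deliberately tiny illustration with made-up toy data (not a real deep learning workload): the "supervised" rule learns a decision threshold from labeled examples, while the "unsupervised" step groups unlabeled inputs into two clusters with no targets at all.

```python
# Supervised: learn a decision threshold from labeled 1-D examples.
labeled = [(0.2, 0), (0.4, 0), (1.6, 1), (1.9, 1)]  # (input, known target)
threshold = sum(x for x, _ in labeled) / len(labeled)

def predict(x):
    """Classify a new input using the threshold learned from labels."""
    return 1 if x > threshold else 0

# Unsupervised: split unlabeled inputs into two groups around their mean,
# with no targets provided -- structure is inferred from the data alone.
unlabeled = [0.1, 0.3, 1.7, 2.0]
center = sum(unlabeled) / len(unlabeled)
clusters = {
    0: [x for x in unlabeled if x <= center],
    1: [x for x in unlabeled if x > center],
}

print(predict(1.8))   # -> 1 (the learned rule generalizes to a new input)
print(clusters)       # -> {0: [0.1, 0.3], 1: [1.7, 2.0]}
```

Real deep learning replaces both hand-rolled rules with layered networks trained by gradient descent, which is where GPU-accelerated HPC infrastructure comes in.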

Hewlett Packard Enterprise (HPE) is helping organizations make the most of their data with AI-driven analytics, providing the optimal infrastructure platforms designed to harness deep insights with superhuman speed and precision. HPE has developed purpose-built HPC platforms that are designed to scale to support a variety of complex workloads. And with an expanded partner ecosystem, we are collaborating with industry experts like NVIDIA to bring deep learning capabilities from the core data center to the intelligent edge for all organizations. NVIDIA delivers the best-in-class GPU acceleration optimized for deep learning and accelerated analytics applications to rapidly and efficiently process massive data volumes. These solutions deliver the ultimate performance for deep learning, analytics, and the highest versatility for all workloads, equipping organizations to operate as quickly and intelligently as possible.

One stellar example of this collaboration is a new supercomputer at the Tokyo Institute of Technology (TITech). Based on the HPE SGI 8600 and the NVIDIA Tesla P100, TSUBAME 3.0 is a converged HPC and deep learning platform that utilizes GPU accelerators to achieve optimal performance, efficiency, and accuracy. Satoshi Matsuoka, Professor and TSUBAME Leader, reports that TITech’s relationship with HPE will fuel a number of critical research projects and future workloads in HPC and deep learning, including the pursuit of the first exascale system.

DRIVING INDUSTRY INNOVATION FOR THE FUTURE

To promote further innovation and partnership in the HPC community, Hyperion Research launched the HPC User Forum, a unique market intelligence service that brings together leaders from government, industry, and academic organizations around the globe to discuss the latest developments in HPC. Recently, Hyperion has broadened this event beyond classic HPC, adding a major AI component. Now, users can explore in-depth the convergence of HPC and AI-driven analytics to learn about how HPC is promoting a new era of insight.

This week, the HPC User Forum returns to Milwaukee, Wisconsin on September 5th–7th, where technology experts will meet with HPC users to discuss next-generation IT solutions. At 9:45 a.m. on September 7th, I will present the HPE Vendor Technology Update, discussing the exciting developments on our journey to next-generation HPC and AI innovation. Attendees will have the opportunity to learn about the newly announced HPE Apollo and HPE SGI portfolio as well as HPE’s efforts to simplify deep learning and analytics for all organizations, accelerate the mission to Mars, make exascale computing a reality, and much more.

Then at 3:45 p.m. on September 7th, HPE joins a session geared toward machine learning, deep learning, and early AI. Natalia Vassilieva of Hewlett Packard Labs will present HPE’s new Deep Learning Cookbook in her talk “Characterization and Benchmarking of Deep Learning.” The Deep Learning Cookbook is based on a massive collection of performance results for deep learning workloads using different hardware and software. This guide is designed to help customers streamline deep learning adoption for real-world applications.

The goal for HPE and NVIDIA is to help customers find new ways to harness massive amounts of data with the most powerful solutions for deep learning. As the demand for deep insight increases, we will strive to deliver dense and highly scalable solutions to accommodate these workloads, and explore the convergence of HPC and AI workloads onto a single set of infrastructure. To learn more about HPC and AI innovation, I invite you to follow me on Twitter at @VineethRam. And for more on how AI-based insights are expanding the scope of human knowledge, visit @HPE_HPC and @NvidiaAI.

Vineeth Ram
Vice President of HPC & AI Portfolio Marketing
Hewlett Packard Enterprise

Twitter: @VineethRam
LinkedIn: vineeth-ram-96b6b54

About the Author

Vineeth_Ram

As the Vice President of HPC and AI Portfolio Marketing in the DCIG Portfolio Marketing team of HPE, I lead worldwide marketing and go-to-market planning and execution for the HPC and AI segments including the recent SGI acquisition and the HPE Apollo portfolio.
