A New Frontier of AI and Deep Learning Capabilities

Powerful and cost-effective HPC platforms promote data fusion, reduce training time, and enable ultra-scale real-time data analytics to power deep learning systems.

In today’s digital climate, organizations of every size and industry are collecting and generating enormous amounts of data that could be used to solve the world’s greatest problems, from national security and fraud detection to scientific breakthroughs and technological advancement. However, traditional analysis techniques cannot deliver automated, real-time insights from these rising data volumes fast enough, and artificial intelligence (AI) is becoming vital to fully understanding scientific and business data.

But is traditional AI enough?

The evolution of Big Data is driving a major paradigm shift in the field of AI, which is increasing the need for high performance computing (HPC) technologies that can support high performance data analytics (HPDA). According to an IDC report, the HPDA server market is projected to grow at a 26% CAGR through 2020, including an additional $3.9 billion in revenue by 2018.

Thanks to robust HPC systems, compute capacity and data handling are powerful and affordable enough that many organizations are beginning to invest in a new frontier of AI and deep learning capabilities. HPC solutions coupled with advanced data infrastructures are replacing the need for costly and time-consuming manual calculations, laying the groundwork for the next generation of AI that can rapidly automate and accelerate data analysis.

Deep learning (training and inference modeling) is a form of AI-based analytics that leverages pattern-matching techniques to analyze vast quantities of unsupervised data. Much like the neural pathways of the human brain, networks of hardware and software use training, generic code, and pattern recognition to analyze video, text, image, and audio files in real time. Deep learning systems then observe, test, and refine information from core data centers to the intelligent edge, converging datasets into concise, actionable insight. The problem is, learning takes time.

Dr. Goh of HPE offers this example in his interview on trends in Big Data and deep learning: Google conducted an experiment to build a large-scale deep learning software system using cat videos. The team began by taking millions of pictures of cats and breaking them down into hierarchical inputs (e.g., a pixel of fur, a whisker, or a paw). Using complex machine learning algorithms, the AI machines analyzed multiple layers of inputs over the course of days, weeks, and even months, until they could effectively make decisions on their own.
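To make that hierarchy concrete, here is a minimal sketch of the idea in PyTorch (an assumption; the article names no framework). The TinyCatNet class, layer sizes, and random stand-in data are purely illustrative: each stacked convolutional layer learns progressively more abstract features, roughly the pixel-to-whisker-to-cat progression described above, and the training loop hints at why learning over millions of real images takes so long.

```python
# Illustrative sketch only (PyTorch assumed; names and sizes are made up):
# a tiny convolutional network whose stacked layers learn hierarchical
# features, from low-level edges and textures up to object-level patterns.
import torch
import torch.nn as nn

class TinyCatNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, fur texture
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: whiskers, ears
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # high level: whole-object shapes
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # e.g. "cat" vs. "not cat"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Training loop on a random stand-in batch; a real run iterates over millions
# of labeled images for many epochs, which is exactly why training takes time.
model = TinyCatNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 64, 64)    # 8 random 64x64 RGB "images"
labels = torch.randint(0, 2, (8,))    # random cat / not-cat labels
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```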

For today’s developers, the objective is to enhance deep learning capabilities in order to extract insight as quickly and accurately as possible. Dr. Goh explains, “Enterprises want to learn fast. If you don’t want to take weeks or months to do learning because of the massive amount of data you have to ingest, you must scale your machine. This is where we come in. You have to scale the machine because you can’t scale humans.”

Some HPC systems are already making huge strides in deep learning. Libratus, an AI powered by the Pittsburgh Supercomputing Center’s Bridges supercomputer, recently took on four professional poker players in a “Brains vs. AI” competition. Over 20 days, the machine used strategic reasoning to assess risk, run lightning-fast data analytics, and optimize its decision-making. By the end, Libratus had bested its human opponents by more than $1.7 million in chips, with every human player finishing in the negative.

Deep learning systems require an order-of-magnitude increase in floating point performance compared to traditional HPC workloads, and delivering ever-increasing GPU capacity is critical to the massively parallel processing performance and scalability they need. HPE’s deep learning platforms feature NVIDIA Tesla GPUs, which are well suited to deep learning because of their high single-precision floating point performance, a critical factor for deep neural networks, particularly during training. Leveraging deep neural networks and cost-effective compute platforms for inference helps promote data fusion, reduces training time, and enables ultra-scale real-time data analytics. Investing in a powerful deep learning infrastructure is key to improving time-to-insight and accelerating discovery across sectors including technology, life sciences, economics, government, and more.
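As a rough illustration of why GPU floating point throughput matters, the sketch below (again assuming PyTorch, with an arbitrary stand-in model and batch) runs a single training step on an NVIDIA GPU when one is available, so the dense single-precision math of the forward and backward passes executes on the accelerator rather than the CPU.

```python
# Illustrative sketch only: run one training step on a GPU when available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A deliberately small stand-in model; real deep learning models are far larger.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A large batch of stand-in images; batched dense math is where GPUs shine.
images = torch.randn(256, 3, 64, 64, device=device)
labels = torch.randint(0, 2, (256,), device=device)

loss = loss_fn(model(images), labels)  # forward pass (single-precision FP32 by default)
loss.backward()                        # backward pass dominates the FLOP count
optimizer.step()
```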

Don’t miss a second of the AI revolution—check out the GPU Technology Conference (GTC) happening May 8-11 in Silicon Valley, and follow us on Twitter at @HPE_HPC for the latest news and updates.


Bill Mannel
Vice President and GM HPE Servers
Hewlett Packard Enterprise

Twitter: @Bill_Mannel
LinkedIn: Bill-Mannel

 


About the Author

Bill Mannel

As Vice President and General Manager of HPC and AI Segment Solutions in HPE’s Data Center Infrastructure Group, I lead worldwide business and portfolio strategy and execution for the group’s fastest-growing market segments, which include the recent SGI acquisition and the HPE Apollo portfolio.
