
The next wave of AI won’t happen without supercomputing

The first wave of AI growth—characterized by small-scale proof-of-concept implementations—is giving way to new cycles of larger-scale deployments, evolved application sets, and use of AI techniques in production to drive real business decisions. But this next wave can’t happen without supercomputing.

Adoption of artificial intelligence (AI) has exploded in the last few years, with virtually every kind of enterprise rushing to integrate and deploy AI methodologies in their core business practices.

The first wave of fast AI growth—characterized by small-scale proofs of concept and one-off machine or deep learning implementations—is beginning to give way to a new cycle of AI adoption. In this next wave, we’ll see larger-scale deployments, a more evolved set of applications, and a concerted effort to apply AI techniques in production to solve real business problems and drive real business decisions.

But the next wave of AI growth won’t—actually, can’t—happen without supercomputing.

Artificial intelligence is a supercomputing problem. The digital universe is doubling in size every two years, headed for 175 zettabytes by 2025, and AI applications thrive on massive datasets. There’s also a great convergence occurring between AI and simulation. Organizations that perform simulation are increasingly adding machine or deep learning to their workflows, and vice versa. In fact, these two methodologies are becoming so entangled we’ve even seen the emergence of a completely new technique, cognitive simulation, where large-scale simulations get a lot smarter with embedded machine learning algorithms. Overall, workflows are becoming increasingly heterogeneous, blending data analytics, machine or deep learning, and AI with traditional HPC simulation.
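To make the idea of cognitive simulation concrete, here is a minimal sketch of one common pattern: train a neural-network surrogate on the outputs of an expensive simulation kernel, then call the cheap surrogate inside the simulation loop. The toy kernel, the scikit-learn model, and the fallback logic are illustrative assumptions, not any particular production workflow.

```python
# Hypothetical sketch of "cognitive simulation": a small neural-network
# surrogate learns an expensive simulation kernel, then stands in for it
# inside the main loop. The toy "kernel" below is illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_kernel(x):
    """Stand-in for a costly physics calculation (illustrative)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

# 1. Sample the kernel offline to build training data.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5000, 2))
y_train = expensive_kernel(X_train)

# 2. Train the surrogate (the "embedded machine learning algorithm").
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
surrogate.fit(X_train, y_train)

# 3. Inside the simulation loop, call the cheap surrogate instead of the
#    expensive kernel, falling back only when inputs leave the trained range.
def simulation_step(state):
    in_range = np.all(np.abs(state) <= 1.0, axis=1)
    out = np.empty(len(state))
    if in_range.any():
        out[in_range] = surrogate.predict(state[in_range])
    if (~in_range).any():
        out[~in_range] = expensive_kernel(state[~in_range])
    return out

print(simulation_step(rng.uniform(-1, 1, size=(10, 2))))
```

The design choice worth noting is the hybrid loop: the learned model handles the bulk of the calls cheaply, while the original solver remains available where the surrogate was never trained.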

Add to this the growing interest in distributed machine and deep learning, that is, cutting training time by parallelizing the training computation across multiple machines. To illustrate just how much computational power training needs, consider this example: a recent research report from Digital Catapult found that training a deep neural network on a dataset of 1.28 million images requires a minimum of roughly an exaflop of computation.
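For readers who want to see what data-parallel distributed training looks like in practice, the sketch below uses PyTorch’s DistributedDataParallel to shard a dataset across processes and average gradients after each backward pass. The tiny model, random data, and launch settings are placeholders, not the setup used in the Digital Catapult study.

```python
# Minimal data-parallel training sketch with PyTorch DistributedDataParallel.
# Model, dataset, and launch settings are placeholders; in practice this is
# launched with torchrun across many nodes, so each rank trains on a shard
# of the data and gradients are averaged automatically after backward().
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="gloo")          # "nccl" on GPU clusters

    # Toy dataset standing in for a large image corpus.
    data = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(data)               # shards the data per rank
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    model = DDP(torch.nn.Linear(32, 10))             # placeholder model
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                     # reshuffle shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()          # gradients all-reduced here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. torchrun --nproc_per_node=4 this_script.py
```

At exaflop scale, the only way to finish in reasonable wall-clock time is to spread the work across many nodes, which is exactly what the sampler and the gradient all-reduce in this sketch do.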

Processing and analyzing ever-growing volumes of data, supporting heterogeneous workloads, and enabling distributed training methods all require increasingly powerful and capable computing architectures.

Supercomputers are tightly integrated, highly scalable, zero-waste architectures that offer the right technology for each individual task to enable maximum application efficiency and eliminate computational bottlenecks. They excel at ingesting and moving massive volumes of data. Some supercomputers available today have embraced heterogeneity in the face of converging workloads, and now they’re being purpose-built so that everything from the processors to the software ecosystem is geared to allow diverse workflows to run on a single system simultaneously. Lastly, the compute power offered by supercomputers makes it possible to train larger neural networks using bigger training sets in shorter periods of time.

It boils down to this: Supercomputers are the only machines that offer the tools and technologies that organizations will inevitably need as they embrace the next wave of AI growth.

Today, we see some of our most progressive customers in a diverse range of industries using Cray supercomputers for their AI problems—from automotive and autonomous vehicles, aerospace, pharmaceuticals, and healthcare to insurance, weather, defense, and oil and gas.

For example, PGS, a seismic exploration company, recently applied machine learning optimization techniques to determine the velocity model in a full waveform inversion seismic imaging workload. PGS’s Abel supercomputer, a Cray® XC40™ system, used a very simple initial model to learn how to best steer refracted and diving waves for deep model updates and reproduce the sharp salt boundaries typical in the Gulf of Mexico. Their success story demonstrates that supercomputers, which in the oil and gas space are primarily used for traditional modeling and simulation, are equally well-positioned to take on extremely large machine learning problems.
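The core of full waveform inversion is an optimization loop that repeatedly updates a velocity model to shrink the misfit between simulated and observed seismic data. The sketch below shows that loop in its simplest gradient-descent form, with a toy forward model and a finite-difference gradient; PGS’s actual wave solver, adjoint-state gradients, and machine-learning optimization layer are far more sophisticated, so treat this purely as an illustration of the structure.

```python
# Conceptual sketch of the optimization loop at the heart of full waveform
# inversion: iteratively update a velocity model to reduce the misfit between
# simulated and observed data. The forward model and finite-difference
# gradient are toy placeholders, not PGS's production workflow.
import numpy as np

def forward_model(velocity):
    """Placeholder wave simulator: maps a velocity model to synthetic data."""
    return np.convolve(velocity, np.array([0.25, 0.5, 0.25]), mode="same")

def misfit_and_gradient(velocity, observed, eps=1e-3):
    """Least-squares misfit and a finite-difference gradient (illustrative;
    real FWI uses adjoint-state methods to get the gradient in one pass)."""
    residual = forward_model(velocity) - observed
    misfit = 0.5 * np.sum(residual ** 2)
    grad = np.zeros_like(velocity)
    for i in range(len(velocity)):
        bumped = velocity.copy()
        bumped[i] += eps
        grad[i] = (0.5 * np.sum((forward_model(bumped) - observed) ** 2) - misfit) / eps
    return misfit, grad

observed = forward_model(np.linspace(1500.0, 4500.0, 64))  # data from a "true" model
velocity = np.full(64, 2000.0)                             # very simple initial model

for it in range(200):
    misfit, grad = misfit_and_gradient(velocity, observed)
    velocity -= 0.5 * grad                                 # gradient-descent update

print(f"final misfit: {misfit:.3e}")
```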

In general, we are moving away from viewing AI as a set of specific methodologies that can be folded into business operations and toward treating AI as an integrated workflow that is critical to the business. Whether the next wave of AI growth fully takes off will depend on computing infrastructures being powerful enough to support the growing size and complexity of AI use cases.

As this next wave takes hold, we’re excited to watch organizations use AI to refine their ability to move massive amounts of data within and between applications, adapt to changing market conditions, and scale up in step with data growth. These are the businesses that will succeed with AI. But they won’t get there without supercomputers.

This blog was originally published on cray.com and has been updated and republished here on HPE’s Advantage EX blog.



Advantage EX Experts
Hewlett Packard Enterprise

twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/info/hpc

About the Author

AdvEXperts

Our team of Hewlett Packard Enterprise Advantage EX experts helps you dive deep into high performance computing and supercomputing topics.