Advantage EX

Exascale supercomputers signal a new era of discovery

The announcement of Frontier, the U.S. Department of Energy’s exascale supercomputer, signals the beginning of a new era of computing. New capabilities for a new set of workloads are coming together to create the next major inflection point for computing: the Exascale Era.


The road to exascale computing has been a long journey. Now, we find ourselves at the beginning of a new era.

Recently, Cray and the U.S. Department of Energy (DOE) announced Frontier―an exascale supercomputer being developed for Oak Ridge National Laboratory (ORNL). Slated for delivery in 2021, Frontier is expected to be the world’s most powerful computer. It will advance science and innovation far beyond anything currently possible.

For us at Cray, the announcement of Frontier is a thrilling―and humbling―moment. The system is the third contract win for our new Shasta™ architecture and Cray Slingshot™ interconnect. It validates our belief in and commitment to our Shasta technology. But more importantly, Frontier underscores why our company exists. Supercomputers in and of themselves don’t change the world. But put in the hands of scientists, they can and do.

Building the tools that free institutions and individuals to solve problems that affect the health, safety, security, and longevity of our world is an honor―and has defined Cray since the beginning.

But this time it’s different. With the crossing of the exascale threshold, we’re entering a new era of computing. Why? Because the questions have changed, the workloads have changed, and the kinds of organizations doing the asking have changed. How we compute must change, too.

What makes exascale an era

Exascale is more than a machine or a speed milestone. It’s about new capabilities for a new set of workloads coming together to create the next major inflection point for computing.

We can attribute this transformation to several fundamental shifts.

First, macro trends in research and enterprise are driving a shift to data-intensive computing. Organizations of all sizes are grappling with explosive data growth, which presents a tremendous opportunity for those that can effectively harness it for new discovery, innovation, and insight. We see this demand in advanced research labs, where it is creating an explosion of HPC and AI workloads. And we see these same workloads being integrated into critical applications and processes as digital transformation becomes a business imperative.

In response to these trends, workloads are fusing. AI, analytics, IoT, simulation, and modeling are all converging into one business-critical workflow―a workflow that must operate at extreme scale and often in real-time.

This combination of data- and compute-intensive workloads―and the flexibility to bring simulation, modeling, analytics, and AI workflows together―exceeds the capabilities of today’s datacenter infrastructure.

Together, these realities are forcing our industry to rethink compute, networking, software, and storage architectures to deliver an infrastructure ready for this new era.

With the demise of Moore’s Law, we can no longer take a brute-force approach or simply add hundreds or thousands of virtual servers in the cloud to solve the problem. We need a more intelligent and purposeful approach that brings the best of HPC and the cloud together to deliver a step-function increase in capability.

This new capability will take research and enterprise computing beyond super and into the Exascale Era.

Characterizing exascale technology

Technologically, the Exascale Era can be characterized by these three things:

  1. Flexible compute architectures to support a variety of processor, accelerator, and datacenter types
  2. System software stacks that bring together the performance and scalability of HPC with the flexibility, modularity, and user productivity of the cloud in one system
  3. Interconnects that unite performance, intelligence, and interoperability

We saw signs of the coming changes and began developing Shasta and Slingshot more than five years ago in response.

Our Shasta compute architecture is an entirely new design, built from the ground up to address the needs of exascale-era workloads. It supports a diversity of processor technologies, supports the converged use of analytics, AI, modeling, and simulation workloads, eliminates the distinction between supercomputers and clusters, and fuses HPC and AI workflows with the productivity of the cloud.

The Slingshot interconnect is different from any interconnect or fabric we, or anyone else, has ever built. In addition to high speed and low latency, Slingshot incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility.
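To give a flavor of what congestion-aware adaptive routing means, here is a deliberately simplified sketch. Slingshot’s real mechanisms are implemented in hardware and are far more sophisticated; the class, path names, and load model below are illustrative assumptions, not HPE’s design. The core idea shown is that a router with several candidate paths to a destination steers each packet toward the least-loaded path, so a burst of traffic on one path does not stall unrelated workloads.

```python
# Toy sketch of congestion-aware adaptive routing (illustrative only;
# Slingshot's actual routing and congestion management are proprietary
# hardware mechanisms). The router tracks an estimated load per candidate
# path and sends each packet over the least congested one.

class AdaptiveRouter:
    def __init__(self, paths):
        # load[p] is a running estimate of outstanding traffic on path p
        self.load = {p: 0 for p in paths}

    def route(self, packet_size):
        # Pick the least-loaded candidate path for this packet
        path = min(self.load, key=self.load.get)
        self.load[path] += packet_size
        return path

    def drain(self, path, amount):
        # Model packets leaving the network, freeing capacity on a path
        self.load[path] = max(0, self.load[path] - amount)

router = AdaptiveRouter(["path_a", "path_b", "path_c"])
choices = [router.route(10) for _ in range(6)]
print(choices)  # equal-sized packets spread evenly across all three paths
```

In a static-routing fabric, the same flow would be pinned to one path and a hotspot there would back up everything behind it; a per-packet, load-sensitive choice like the one sketched here is what lets diverse workloads share the fabric without interfering with each other.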

Exascale as an inflection point

In the buzz about ORNL’s Frontier supercomputer and its sister exascale system, Argonne National Laboratory’s Aurora supercomputer, a key point gets easily sidelined.

Exascale technology embraces a new set of capabilities that extends far beyond these landmark supercomputers and is a technology inflection point for every datacenter. Like other major technological turning points―Unix to Linux, the cloud, big data, AI, and more―exascale may get its start with a few early adopters. But as with these other technologies, exascale computing capabilities will become mainstream because the market demands it.

It will become mainstream because commercial and government institutions of all sizes are dealing with the same levels of scale and data that were once solely the domain of research computing. IDC predicts that worldwide data will grow 61% a year to reach 175 zettabytes by 2025. Everyone, from small private companies to the largest government labs, will be seeking ways to turn this ocean of data into actionable insight using new combinations of modeling, simulation, analytics, big data, IoT, and AI.

It’s no longer business as usual. The new requirements created by digital transformation, explosive data growth, and converging workloads touch every organization. And every organization will need access to the next generation of compute, interconnect, software, and storage technologies with new exascale-driven capabilities to embrace this new era of possibility.

We designed the Shasta hardware and software architecture and Slingshot interconnect to drive this inflection point. Their underlying capabilities will accelerate transformation by removing barriers to new workflow creation―and power discovery and innovation across every industry and field of inquiry for years to come.

So to the Exascale Era we say, let’s get started!


This blog was originally published on cray.com and has been updated and republished here on HPE’s Advantage EX blog.



Brandon Draeger
Hewlett Packard Enterprise

twitter.com/brandondraeger
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/info/hpc

About the Author


Brandon leads the Compute Product Marketing teams for HPE and joined the company in January 2020 as part of the Cray acquisition. Prior to Cray, Brandon held leadership roles in engineering, product management, marketing, and strategy at Intel, Dell, and Symantec.
