
Exascale supercomputers signal a new era of discovery

The announcement of Frontier, the U.S. Department of Energy's exascale supercomputer, signals the beginning of a new era of computing. New capabilities for a new set of workloads are coming together to create the next major inflection point for computing: the Exascale Era.


The road to exascale computing has been a long journey. Now, we find ourselves at the beginning of a new era.

Recently, Cray and the U.S. Department of Energy (DOE) announced Frontier, an exascale supercomputer being developed for Oak Ridge National Laboratory (ORNL). Slated for delivery in 2021, Frontier is expected to be the world's most powerful computer. It will advance science and innovation far beyond anything currently possible.

For us at Cray, the announcement of Frontier is a thrilling (and humbling) moment. The system is the third contract win for our new Shasta™ architecture and Cray Slingshot™ interconnect. It validates our belief in and commitment to our Shasta technology. But more importantly, Frontier underscores why our company exists. Supercomputers in and of themselves don't change the world. But put in the hands of scientists, they can and do.

Building the tools that free institutions and individuals to solve problems affecting the health, safety, security, and longevity of our world is an honor, and it has defined Cray since the beginning.

But this time it's different. With the crossing of the exascale threshold, we're entering a new era of computing. Why? Because the questions have changed, the workloads have changed, and the kinds of organizations doing the asking have changed. How we compute must change, too.

What makes exascale an era

Exascale is more than a machine or a speed milestone. It's about new capabilities for a new set of workloads coming together to create the next major inflection point for computing.

We can attribute this transformation to several fundamental shifts.

First, macro trends in research and enterprise are driving a shift to data-intensive computing. We're seeing organizations of all sizes grappling with explosive data growth, which presents a tremendous opportunity for those that can effectively harness it for new discovery, innovation, and insight. We're seeing this demand in advanced research labs, where it is creating an explosion of HPC and AI workloads. And we're seeing these same workloads integrated into critical applications and processes as digital transformation becomes a business imperative.

In response to these trends, workloads are fusing. AI, analytics, IoT, simulation, and modeling are all converging into one business-critical workflow, one that must operate at extreme scale and often in real time.

This combination of data- and compute-intensive workloads, and the flexibility to bring simulation, modeling, analytics, and AI workflows together, exceeds the capabilities of today's datacenter infrastructure.

Together, these realities are forcing our industry to rethink compute, networking, software, and storage architectures to deliver an infrastructure ready for this new era.

With the demise of Moore's Law, we can no longer apply a brute-force approach or simply add hundreds or thousands of virtual servers in the cloud to solve the problem. We need a more intelligent and purposeful approach, one that brings the best of HPC and the cloud together to deliver a step-function increase in capability.

This new capability will take research and enterprise computing beyond super and into the Exascale Era.

Characterizing exascale technology

Technologically, the Exascale Era can be characterized by these three things:

  1. Flexible compute architectures to support a variety of processor, accelerator, and datacenter types
  2. System software stacks that bring together the performance and scalability of HPC with the flexibility, modularity, and user productivity of the cloud in one system
  3. Interconnects that unite performance, intelligence, and interoperability

We saw signs of the coming changes and began developing Shasta and Slingshot more than five years ago in response.

Our Shasta compute architecture is an entirely new design, built from the ground up to address the needs of exascale-era workloads. It supports a diversity of processor technologies, supports the converged use of analytics, AI, modeling, and simulation workloads, eliminates the distinction between supercomputers and clusters, and fuses HPC and AI workflows with the productivity of the cloud.
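
To make that flexibility a bit more concrete, here is a minimal, purely hypothetical Python sketch of the idea behind it: one system placing mixed workloads onto a mix of processor types. Every name in it (Node, Job, schedule) is invented for this illustration; this is not Shasta's actual software.

```python
# Hypothetical illustration of one Exascale Era idea: a single system
# scheduling mixed workloads (simulation, AI, analytics) onto a mix of
# processor types. Names and fields are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # e.g., "cpu" or "gpu"
    free: bool = True

@dataclass
class Job:
    name: str
    needs: str         # preferred node kind for this workload

def schedule(jobs, nodes):
    """Greedily place each job on a free node of the kind it prefers."""
    placements = {}
    for job in jobs:
        for node in nodes:
            if node.free and node.kind == job.needs:
                node.free = False
                placements[job.name] = node.name
                break
    return placements

nodes = [Node("n0", "cpu"), Node("n1", "gpu"), Node("n2", "gpu")]
jobs = [Job("climate-sim", "cpu"), Job("ai-training", "gpu")]
print(schedule(jobs, nodes))   # {'climate-sim': 'n0', 'ai-training': 'n1'}
```

The point of the sketch is simply that simulation and AI jobs coexist in one system and land on the hardware best suited to them, rather than living in separate clusters.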

The Slingshot interconnect is different from any interconnect or fabric we, or anyone else, has ever built. In addition to high speed and low latency, Slingshot incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility.
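
To illustrate the general concept (not Slingshot's actual algorithm), congestion-aware adaptive routing can be sketched in a few lines of Python: at each hop, a switch steers traffic onto the least-loaded of several equal-cost output ports rather than a fixed one. The Port class and queue_depth field below are invented stand-ins for real switch telemetry.

```python
# A minimal toy sketch of congestion-aware adaptive routing: at each
# hop, pick the least-loaded of several equal-cost output ports.
import random

class Port:
    def __init__(self, name):
        self.name = name
        self.queue_depth = 0   # stand-in for live congestion telemetry

def adaptive_route(candidate_ports):
    """Pick the output port with the shallowest queue; break ties randomly."""
    least = min(p.queue_depth for p in candidate_ports)
    return random.choice([p for p in candidate_ports if p.queue_depth == least])

# Example: three equal-cost paths toward the destination; p0 is busy
# with a bulk data transfer, so traffic adaptively avoids it.
ports = [Port("p0"), Port("p1"), Port("p2")]
ports[0].queue_depth = 8
chosen = adaptive_route(ports)
chosen.queue_depth += 1
print(f"routed via {chosen.name}")   # p1 or p2, never the congested p0
```

In a real fabric this decision happens per packet in hardware; the sketch only conveys why such routing lets bursty analytics traffic and latency-sensitive simulation traffic share one network.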

Exascale as an inflection point

In the buzz about ORNL's Frontier supercomputer and its sister exascale system, Argonne National Laboratory's Aurora supercomputer, a key point gets easily sidelined.

Exascale technology embraces a new set of capabilities that extend far beyond these landmark supercomputers, and it represents a technology inflection point for every datacenter. Like other major technological turning points (Unix to Linux, the cloud, big data, AI, and more), exascale may get its start with a few early adopters. But as with those technologies, exascale computing capabilities will become mainstream because the market demands it.

It will become mainstream because commercial and government institutions of all sizes are dealing with the same levels of scale and data that were once solely the domain of research computing. IDC predicts that worldwide data will grow 61% to 175 zettabytes by 2025. Everyone, from small private companies to the largest government labs, will be seeking ways to turn this ocean of data into actionable insight using new combinations of modeling, simulation, analytics, big data, IoT, and AI.

It's no longer business as usual. The new requirements created by digital transformation, explosive data growth, and converging workloads touch every organization. And every organization will need access to the next generation of compute, interconnect, software, and storage technologies, with new exascale-driven capabilities, to embrace this new era of possibility.

We designed the Shasta hardware and software architecture and the Slingshot interconnect to drive this inflection point. Their underlying capabilities will accelerate transformation by removing barriers to new workflow creation, and they will power discovery and innovation across every industry and field of inquiry for years to come.

So to the Exascale Era we say: let's get started!


This blog was originally published on cray.com and has been updated and republished here on HPE's Advantage EX blog.



Brandon Draeger
Hewlett Packard Enterprise

twitter.com/brandondraeger
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/info/hpc

About the Author


Brandon leads the Compute Product Marketing teams for HPE and joined the company in January 2020 as part of the Cray acquisition. Prior to Cray, Brandon held leadership roles in engineering, product management, marketing, and strategy at Intel, Dell, and Symantec.