Exascale supercomputers signal a new era of discovery
The announcement of Frontier, the U.S. Department of Energy's exascale supercomputer, signals the beginning of a new era of computing. New capabilities for a new set of workloads are coming together to create the next major inflection point for computing: the Exascale Era.
The road to exascale computing started out as a journey. Now, we find ourselves at the beginning of a new era.
Recently, Cray and the U.S. Department of Energy (DOE) announced Frontier, an exascale supercomputer being developed for Oak Ridge National Laboratory (ORNL). Slated for delivery in 2021, Frontier is expected to be the world's most powerful computer. It will advance science and innovation far beyond anything currently possible.
For us at Cray, the announcement of Frontier is a thrilling, and humbling, moment. The system is the third contract win for our new Shasta™ architecture and Cray Slingshot™ interconnect. It validates our belief in and commitment to our Shasta technology. But more importantly, Frontier underscores why our company exists. Supercomputers in and of themselves don't change the world. But put in the hands of scientists, they can and do.
Building the tools that free institutions and individuals to solve problems affecting the health, safety, security, and longevity of our world is an honor, and it has defined Cray since the beginning.
But this time it's different. With the crossing of the exascale threshold, we're entering a new era of computing. Why? Because the questions have changed, the workloads have changed, and the kinds of organizations doing the asking have changed. How we compute must change, too.
What makes exascale an era
Exascale is more than a machine or a speed milestone. It's about new capabilities for a new set of workloads coming together to create the next major inflection point for computing.
We can attribute this transformation to several fundamental shifts.
First, macro trends in research and enterprise are driving a shift to data-intensive computing. Organizations of all sizes are grappling with explosive data growth, which presents a tremendous opportunity for those that can effectively harness it for new discovery, innovation, and insight. We're seeing this demand in advanced research labs, where it is creating an explosion of HPC and AI workloads. And we're also seeing these same workloads integrated into critical applications and processes as digital transformation becomes a business imperative.
In response to these trends, workloads are fusing. AI, analytics, IoT, simulation, and modeling are converging into one business-critical workflow, a workflow that must operate at extreme scale and often in real time.
This combination of data- and compute-intensive workloads, and the flexibility to bring simulation, modeling, analytics, and AI workflows together, exceeds the capabilities of today's datacenter infrastructure.
Together, these realities are forcing our industry to rethink compute, networking, software, and storage architectures to deliver an infrastructure ready for this new era.
With the demise of Moore's Law, we can no longer apply a brute-force approach or simply add hundreds or thousands of virtual servers in the cloud to solve the problem. We need a more intelligent and purposeful approach, one that brings the best of HPC and the cloud together to deliver a step-function increase in capability.
This new capability will take research and enterprise computing beyond super and into the Exascale Era.
Characterizing exascale technology
Technologically, the Exascale Era can be characterized by these three things:
- Flexible compute architectures to support a variety of processor, accelerator, and datacenter types
- System software stacks that bring together the performance and scalability of HPC with the flexibility, modularity, and user productivity of the cloud in one system
- Interconnects that unite performance, intelligence, and interoperability
We saw signs of the coming changes and began developing Shasta and Slingshot more than five years ago in response.
Our Shasta compute architecture is an entirely new design, built from the ground up to address the needs of exascale-era workloads. It supports a diversity of processor technologies, supports the converged use of analytics, AI, modeling, and simulation workloads, eliminates the distinction between supercomputers and clusters, and fuses HPC and AI workflows with the productivity of the cloud.
The Slingshot interconnect is different from any interconnect or fabric we, or anyone else, has ever built. In addition to high speed and low latency, Slingshot incorporates intelligent features that enable diverse workloads to run simultaneously across the system. It includes novel adaptive routing, quality-of-service, and congestion management features while retaining full Ethernet compatibility.
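To give a flavor of what congestion-aware adaptive routing means in practice, here is a minimal illustrative sketch in Python: a switch picks, among the candidate output ports toward a destination, the one with the lowest locally estimated load. This is a generic textbook technique, not Slingshot's actual (proprietary) algorithm; the function names and the queue-depth model are assumptions for illustration only.

```python
# Illustrative sketch of congestion-aware adaptive routing.
# A switch maintains a per-port load estimate (here, a queue-depth count)
# and steers each packet to the least-loaded candidate port. Names and the
# load model are hypothetical, not taken from any real interconnect API.

def pick_port(candidate_ports, load_estimates):
    """Return the candidate output port with the lowest estimated load."""
    return min(candidate_ports, key=lambda p: load_estimates[p])

# Example: three minimal-path ports toward a destination, with the
# queue-depth estimates a switch might track locally.
loads = {0: 12, 1: 3, 2: 7}
print(pick_port([0, 1, 2], loads))  # port 1 has the shallowest queue
```

The point of the sketch is the decision structure: routing adapts per packet to observed load rather than following a single fixed path, which is what lets mixed workloads share a fabric without one congested flow stalling the others.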
Exascale as an inflection point
In the buzz about ORNL's Frontier supercomputer and its sister exascale system, Argonne National Laboratory's Aurora supercomputer, a key point gets easily sidelined.
Exascale technology embraces a new set of capabilities that extends far beyond these landmark supercomputers and marks a technology inflection point for every datacenter. Like other major technological turning points (Unix to Linux, the cloud, big data, AI), exascale may get its start with a few early adopters. But as with those other technologies, exascale computing capabilities will become mainstream because the market demands it.
It will become mainstream because commercial and government institutions of all sizes are dealing with the same levels of scale and data that were once solely the domain of research computing. IDC projects that worldwide data will grow at a 61% compound annual rate, reaching 175 zettabytes by 2025. Everyone, from small private companies to the largest government labs, will be seeking ways to turn this ocean of data into actionable insight using new combinations of modeling, simulation, analytics, big data, IoT, and AI.
It's no longer business as usual. The new requirements created by digital transformation, explosive data growth, and converging workloads touch every organization. And every organization will need access to the next generation of compute, interconnect, software, and storage technologies with new exascale-driven capabilities to embrace this new era of possibility.
We designed the Shasta hardware and software architecture and the Slingshot interconnect to drive this inflection point. Their underlying capabilities will accelerate transformation by removing barriers to new workflow creation, and will power discovery and innovation across every industry and field of inquiry for years to come.
So to the Exascale Era we say: let's get started!
This blog was originally published on cray.com and has been updated and republished here on HPE's Advantage EX blog.
Brandon Draeger
Hewlett Packard Enterprise
twitter.com/brandondraeger
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/info/hpc