Servers: The Right Compute

HPE Leading the Charge to Exascale Computing

Mike Vildibill

Exascale computing is the next generation of HPC and promises to allow us to process data, run systems, and solve problems at a whole new scale. HPE is leading the charge.

Gartner recently estimated that 20.8 billion connected devices will be in use by 2020. This rapidly expanding Internet of Things is expected to generate an unprecedented volume of data, arriving faster than existing computing architectures can process, store, manage, and secure it.

Even today's most powerful computers can't sustain the pace of innovation needed to serve a globally connected market. It’s time for the next level of computing capabilities. It’s time for exascale.

Advanced computing is now vital to scientific progress, technological innovation, economic vitality, and national security. Nations across the globe, including the U.S., China, Japan, Russia, India, and the European Union, are making substantial investments in the next generation of high-performance computing (HPC) tools in order to gain a huge competitive advantage, uncover new insights, and solve their most crucial challenges. Exascale computing offers the potential to leverage technology to solve the most complex scientific issues, from precision medicine for cancer research, to astrophysics, to climate change and nuclear science.

To help facilitate the development of an exascale computer by the dawn of the next decade, the Department of Energy (DOE) released the PathForward initiative as a central element of the Exascale Computing Program (ECP) Hardware Technology effort. The initiative is geared toward achieving a capable exascale system: a supercomputer that can solve scientific problems fifty times faster, and of far greater complexity, than the largest computing systems available today.

Hewlett Packard Enterprise (HPE) has been selected as one of the four vendors to enter Phase I of Project PathForward. This grant by the DOE is a strong validation of HPE’s strategy, vision, and ability to successfully lead the charge to exascale computing. The DOE’s funding will help us achieve the goal of building the next-generation machine that will enable our customers to propel their current HPC infrastructure into the future.

While the capabilities offered by exascale computing will undoubtedly drive national competitiveness and improvements to business productivity, this shift also presents significant technology challenges. The chief challenge is application efficiency at scale: in current HPC systems, performance returns diminish dramatically as applications scale out. Delivering exascale-level system performance within the DOE’s defined power envelope of 20-30 megawatts is no simple task, and raises serious concerns about cost and environmental impact. Processing multi-threaded algorithms and analyzing large data volumes at exascale speeds presents tremendous data movement, memory and fabric bandwidth, and latency issues. Lastly, extreme-scale systems are often one-of-a-kind and require extensive customization or optimization of applications before they can be used effectively, making it highly complex and cost-prohibitive to manage systems of this scale on a daily basis.
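Two of the challenges above can be made concrete with a little arithmetic. The diminishing-return effect is captured by the classic Amdahl's law model (my own illustration, not something the post cites), and the power envelope directly implies a required energy efficiency:

```python
# A minimal sketch of two exascale challenges: Amdahl's-law diminishing
# returns as node counts grow, and the energy efficiency implied by the
# DOE's 20 MW power envelope. Numbers here are illustrative only.

def amdahl_speedup(parallel_fraction, n):
    """Ideal speedup on n processors when only `parallel_fraction`
    of the work parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Even a 99.9%-parallel application tops out near 1000x speedup,
# no matter how many nodes an exascale machine adds.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} nodes -> {amdahl_speedup(0.999, n):7.1f}x speedup")

# Sustaining 1 exaflop (1e18 FLOP/s) inside a 20 MW envelope demands
# about 50 GFLOP/s per watt of energy efficiency.
required_gflops_per_watt = 1e18 / 20e6 / 1e9
print(f"required efficiency: {required_gflops_per_watt:.0f} GFLOP/s per watt")
```

This is why the post stresses minimizing data movement: the serial and communication overheads that cap speedup, and the energy cost of moving bytes, dominate long before raw processor count does.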

There’s no denying that revolutionary new approaches to computing will be required in order to achieve exascale. In fact, HPE believes that it will require a complete rethinking of the fundamental architecture on which computers have been built for the last 60 years.

Architecture innovation

Systems that make memory – not processing – the core of the computer architecture will minimize data movement at scale. Inspired by the Machine, our largest research project, HPE is testing the concept of “memory-driven computing” as a way to rapidly increase computing efficiency and speed. Innovative system design approaches will integrate the latest memory, fabric, and processor technologies to break the dependencies that come with outdated, siloed protocols and address memory and fabric bandwidth constraints.

System technology innovation

The memory subsystem must be upgraded to enable more efficient data motion in and out of the compute engine, and HPE is aiming to transform the memory technology and packaging of current HPC systems. Because exascale machines will inevitably scale to hundreds of thousands of nodes, multi-dimensional all-to-all topologies like the HyperX will deliver reduced network latency through fewer switch hops. The commercialization of silicon photonics will drive the use of optical technologies as a path to energizing an exascale system’s fabric without exceeding the DOE’s power envelope. And emerging NVM technologies such as the memristor will provide the throughput to compete with DDR, but in a persistent and more energy-efficient way.
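The latency benefit of an all-to-all topology like HyperX comes from its routing property: switches sit on a multi-dimensional grid, and all switches along any one dimension are directly linked, so a minimal route needs at most one hop per dimension. A rough sketch of that hop count (my own illustration, with made-up coordinates):

```python
def hyperx_hops(src, dst):
    """Minimal switch-to-switch hop count in a HyperX topology.

    Switches sit at integer coordinates on an L-dimensional grid, and
    switches sharing a line in any one dimension are fully connected,
    so a minimal route takes one hop per dimension where the
    coordinates differ (dimension-order routing).
    """
    return sum(s != d for s, d in zip(src, dst))

# A 3-D HyperX therefore has a network diameter of just 3 switch hops,
# regardless of how many switches populate each dimension.
print(hyperx_hops((0, 0, 0), (4, 0, 7)))  # 2 hops: dims 0 and 2 differ
print(hyperx_hops((1, 2, 3), (5, 6, 9)))  # 3 hops: the 3-D worst case
```

Keeping the worst case at a handful of hops, even across hundreds of thousands of nodes, is what lets this class of topology reduce network latency relative to deeper multi-level trees.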

Technology ecosystem innovation

As part of the Gen-Z Consortium, HPE is contributing to an industry-wide effort to create a new, open protocol that will increase the flexibility and workload optimization of system architectures. This open architecture is the first crucial step toward building a vibrant innovation ecosystem that will drive the industry to rethink how computing systems are built in order to achieve unprecedented levels of performance.

Exascale computing is the next generation of HPC and promises to allow us to process data, run systems, and solve problems at a whole new scale. HPE is leading the charge to this next wave of computing capabilities, and helping to bring new technologies to the mainstream which will deliver better systems balance and greater computing efficiency than ever before.

Don’t miss a minute of this exciting journey – follow HPE on Twitter @HPE_HPC for the latest news and updates from the path to exascale.


Mike Vildibill
VP Exascale Development, Federal Programs & HPC Storage groups
Hewlett Packard Enterprise

Twitter: @HPE_HPC
LinkedIn: mvildibill

About the Author

Mike Vildibill

As Vice President of HPE’s Exascale Development, Federal Programs and HPC Storage groups, I am responsible for product strategy, engineering and advanced technologies development. With 25 years of experience in HPC, I have held executive positions at Sun Microsystems (acquired by Oracle), Appro (acquired by Cray Inc.), and DataDirect Networks. My varied responsibilities have included product management, server product development, sales leadership, marketing, and research.