Servers & Systems: The Right Compute

How supercomputing storage is helping bring the energy of the stars to earth

Learn how HPE supercomputing storage technology is removing roadblocks for the brilliant scientific minds striving to bring the energy source of the stars to earth and make fusion power a reality.


Some supercomputer simulations generate unimaginable amounts of data.

Take the following, for example. A team at the Princeton Plasma Physics Laboratory (PPPL) ran a single simulation on Oak Ridge National Laboratory's Summit supercomputer that generated 200 PB of data. If you're more familiar with gigabytes, that equates to 200,000,000 GB. Two hundred million gigabytes of data!
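For readers who like to double-check the arithmetic, here is a minimal sketch of that unit conversion, using the decimal prefixes storage capacities are typically quoted in:

# 1 PB = 1,000 TB = 1,000,000 GB (decimal prefixes)
petabytes = 200
gigabytes = petabytes * 1_000_000
print(f"{petabytes} PB = {gigabytes:,} GB")  # -> 200 PB = 200,000,000 GB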

So why would somebody run a simulation that generates so much data to be stored and analyzed? It must be for something really important, right?

Well, it is for something really important: bringing the energy source of the stars to earth. Fusion, the nuclear reaction that powers the Sun and the stars, is a potential source of safe, non-carbon-emitting, continuous, and virtually limitless energy.

Harnessing fusion's power is the goal of the International Thermonuclear Experimental Reactor (ITER), which has been designed as the key experimental step between today's fusion research machines and tomorrow's fusion power plants.

As a 35-year project with 35 contributing countries, ITER has already been described as the most expensive science experiment of all time, the most complicated engineering project in human history, and one of the most ambitious human collaborations since the development of the International Space Station.

With increasing concerns over climate change, we need new, better ways to meet humankind's growing demand for energy. The benefits of fusion power make it an extremely attractive option worth pursuing:

  • No carbon emissions. The only by-products of fusion reactions are small amounts of helium, an inert gas which can be safely released without harming the environment.
  • Abundant fuels. Deuterium can be extracted from seawater, and tritium will be produced inside the power station from lithium, an element abundant in the earth's crust. Even with widespread adoption of fusion power stations, these fuel supplies would last for many thousands of years.
  • Energy efficiency. One kilogram of fusion fuel could provide the same amount of energy as 10 million kilograms of fossil fuel. A 1,000 megawatt fusion power station will need less than one tonne of fuel during a year's operation.
  • No long-lived radioactive waste. Unlike fission, the fusion reaction itself produces no radioactive waste by-product.
  • Safety. A large-scale nuclear accident is not possible in a fusion reactor. The amounts of fuel used in fusion devices are very small (about the weight of a postage stamp at any one time). Furthermore, as the fusion process is difficult to start and keep going, there is no risk of a runaway reaction which could lead to a meltdown. The fusion reactor simply stops when there is an issue.
  • Constant power generation. Fusion power plants will be designed to produce a continuous supply of large amounts of electricity. Once established in the market, costs are predicted to be broadly similar to other energy sources.

One million components, 10 million parts ... the ITER fusion reactor will be the largest and most powerful fusion device in the world. Designed to produce 500 megawatts of fusion power for 50 megawatts of input heating power (a power amplification ratio of 10) by the year 2035, it will take its place in history as the first fusion device on earth to create net energy.
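As a small illustration of the design target quoted above, the power amplification ratio (often written Q) is simply the fusion power produced divided by the input heating power; the sketch below just restates that arithmetic:

# Power amplification ratio implied by ITER's design figures quoted above
fusion_power_mw = 500    # designed fusion power output
heating_power_mw = 50    # input heating power
q = fusion_power_mw / heating_power_mw
print(f"Power amplification ratio Q = {q:.0f}")  # -> Q = 10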

ITER construction is underway now in France. On the ITER site, the machine is taking shape as components are delivered from factories on three continents. Approximately 2,500 workers are currently participating in onsite building, assembly, and installation activities.

The ITER machine under construction. The worker on the right illustrates its sheer size.

Fusion would be impossible without supercomputing

While today's petascale supercomputers have helped get fusion research to this point, the scientists working on making clean, abundant energy available for all humankind are constantly looking for even faster supercomputers to accelerate progress.

Earlier this year, the HPE-built Frontier supercomputer at ORNL reached a historic milestone when it broke through the exascale speed barrier and hit a full 1.1 exaflops, faster than the next seven systems on the Top500 list combined.

Here is what Amitava Bhattacharjee, principal investigator for Whole Device Modeling of Magnetically Confined Fusion Plasmas (WDMApp) in the Exascale Computing Project, said in a recent interview when asked what the new Frontier supercomputer will enable him to do:

“We eagerly look forward to Frontier because, for the first time in computing history, we will be able to attempt a whole device model of a tokamak plasma at high fidelity using equations that cover the entire domain of the plasma. The predictions we can make from these kinds of simulations are really important for ITER to achieve its highest potential.”

Where high-performance storage comes in

When it comes to data-intensive simulations, it is not only compute performance that matters but also the ability to store the simulation results in fast file systems for further analysis.

Circling back to the PPPL team, their simulation is being used to predict how ITER's fusion reactor needs to be constructed to remove the exhaust heat from the reactor's vacuum vessel.

Each of the team's ITER simulations consisted of 2 trillion particles and more than 1,000 time steps, requiring most of the Summit machine and one full day or longer to complete. The data generated by one simulation could total a whopping 200 PB, eating up nearly all of Summit's file system storage.

In a recent interview, simulation team leader C.S. Chang described the challenges on the storage side. 

“Summit's file system only holds 250 petabytes' worth of data for all the users. There is no way to get all this data out to the file system, and we usually have to write out some parts of the physics data every 10 or more time steps [and throw away the results of the other steps].”

This limitation on the storage side has proven challenging for the team, who often found new science in the data that was not saved in the first simulation. Chang also said that he would like to see reliable, large-compression-ratio data reduction technologies.
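To put those numbers in perspective, here is a rough back-of-envelope sketch based only on the figures quoted above (2 trillion particles, roughly 1,000 time steps, about 200 PB generated). The implied bytes-per-particle value and the tenfold reduction from writing every 10th step are our own illustrative estimates, not numbers from the PPPL team:

# Back-of-envelope estimate based on the figures quoted above. The derived
# values are illustrative assumptions, not numbers published by PPPL.
total_bytes = 200e15      # ~200 PB generated per simulation (decimal)
particles   = 2e12        # 2 trillion particles
time_steps  = 1000        # ~1,000 time steps

bytes_per_particle_per_step = total_bytes / (particles * time_steps)
print(f"~{bytes_per_particle_per_step:.0f} bytes per particle per time step")  # ~100

# Writing only every 10th time step, as Chang describes, cuts what actually
# reaches the file system roughly tenfold:
written_pb = (total_bytes / 10) / 1e15
print(f"~{written_pb:.0f} PB actually written per simulation")  # ~20 PB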

There is very good news for Chang and his team:

First, while the storage system of the Summit supercomputer “only” has 250 PB of usable capacity, the storage system of the Frontier supercomputer, based on Cray ClusterStor E1000 technology, offers nearly 700 PB of usable capacity to support even the most data-intensive simulations.

And second, HPE recently made data compression functionality available as an option for the Cray ClusterStor E1000 storage system to help cope with the relentless data explosion driven by the ever-increasing complexity of simulations.
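As a simple way to think about what such compression could mean in practice, the sketch below assumes a hypothetical 2:1 average compression ratio; that ratio is an assumption for illustration only, since the achievable ratio always depends on how compressible the data is:

# Effect of transparent compression on effective capacity. The 2:1 ratio is a
# hypothetical assumption for illustration; real ratios depend on the data.
usable_capacity_pb = 700      # Frontier's ClusterStor E1000 usable capacity (see above)
compression_ratio  = 2.0      # hypothetical average compression ratio

effective_capacity_pb = usable_capacity_pb * compression_ratio
print(f"Effective capacity at 2:1 compression: ~{effective_capacity_pb:.0f} PB")  # ~1400 PB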

But supercomputing storage is not just about the size of the file system. It also needs to write simulation results to the file system very fast or, if you are training large AI models at scale, read huge amounts of data at very high speeds. And of course, it needs to provide all of this in a very cost-effective way.
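To see why write bandwidth matters as much as raw capacity, here is a minimal sketch of how long it would take to land a 20 PB output at a few hypothetical aggregate write speeds. The bandwidth figures are round numbers chosen for illustration, not published ClusterStor E1000 specifications:

# Time needed to write a given amount of simulation output at different
# aggregate write bandwidths. Bandwidth values are hypothetical round numbers.
def hours_to_write(data_pb: float, bandwidth_tb_per_s: float) -> float:
    """Return the hours needed to write data_pb petabytes at the given speed."""
    seconds = (data_pb * 1000) / bandwidth_tb_per_s   # 1 PB = 1,000 TB
    return seconds / 3600

for bandwidth in (1, 5, 10):                          # TB/s, hypothetical
    print(f"20 PB at {bandwidth:2d} TB/s -> {hours_to_write(20, bandwidth):4.1f} hours")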

Check out how the Cray ClusterStor E1000 storage system meets the capacity and performance requirements of supercomputing-sized simulations.

And stay tuned for our next supercomputing storage blog, in which we will compare cost-effectiveness using a real-world example!

We wish all the brilliant scientists on the blue planet who are working to bring the energy source of the stars to earth the very best, and we are honored to make a small contribution to this critical initiative with our supercomputers and supercomputing storage systems.

Please follow these links if you want to learn more about HPE supercomputing or supercomputing storage.


Uli Plechschmidt
Hewlett Packard Enterprise

twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/hpc

About the Author

Uli Plechschmidt

Uli leads the product marketing function for high performance computing (HPC) storage. He joined HPE in January 2020 as part of the Cray acquisition. Prior to Cray, Uli held leadership roles in marketing, sales enablement, and sales at Seagate, Brocade Communications, and IBM.