How supercomputing storage is helping bring the energy of the stars to earth
Learn how HPE supercomputing storage technology is removing roadblocks for the brilliant scientific minds striving to bring the energy source of the stars to earth and making fusion power a reality.
Some supercomputer simulations generate unimaginable amounts of data.
Take the following, for example. A team at the Princeton Plasma Physics Laboratory (PPPL) ran a single simulation on Oak Ridge National Laboratory's Summit supercomputer that generated 200 PB of data. If you're more familiar with gigabytes, that equates to 200,000,000 GB. Two hundred million gigabytes of data!
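As a quick sanity check on that conversion (assuming decimal SI units, where 1 PB = 1,000 TB and 1 TB = 1,000 GB), a few lines of Python reproduce the figure:

```python
# Convert 200 PB to GB using decimal (SI) prefixes:
# 1 PB = 1,000 TB, 1 TB = 1,000 GB
petabytes = 200
gigabytes = petabytes * 1_000 * 1_000
print(f"{petabytes} PB = {gigabytes:,} GB")  # -> 200 PB = 200,000,000 GB
```

(Binary units, where 1 PiB = 1,048,576 GiB, would give a slightly larger number; storage vendors and the article use the decimal convention.)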
So why would someone run a simulation that generates so much data to be stored and analyzed? It must be for something really important, right?
Well, it is for something really important: bringing the energy source of the stars to earth. Fusion, the nuclear reaction that powers the Sun and the stars, is a potential source of safe, non-carbon-emitting, continuous, and virtually limitless energy.
Harnessing fusion's power is the goal of the International Thermonuclear Experimental Reactor (ITER), which has been designed as the key experimental step between today's fusion research machines and tomorrow's fusion power plants.
As a 35-year project with 35 contributing countries, ITER has already been described as the most expensive science experiment of all time, the most complicated engineering project in human history, and one of the most ambitious human collaborations since the development of the International Space Station.
With increasing concerns over climate change, we need new, better ways to meet humankind's growing demand for energy. The benefits of fusion power make it an extremely attractive option worth pursuing:
- No carbon emissions. The only by-products of fusion reactions are small amounts of helium, an inert gas which can be safely released without harming the environment.
- Abundant fuels. Deuterium can be extracted from sea water, and tritium will be produced inside the power station from lithium, an element abundant in the earth's crust. Even with widespread adoption of fusion power stations, these fuel supplies would last for many thousands of years.
- Energy efficiency. One kilogram of fusion fuel could provide the same amount of energy as 10 million kilograms of fossil fuel. A 1,000 megawatt fusion power station will need less than one tonne of fuel during a year's operation.
- Less radioactive waste than fission. The fusion reaction itself produces no radioactive waste by-product.
- Safety. A large-scale nuclear accident is not possible in a fusion reactor. The amounts of fuel used in fusion devices are very small (about the weight of a postage stamp at any one time). Furthermore, as the fusion process is difficult to start and keep going, there is no risk of a runaway reaction which could lead to a meltdown. The fusion reactor simply stops when there is an issue.
- Constant power generation. Fusion power plants will be designed to produce a continuous supply of large amounts of electricity. Once established in the market, costs are predicted to be broadly similar to other energy sources.
One million components, 10 million parts ... the ITER fusion reactor will be the largest and most powerful fusion device in the world. Designed to produce 500 megawatts of fusion power for 50 megawatts of input heating power (a power amplification ratio of 10) by the year 2035, it will take its place in history as the first fusion device on earth to create net energy.
ITER construction is underway now in France. On the ITER site, the machine is taking shape as components are delivered from factories on three continents. Approximately 2,500 workers are currently participating in onsite building, assembly, and installation activities.
Fusion would be impossible without supercomputing
While today's petascale supercomputers have helped get us to where we are today, the scientists working on making clean, abundant energy available for all humankind are constantly looking for even faster supercomputers to accelerate progress.
Earlier this year, the HPE-built Frontier supercomputer at ORNL reached a historic milestone when it broke through the exascale speed barrier and hit a full 1.1 exaflops, faster than the next seven systems on the Top500 list combined.
Here is what Amitava Bhattacharjee, principal investigator for Whole Device Modeling of Magnetically Confined Fusion Plasmas (WDMApp) in the Exascale Computing Project, said in a recent interview when asked what the new Frontier supercomputer will enable him to do:
"We eagerly look forward to Frontier because, for the first time in computing history, we will be able to attempt a whole device model of a tokamak plasma at high fidelity using equations that cover the entire domain of the plasma. The predictions we can make from these kinds of simulations are really important for ITER to achieve its highest potential."
Where high-performance storage comes in
When it comes to data-intensive simulations, it is not only compute performance that matters but also the ability to store the simulation results in fast file systems for further analysis.
Circling back to the PPPL team: its simulation is being used to predict how ITER's fusion reactor needs to be constructed to remove the exhaust heat from the reactor's vacuum vessel.
Each of the team's ITER simulations consisted of 2 trillion particles and more than 1,000 time steps, requiring most of the Summit machine and one full day or longer to complete. The data generated by one simulation could total a whopping 200 PB, eating up nearly all of Summit's file system storage.
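As a back-of-envelope illustration (our own arithmetic, not a figure from the article), those numbers imply roughly 100 bytes of saved state per particle per time step:

```python
# Back-of-envelope storage estimate for one full-resolution run.
# The input figures come from the article; the derived per-particle
# number is our own illustration.
total_bytes = 200e15   # 200 PB, decimal units
particles = 2e12       # 2 trillion particles
time_steps = 1_000     # ~1,000 time steps per simulation

bytes_per_particle_step = total_bytes / (particles * time_steps)
print(f"~{bytes_per_particle_step:.0f} bytes per particle per time step")
# -> ~100 bytes per particle per time step
```

A hundred bytes is only a dozen or so double-precision values per particle, which shows how quickly even a lean per-particle record multiplies into hundreds of petabytes at this scale.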
In a recent interview, simulation team leader C.S. Chang described the challenges on the storage side.
"Summit's file system only holds 250 petabytes' worth of data for all the users. There is no way to get all this data out to the file system, and we usually have to write out some parts of the physics data every 10 or more time steps [and throw away the results of the other steps]."
This limitation on the storage side has proven challenging for the team, who often found new science in the data that was not saved in the first simulation. Chang also said that he would like to see reliable, large-compression-ratio data reduction technologies.
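Chang's workaround of writing physics data only every 10 or more time steps amounts to simple output decimation. A minimal sketch of the idea (hypothetical; the real simulation's I/O layer is of course far more elaborate):

```python
def decimated_steps(total_steps: int, stride: int = 10) -> list[int]:
    """Time steps whose physics data would actually be written when
    saving only every `stride`-th step (hypothetical sketch)."""
    return [t for t in range(total_steps) if t % stride == 0]

written = decimated_steps(1_000, stride=10)
print(f"{len(written)} of 1,000 steps written, {1_000 - len(written)} discarded")
# -> 100 of 1,000 steps written, 900 discarded
```

The 900 discarded steps are exactly where the team later wished it had data, which is why larger file systems and data reduction matter so much here.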
There is very good news for Chang and his team:
First, while the storage system of the Summit supercomputer "only" has 250 PB of usable capacity, the storage system of the Frontier supercomputer, based on Cray ClusterStor E1000 technology, offers nearly 700 PB of usable capacity to support even the most data-intensive simulations.
Second, HPE recently made data compression functionality available as an option for the Cray ClusterStor E1000 storage system to help cope with the relentless data growth driven by the ever-increasing complexity of simulations.
But supercomputing storage is not just about the size of the file system. It also needs to write simulation results to the file system very fast and, if you are training large AI models at scale, read huge amounts of data at very high speeds. And of course, it needs to provide all of this in a very cost-effective way.
Check out how the Cray ClusterStor E1000 storage system meets the capacity and performance requirements of supercomputing-sized simulations.
And stay tuned for our next supercomputing storage blog, in which we will compare cost-effectiveness with a real-world example!
We wish all the brilliant scientists on the blue planet who are working on bringing the energy source of the stars to earth all the best, and we are honored to provide a small contribution to this critical initiative with our supercomputers and supercomputing storage systems.
Please follow these links if you want to learn more about HPE supercomputing or supercomputing storage.
Uli Plechschmidt
Hewlett Packard Enterprise
twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/hpc
Uli leads the product marketing function for high performance computing (HPC) storage. He joined HPE in January 2020 as part of the Cray acquisition. Prior to Cray, Uli held leadership roles in marketing, sales enablement, and sales at Seagate, Brocade Communications, and IBM.