NASA achieves optimal energy efficiency with its first modular supercomputer
NASA has adopted a novel approach to cooling that not only enhances data center performance but also conserves electricity and water. NASA's first modular supercomputer, Electra, is changing the game for researchers.
As organizations tackle increasingly complex and data-heavy challenges, high performance computing (HPC) systems are working overtime to quickly execute workloads and streamline data center operations. As a result, developers are striving to deploy a new breed of HPC system that combines extreme speed and density with superior energy efficiency. Organizations with large-scale data centers are beginning to adopt a new approach to energy usage, leveraging powerful and eco-friendly solutions to turbocharge operational performance.
As server and data center densities increase, organizations are turning to advanced cooling methods to bolster their HPC environments. Today, NASA is achieving new levels of water efficiency, power efficiency, and performance at the Modular Supercomputing Facility (MSF) at NASA's Ames Research Center in Silicon Valley. Built with technologies from Hewlett Packard Enterprise (HPE) and other partners, NASA's first module-based supercomputer, Electra, ranked 33rd on the November 2017 TOP500 list of the world's most powerful supercomputers. The MSF uses a combination of natural resources and adiabatic technology, consuming less than 10% of the energy used by traditional supercomputing facilities.
Optimizing energy efficiency for HPC
NASA has adopted a novel approach to cooling that not only enhances data center performance but also conserves electricity and water. Electra's new module employs a combination of outdoor air and adiabatic coolers on the roof to rapidly cool the system. More specifically, warm air is drawn through water-moistened pads, and as the water evaporates, the air is chilled and pushed out. Additionally, the new module adds four HPE E-cells to deliver even greater efficiency. An E-cell is a sealed unit that uses closed-loop cooling technology to release heated air outside the data center, ensuring 100% heat removal. By transporting facility-supplied water into the system, E-cells help to rapidly and continuously cool the system without compromising on performance or cost.
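The evaporative effect described above can be sketched numerically. A common textbook model for a direct evaporative (adiabatic) cooler estimates outlet air temperature from the dry-bulb temperature, the wet-bulb temperature, and a pad saturation effectiveness. This is a minimal illustration of that general principle, not NASA's or HPE's actual facility model; the 0.85 effectiveness and the example temperatures are assumptions.

```python
# Illustrative sketch (assumed figures, not NASA's facility model):
# estimating the outlet temperature of a direct evaporative cooler.
# Warm air drawn through water-moistened pads cools toward the
# wet-bulb limit as the water evaporates.

def adiabatic_outlet_temp(dry_bulb_c: float, wet_bulb_c: float,
                          effectiveness: float = 0.85) -> float:
    """Direct evaporative cooling: T_out = T_db - eff * (T_db - T_wb).

    `effectiveness` is the pad saturation efficiency (typically
    0.8-0.9 for modern media); 0.85 here is an assumed value.
    """
    if not 0.0 <= effectiveness <= 1.0:
        raise ValueError("effectiveness must be between 0 and 1")
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Example: on a 32 degC day with an 18 degC wet-bulb temperature,
# the pads could deliver supply air around 20 degC.
print(round(adiabatic_outlet_temp(32.0, 18.0), 1))
```

The key design consequence is that cooling capacity depends on the gap between dry-bulb and wet-bulb temperature, which is why this approach works especially well in dry climates.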
In its first year, Electra used 95% less water than a traditional data center environment, and it is expected to save 1,000,000 kilowatt-hours of energy and 1,300,000 gallons of water each year. These savings can provide users an additional 280 million hours of compute time each year, as well as an additional 685 million hours of compute time to augment its sister system, Pleiades.
According to Bill Thigpen, Chief of the Advanced Computing Branch at Ames' NASA Advanced Supercomputing (NAS) Facility, "This is a different way for NASA to do supercomputing in a cost-effective manner. It makes it possible for us to be flexible and add computing resources as needed, and we can save about $35 million, about half the cost of building another big facility." Thigpen is also Deputy Project Manager for NASA's High-End Computing Capability Project.
Deploying energy-optimized solutions
Electra's new module is based on the HPE SGI 8600, a scalable, high-density clustered supercomputer that utilizes liquid cooling to achieve maximum efficiency and substantial savings in energy usage. This leading-edge system is based on E-cells, each containing two 42U-high E-racks separated by a cooling rack. The four new E-cells comprise 1,152 nodes with dual 20-core Intel® Xeon® Gold 6148 processors, increasing Electra's theoretical peak performance from 1.23 petaflops to 4.79 petaflops. And with 24 racks (or 2,304 nodes), 78,336 total cores, and 368 terabytes of memory, Electra is engineered with the speed and robust compute capabilities to handle NASA's most challenging workloads.
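The performance jump above can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes the Xeon Gold 6148's nominal 2.4 GHz clock and 32 double-precision FLOPs per cycle per core (two AVX-512 FMA units); it is a rough estimate, not HPE's or TOP500's official calculation, and the exact figure depends on which clock rate is used.

```python
# Back-of-the-envelope check (assumed figures, not the official math):
# theoretical peak of the 1,152 new Skylake nodes added to Electra.

NODES = 1152
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 20      # Intel Xeon Gold 6148
CLOCK_HZ = 2.4e9           # nominal base clock (assumption)
FLOPS_PER_CYCLE = 32       # two AVX-512 FMA units, double precision

peak_flops = (NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
              * CLOCK_HZ * FLOPS_PER_CYCLE)
print(f"{peak_flops / 1e15:.2f} petaflops")
```

The result lands near 3.5 petaflops, consistent with the quoted increase from 1.23 to 4.79 petaflops for the system as a whole.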
Based on the success of the new module, NASA is considering an expansion of up to 16x the current capabilities of the modular environment. This effort would enable scientists and engineers nationwide to harness Electra for their research supporting NASA missions.
To learn more about the benefits of liquid cooling versus air cooling, I invite you to visit me on Twitter at @Bill_Mannel. And check out @HPE_HPC for the latest news and updates in HPC innovation.
If you want to learn more about HPC and advanced technologies at work today:
- Podcast: Inside story on HPC's role in the Bridges research project at Pittsburgh Supercomputing Center
- HPC, weather prediction, and how you know it's going to rain
- Exascale computing: The Space Race of our time
- Want to know the future of technology? Sign up for weekly insights and resources
Featured articles:
- Tune up your career in high-performance supercomputing