Discover the 10 new superlatives of HPC storage
The rapid adoption of AI is breaking legacy storage architectures both architecturally and economically. Learn how organizations of all sizes are adapting and setting new records with HPC storage from HPE.
Gartner once stated that “There will be no way to put the storage beast on a diet.” That was before the convergence of artificial intelligence (AI) and classic modeling and simulation (mod/sim) began demanding even higher storage performance at even higher capacities.
HPE saw this coming and took action. The acquisition of Cray not only added intellectual property across the software, compute, and interconnect layers, but also resulted in the creation of an HPC storage portfolio that enables users of AI and mod/sim workloads in organizations of all sizes to put the storage beast on an effective diet.
This blog shares 10 specific examples in 10 different categories where new benchmarks are being set for parallel HPC storage feeding data to supercomputers or clusters of rack servers running AI and mod/sim workloads.
1 & 2. The largest and fastest parallel file system
The Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy high-performance computing user facility, recently announced the specifications of its new Orion file system. Among other systems at OLCF, Orion will support the upcoming Frontier exascale supercomputer that will feature four AMD GPUs for each AMD CPU. Orion is based on Cray ClusterStor E1000 Storage Systems and as a hybrid file system features three storage tiers:
- Flash-based performance tier of 5,400 nonvolatile memory express (NVMe) drives providing 11.5 PB of capacity at peak read-write speeds of 10 TB/s
- Hard-disk-based capacity tier of 47,700 perpendicular magnetic recording drives providing 679 PB of capacity at peak read speeds of 5.5 TB/s and peak write speeds of 4.6 TB/s
- Flash-based metadata tier of 480 NVMe devices providing an additional capacity of 10 PB
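As a rough sanity check, the three Orion tiers quoted above can be totaled with a few lines of Python. This is purely illustrative arithmetic on the capacities stated in this post; note that the peak speeds are per tier and are not additive across tiers in practice.

```python
# Aggregate capacity of Orion's three tiers, using only the figures
# quoted above (illustrative arithmetic, not an official OLCF number).
tiers_pb = {
    "NVMe performance tier": 11.5,
    "HDD capacity tier": 679,
    "NVMe metadata tier": 10,
}

total_pb = sum(tiers_pb.values())
print(f"Total across tiers: {total_pb} PB")  # 700.5 PB
```

Roughly 700 PB of combined capacity, with the hard-disk capacity tier contributing the overwhelming majority.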
This represents the new high-water mark for external high-performance file systems, both for the largest storage capacity and for the fastest performance.
3. The largest all-flash parallel file system
When it comes to all-flash parallel file systems the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory is setting the bar. Its next-generation supercomputer, Perlmutter, includes an all-flash file system with 35 PB usable capacity built from Cray ClusterStor E1000 storage systems. This all-flash file system will provide very high-bandwidth storage to the HPE Cray supercomputer that in phase 1 features compute nodes with four NVIDIA GPUs per AMD CPU.
4. The largest twin parallel file systems
The Argonne Leadership Computing Facility (ALCF) has deployed a unique twin design: two identical file systems, each with 100 PB of usable storage in 10 Cray ClusterStor E1000 storage system racks. Interested parties can find all the details and use cases for those twins—named Grand and Eagle—in the ALCF talk “Lustre at Scale” at the Lustre User Group 2021.
5. The fastest restore speed of parallel file system data
But new records are also being set outside the classic supercomputing leadership sites, where the confluence of classic simulation and AI is changing advanced computing as we know it. A good example is the collaboration between Zenseact and HPE to develop next-generation autonomous driving cars—on an end-to-end infrastructure delivered with the HPE GreenLake edge-to-cloud platform.
The solution at Zenseact requires the ability to protect and restore data at very high (record) speeds in order to hit the business-critical simulation window should a restore of the data become necessary. HPE Data Management Framework 7 (DMF) running on HPE ProLiant DL rack servers was able to meet the requirement of restoring petabytes of data at about 200 GB/s.
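To put that restore rate in perspective, a short back-of-the-envelope calculation helps. The 200 GB/s figure comes from the text above; the 1 PB dataset size is a hypothetical example for illustration, not Zenseact's actual data volume.

```python
# Back-of-the-envelope restore time at the DMF 7 rate quoted above.
# The dataset size is a hypothetical example (decimal units: 1 PB = 1,000,000 GB).
restore_rate_gb_s = 200              # GB/s, per the figure in the text
dataset_pb = 1                       # hypothetical 1 PB restore
dataset_gb = dataset_pb * 1_000_000

restore_seconds = dataset_gb / restore_rate_gb_s
restore_hours = restore_seconds / 3600
print(f"~{restore_hours:.1f} hours per petabyte")  # ~1.4 hours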
6. The most secret parallel file system
HPE recently announced that it has been awarded a $2B contract with the National Security Agency (NSA) to deliver HPE’s HPC technology as a service through the HPE GreenLake edge-to-cloud platform. The new collaboration will enable the NSA to address its rapidly growing AI and data needs more efficiently, creating insights, forecasting, and analysis with optimal performance. By using HPE’s HPC solutions through the HPE GreenLake platform, which provides fully managed, secure cloud services on-premises, the NSA will benefit from an agile, flexible, and secure platform that meets its growing data management requirements.
Due to the nature of the client, no further details can be shared about its parallel storage—but it is safe to assume it is very large and very fast. The public most likely will never know…
7. The most "circular" parallel file system
A great solution in the energy industry that we actually can talk about is ENI’s latest supercomputer, which doubles storage capacity and increases sustainability to accelerate the discovery of new energy sources with advanced mod/sim capabilities, all while reducing operational costs and energy consumption.
The new supercomputer, delivered as a service through the HPE GreenLake edge-to-cloud platform, includes Cray ClusterStor E1000 storage systems and HPE Data Management Framework 7 to support complex, image-intensive workloads in modeling and simulation.
It’s housed in ENI’s famous Green Data Center in Ferrera Erbognone, in the province of Pavia, Italy, and it improves energy usage and reduces electronic waste through HPE Asset Upcycling Services. This is part of the circular economy initiative from HPE Financial Services, which extends asset longevity by recycling equipment (such as the Cray ClusterStor L300 storage systems from the existing HPC4 system) and replacing it with newer solutions.
8. The lowest carbon footprint parallel file system
Another example of “green” supercomputing is one of the pan-European pre-exascale supercomputers, LUMI, currently being implemented at CSC’s data center in Kajaani, Finland. The supercomputer will be hosted by the LUMI (Large Unified Modern Infrastructure) consortium, whose member countries are Finland, Belgium, Czech Republic, Denmark, Estonia, Iceland, Norway, Poland, Sweden, and Switzerland.
The LUMI storage system consists of a total of 117 PB usable capacity:
- 7 PB all flash Cray ClusterStor E1000 storage systems
- 80 PB HDD-based Cray ClusterStor E1000 storage systems tiered with HPE DMF 7 to
- 30 PB of CEPH-based object storage running on HPE Apollo 4200 Gen10 systems
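The three LUMI tiers listed above can be checked against the quoted 117 PB total with trivial arithmetic—again purely illustrative, using only the capacities stated in this post:

```python
# Verify that the three LUMI storage tiers sum to the quoted 117 PB total
# (figures taken from the text above; illustrative arithmetic only).
lumi_tiers_pb = {
    "all-flash ClusterStor E1000": 7,
    "HDD-based ClusterStor E1000": 80,
    "Ceph object storage (Apollo 4200 Gen10)": 30,
}

total_pb = sum(lumi_tiers_pb.values())
print(f"LUMI total usable capacity: {total_pb} PB")  # 117 PB
```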
The LUMI supercomputer will use 100% hydroelectric energy, and the heat it generates will be captured and used to heat homes and commercial premises in the area, making LUMI one of the most environmentally efficient supercomputers in the world.
9. The longest-serving large scale parallel file system
The “longest-serving large-scale parallel file system storage award” goes to the ClusterStor storage system of the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois Urbana-Champaign. So far, it has served data for more than 40 billion core-hours to thousands of scientists and engineers. Large-scale production began in March 2013—when it was the world’s fastest parallel file system, with more than 1 TB/s aggregate bandwidth from more than 25 PB of usable storage capacity.
The Blue Waters supercomputer and its Cray ClusterStor file system recently celebrated its 8th birthday!
10. The smallest parallel file system
But what about the AI users that do not want to—or cannot—invest in large-scale clusters or supercomputers?
For them there is the recently announced HPE Parallel File System Storage, which delivers an IBM Spectrum Scale (formerly known as GPFS)-based parallel file system starting with as few as 12 storage drives (HDD or NVMe SSD) in four HPE ProLiant DL325 Gen10 Plus-based storage servers.
While that wins the “smallest parallel file system award,” this generally available HPE storage product scales beyond 20 petabytes of usable capacity and terabyte-per-second speeds today. It delivers very efficient performance, especially when compared with NFS-based scale-out network-attached storage (NAS) like Dell EMC Isilon.
HPE Parallel File System Storage in its entry configuration with just 12 NVMe SSDs delivers about 35 GB/s read throughput, while the high-end Dell EMC Isilon F800 model delivers “just” 15 GB/s from 60 SSDs (see its datasheet). That is 57% less data throughput from 400% more SSDs.
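Another way to frame this comparison is throughput per SSD, computed from the same datasheet figures quoted above (illustrative arithmetic only; real-world throughput depends on workload and configuration):

```python
# Per-SSD read throughput, using only the figures quoted in the text.
hpe_pfss = {"read_gb_s": 35, "ssds": 12}       # entry configuration
isilon_f800 = {"read_gb_s": 15, "ssds": 60}    # high-end model, per datasheet

per_ssd_hpe = hpe_pfss["read_gb_s"] / hpe_pfss["ssds"]          # ~2.92 GB/s per SSD
per_ssd_isilon = isilon_f800["read_gb_s"] / isilon_f800["ssds"]  # 0.25 GB/s per SSD

print(f"HPE: {per_ssd_hpe:.2f} GB/s per SSD")
print(f"Isilon F800: {per_ssd_isilon:.2f} GB/s per SSD")
print(f"Ratio: ~{per_ssd_hpe / per_ssd_isilon:.1f}x")  # ~11.7x
```

On a per-drive basis, the parallel file system extracts roughly an order of magnitude more read throughput from each SSD than the NFS-based scale-out NAS.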
Many organizations used enterprise scale-out NAS like Dell EMC Isilon or NetApp AFF to feed their HPC clusters with data while those clusters were small. Now, as the clusters and their data expand rapidly with the growth of AI and mod/sim workloads, NFS-based NAS storage is breaking for many organizations—either economically ($ per terabyte) or architecturally (performance and scalability).
This is most likely why Hyperion Research found in its 2020 special study that the use of NFS-based storage is shrinking, while more and more organizations are going parallel to cope with the growth of AI and mod/sim workloads.
Source: Hyperion Research, Special Study: Shifts Are Occurring in the File System Landscape, June 2020
If you want to understand why these shifts are happening, please read this business white paper.
HPE has the right HPC storage for AI and mod/sim workloads for organizations of all sizes
We are the right partner to help you go parallel for storage—whether you want to start with 12 drives of HPE Parallel File System Storage, or you are looking for a 50,000+ drive parallel storage system like ORNL’s Orion.
- Compute options: Supercomputers like the HPE Cray EX, ultra-dense CPU/GPU systems like the HPE Apollo 2000 Gen10 Plus and HPE Apollo 6500 Gen10 Plus, or standard-density rack servers like HPE ProLiant DL
- Interconnect options: High-speed, low-latency networks like HPE Slingshot, InfiniBand HDR, or 100/200 Gigabit Ethernet
- Parallel file system options: The leading parallel file system in research (Lustre) embedded in Cray ClusterStor E1000 storage systems, or the leading parallel file system in the enterprise (IBM Spectrum Scale) embedded in HPE Parallel File System Storage
- Parallel data protection solutions: HDD- or tape-based backup/archive/restore for parallel data, on-premises or off-premises (co-location or public cloud), with HPE Data Management Framework 7
- Consumption options: Purchasing, financing with HPE Financial Services, or fully managed as-a-service models with the HPE GreenLake edge-to-cloud platform
Put the storage beast on an effective diet. Go parallel with HPE. Contact your HPE representative today!
Do we have your attention? Check out these resources for more information!
Hear about the latest HPC and AI advancements: Visit the Discover More Network
View the infographic: Accelerate your innovation with HPC—with HPE as your end-to-end partner
Download the business paper: Spend less on HPC/AI storage and more on CPU/GPU compute
Visit the webpage: HPC & AI Storage
Related articles:
- When should old data be deleted? via HPE's Enterprise.nxt
- Managing storage: It's all about the data via HPE's Enterprise.nxt
- Stay current on top tech trends and expert advice. Sign up for the weekly newsletter.
Uli Plechschmidt
Hewlett Packard Enterprise
twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/us/en/solutions/hpc
Uli leads the product marketing function for high performance computing (HPC) storage. He joined HPE in January 2020 as part of the Cray acquisition. Prior to Cray, Uli held leadership roles in marketing, sales enablement, and sales at Seagate, Brocade Communications, and IBM.