HPC fast file storage: What’s happened so far in 2021?
It doesn't matter how big your organization is or what mission or business objectives you pursue. If you're using modeling and simulation, artificial intelligence, or high performance data analytics, HPE has parallel storage options for you. You can start wherever you want, then go to wherever you need—without limits.
Because it's not your grandfather's parallel storage anymore
Hyperion Research recently published a special report on high performance computing (HPC) storage. The report provides HPC storage highlights plus analysis of significant activity that occurred within the global HPC community in the first six months of this year. Here are three noteworthy highlights from the report.
1. The undisputed pre-exascale and exascale leadership of HPE
The report calls out 10 noteworthy global HPC storage projects and announcements, eight of which are planned for implementation by 2023. For reference, here's the list of those eight projects:
- “LUMI” from EuroHPC JU in Finland
- “ASPIRE 1” successor at Singapore NSCC
- “ALPS” at CSCS in 2023
- “Perlmutter” at NERSC
- “El Capitan” at LLNL
- “Frontier” at OLCF
- New climate supercomputer at the UK Met Office
- “Fugaku” at RIKEN
And here's the exciting news: Seven of those eight projects have one thing in common: They all use HPC storage from HPE.
All of the first seven are using or will use our flagship product, the Cray ClusterStor E1000 Storage System. In addition, some of them will integrate the newest member of the family, HPE Parallel File System Storage, as well as our proven data management framework for parallel storage, HPE DMF V7.
After the publication of the Hyperion report, Argonne National Laboratory (ANL) announced a new supercomputer, named Polaris, that will use the Grand and Eagle file systems. Both 100 petabyte file systems are built on Cray ClusterStor E1000 Storage Systems.
Considering that Cray ClusterStor E1000 Storage Systems were just announced in July 2020, the rapid and global adoption is truly noteworthy. Take a look at the graphic below for an at-a-glance overview of the parallel storage portfolio of HPE for clustered CPU/GPU nodes that are running modeling and simulation (Mod/Sim), artificial intelligence (machine learning and deep learning), or high performance data analytics (HPDA) workloads.
2. HPE GreenLake Cloud Services for HPC
In the business model section of the report, Hyperion Research calls out that “consumption and utility models are emerging as evidenced by HPE’s support of HPC workloads on HPE GreenLake.”
We describe the HPE GreenLake edge-to-cloud platform as the cloud that comes to you, wherever your apps and data live. The HPE GreenLake edge-to-cloud platform is the market leader, accelerating outcomes in four ways:
- Gain self-service agility—Easily deploy resources, view your costs, and forecast capacity all from one intuitive platform, HPE GreenLake Central
- Flex with pay-per-use—Avoid heavy upfront costs and expensive overprovisioning and only pay for what you use
- Scale up and down—Reduce your worry and your costs with scalable capacity that’s ready when you need it
- Managed for you—Offload the burden of operating IT and free up resources with fully-managed cloud services
In June 2021, HPE announced the general availability of HPE GreenLake Cloud Services for HPC. HPE GreenLake for High Performance Computing is an on-premises, end-to-end solution that makes it easier for a much broader range of customers of all sizes to leverage the power of supercomputers in a pay-per-use model for deploying HPC and AI applications, workloads, and models.
With that, we now provide users with the broadest range of offerings—giving you more choice depending on your needs and preferences, as illustrated in the graphic below.
HPE recently announced that we have been awarded a $2B contract (to be leveraged over a 10-year period) with the National Security Agency (NSA) to deliver HPE's HPC technology as-a-service through the HPE GreenLake platform. To date, this is the largest contract of its sort, indicating that this innovative new offering is gaining traction fast.
3. IBM Spectrum Scale: Embedded in an HPE storage product
In the special report, Hyperion Research also notes: “Another interesting file system development was HPE’s support of IBM’s Spectrum Scale file system with HPE HPC storage hardware. This move acknowledges and addresses the trend of enterprise IT datacenters adopting HPC-enabled AI techniques to deploy and manage their increasing AI workloads.”
Hyperion makes a good point when highlighting enterprise adoption of HPC-enabled AI techniques. Since the announcement of HPE Parallel File System Storage, in about two-thirds of the projects it has been attached as fast file storage to clusters of GPU-accelerated HPE Apollo 6500 Gen10 Plus Systems for training AI models.
For an even more compelling solution, consider our recent acquisition of Determined AI, a San Francisco-based startup that delivers a powerful, robust software stack to train AI models faster, at any scale, using an open source machine learning platform.
By making IBM Spectrum Scale available in an HPE storage system, we've rounded out our technology portfolio—enabling organizations of all sizes to consume HPC and AI technology in the most effective way based on their needs and preferences. The graphic below shows the technology choices available in the HPE HPC and AI portfolio.
Here’s a simple, visual way to think about the positioning of our Lustre-based Cray ClusterStor E1000 Storage System and our IBM-Spectrum Scale-based HPE Parallel File System Storage.
Just think what’s still ahead
HPE can solve HPC and AI storage challenges for organizations of all sizes—from building a parallel file system with as few as 12 storage drives with HPE Parallel File System Storage, to the more than 50,000 drives in the Cray ClusterStor E1000-based Orion storage system at the Oak Ridge Leadership Computing Facility (OLCF).
Are you currently facing one or more of these HPC and AI storage challenges?
- Job pipeline congestion due to input/output (IO) bottlenecks leading to missed deadlines/top talent attrition
- High operational cost of multiple “storage islands” due to scalability limitations of current file storage
- Exploding costs for fast file storage at the expense of GPU and/or CPU compute nodes or of other critical business initiatives
Don’t wait to find the right solution. Contact your HPE representative today.
Here are some additional resources to help guide your decisions.
Business paper: Spend less on HPC/AI storage and more on CPU/GPU compute
Hewlett Packard Enterprise
Uli leads the product marketing function for high performance computing (HPC) storage. He joined HPE in January 2020 as part of the Cray acquisition. Prior to Cray, Uli held leadership roles in marketing, sales enablement, and sales at Seagate, Brocade Communications, and IBM.