A new HPC era needs HPC storage
Introducing the new Cray ClusterStor E1000 storage system, purpose-engineered for the new HPC era, in which traditional simulation converges with analytics and AI workloads and runs in mission-critical or business-critical workflows on the same machine.
This is not your grandparents' HPC storage
High-performance computing is transforming in response to the ongoing trend of massive data growth, as organizations of all types seek to deliver maximum insight from their data to drive innovation. Achieving that outcome requires new approaches that combine modeling and simulation with analytics and AI workloads. These converged workloads will require new developer and operator workflows that legacy HPC infrastructure cannot easily support.
These trends affect every industry and field of inquiry, and they are playing out right now. For example, a recent study by the independent analyst firm Intersect360 found that a majority (61%) of HPC users are already running machine learning programs[1], and an additional 10% of respondents plan to do so by the end of 2020. This marks an inflection point for a new era in computing, commonly referred to as the Exascale Era.
As with previous inflection points, such as the rise of virtualization and the adoption of cloud, big data, and AI, legacy hardware and software infrastructure has had to evolve radically to keep up with new requirements. This time is no different. The new converged HPC, analytics, and AI workflows will be fueled by new dataflows that deliver the right data, at the right time, with the right economics. Storage technology that worked for petascale-era workloads cannot power the Exascale Era's converged workflows, because the input/output (I/O) patterns of the applications and the characteristics of currently deployed storage technologies could not be more different.
Traditional modeling and simulation typically produces I/O patterns that serially access large datasets, whereas AI/machine learning can mix batch and random I/O access that ranges in size from very small (e.g., a single inference) to very large (e.g., ML model training); the short sketch after the list below illustrates the contrast. As a result, sticking with current HPC storage infrastructure will leave users unable to keep up in terms of both performance and budget. In this new era, HPC users are caught between a rock and a hard place that only a new type of HPC storage can address.
- With traditional HPC storage systems, users will experience I/O bottlenecks in their AI/machine learning workloads, because traditional HPC storage is not well suited to serving the large number of files of all sizes that machine learning must read during the training phase. That can lead to job pipeline congestion, missed deadlines, unsatisfied data scientists, and constant escalations.
- Alternatively, if users try to scale traditional enterprise AI storage to the potentially multi-petabyte requirements of converged workloads, they will most likely run into scalability issues and exploding storage costs.
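To make the contrast concrete, here is a minimal Python sketch of the two access patterns described above: a simulation-style sequential scan of one large file versus an ML-training-style stream of small reads at random offsets across many files. This is an illustration only, not vendor code; the file paths, chunk sizes, and read counts are hypothetical.

```python
# Minimal sketch of the two I/O patterns discussed above.
# All paths and sizes are hypothetical placeholders.
import os
import random

def sequential_scan(path, chunk_size=8 * 1024 * 1024):
    """Simulation-style I/O: stream one large dataset front to back
    in big, ordered reads."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
    return total

def random_sample_reads(paths, n_reads=1000, read_size=128 * 1024):
    """ML-training-style I/O: many small reads at random offsets
    scattered across a large collection of files."""
    total = 0
    for _ in range(n_reads):
        path = random.choice(paths)          # pick a random file
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            f.seek(random.randrange(max(1, size - read_size + 1)))
            total += len(f.read(read_size))
    return total
```

A file system tuned for the first pattern can struggle badly with the second, which is exactly the bottleneck described in the first bullet above.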
The Cray ClusterStor E1000: new storage for a new era
Today we are launching the Cray ClusterStor E1000 storage system as an HPE product. It was purpose-engineered for this new era to be scalable and cost-effective while delivering the performance needed to power a new kind of dataflow. Essentially, it's a system that brings together the best of traditional HPC storage systems with the best of modern all-flash enterprise file storage systems. Combined with new services and flexible consumption models from HPE, the Cray ClusterStor E1000 redefines what is possible for HPC storage users.
Here are just a few examples of what is possible with this combination:
- Removing I/O bottlenecks with unprecedented performance: up to 80 gigabytes per second of throughput in just two rack units
- Achieving a balance of scale, performance, and efficiency by providing up to 3.3 gigabytes per second of file system performance from just one NVMe Gen 4 SSD (a rough sizing sketch follows this list)
- Delivering broad interoperability with any HPC cluster or supercomputer from any vendor that supports modern, high-speed interconnects such as EDR/HDR InfiniBand, 100/200 Gigabit Ethernet, or 200 Gbps Cray Slingshot
- Unifying the support for the full HPC infrastructure stack with HPE Pointnext Services and creating clear accountability for the providers of both HPC compute and storage
- Providing a future path to an "as-a-service" model for the full HPC infrastructure stack with HPE GreenLake, an option that combines the agility and economics of public cloud consumption with the security and performance of on-premises HPC
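As a rough cross-check of the two throughput figures above, the arithmetic below estimates how many of those NVMe SSDs it would take to reach the 2U enclosure figure. This is our own back-of-the-envelope simplification, not a published sizing rule: it assumes throughput aggregates roughly linearly and ignores controller, network, and file system overheads.

```python
# Back-of-the-envelope sizing from the bullets above (an assumption:
# linear scaling, no controller/network/file system overheads).
per_ssd_gbps = 3.3   # GB/s from one NVMe Gen 4 SSD (per the bullet above)
target_gbps = 80.0   # GB/s per 2U enclosure (per the bullet above)

ssds_needed = target_gbps / per_ssd_gbps
print(f"~{ssds_needed:.1f} SSDs' worth of raw throughput to hit {target_gbps} GB/s")
# prints: ~24.2 SSDs' worth of raw throughput to hit 80.0 GB/s
```

In other words, under this simplified model, on the order of two dozen such drives of aggregate file system throughput corresponds to the 80 GB/s enclosure figure.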
Check out our HPC storage solutions, including the proven HPE Data Management Framework (DMF), on our new HPC Storage homepage and see what new HPC storage from HPE can do for YOU in the Exascale Era.
To find out more about how the Cray ClusterStor E1000 storage system delivers its new capabilities for your organization, please read:
- Business white paper: The New HPC Era Needs New HPC Storage
- Technical white paper: Cray ClusterStor E1000 Storage System
[1] Intersect360 HPC User Budget Map Survey: Machine Learning's Impact on HPC Environments, October 2019
Brandon Draeger
Hewlett Packard Enterprise
twitter.com/brandondraeger
linkedin.com/in/brandondraeger//
hpe.com/servers