HPE advanced memory technology speeds insight in large-scale AI projects
In artificial intelligence projects that use vast and complex datasets, the large, shared memory of HPE Superdome Flex speeds insight by eliminating data re-assembly. Learn how AI leaders are already leveraging Superdome Flex to facilitate more robust AI projects, workloads and data.
Although we often talk about artificial intelligence (AI) as a single discipline, those two letters really stand for a broad range of projects, workloads, and data.
Enterprises across the world are innovating with AI, but each is striving for different goals and using different technologies. According to Gartner, for example, most organizations will move away from big data towards small and wide data. Wide data combines "a variety of small and large, unstructured, and structured data sources" to find associated links and facilitate more robust AI. And as AI evolves and diversifies, we are likely to need more diverse artificial intelligence infrastructures to meet specialized needs.
So, how can you determine the best platform to run your AI project on? The simple answer: it all depends on your goals and the type of data you're dealing with.
In a variety of AI use cases, specifically when datasets are large and complex, with relationships between data elements that make partitioning difficult, it is often better to keep the dataset in one piece. This speeds up insights, as you can avoid data transport and data re-assembly after analysis.
Benefits of large, shared memory
AI-focused systems often feature high-speed interconnects, which can move large amounts of data very quickly between processors. But in scale-out AI clusters, moving data between nodes can lead to performance bottlenecks. There is latency when data is moved between servers, and it takes time to re-assemble large and complex datasets that have been broken up and distributed across the cluster.
In these use cases, the modular, scale-up architecture of the HPE Superdome Flex family improves time to insight and delivers deeper analysis than scale-out solutions. HPE Superdome Flex scales up from 4 to 32 sockets and from less than 1 TB to 48 TB of shared memory, to meet the largest in-memory AI needs in your data center. HPE Superdome Flex 280 scales up from 2 to 8 sockets and from 64 GB to 24 TB of shared memory, using DRAM alone or in combination with persistent memory. With these platforms you can solve key challenges and accelerate insight by removing the need to partition datasets and re-assemble them after analysis.
Leading enterprises and research institutes are already leveraging HPE Superdome Flex in this way in a number of large-scale AI solutions.
Use case: 16x faster cybersecurity threat discovery
AI is an important tool for cybersecurity teams working to identify unknown threats. It can help to correlate events, identify patterns, and detect anomalous behavior to improve security. However, scanning for cybersecurity threats involves traversing large network activity logs, which is both challenging and time-consuming.
The combination of Reservoir Labs' ENSIGN multi-domain analytics software and HPE Superdome Flex has delivered excellent results, detecting unknown threats more quickly than traditional scale-out approaches. ENSIGN works by decomposing large datasets (e.g. network logs) into recognizable patterns of behavior through tensor decomposition, then surfacing actionable insights from those patterns. No training or upfront specification is needed. The large in-memory compute capabilities of HPE Superdome Flex enable ENSIGN to perform these data-intensive tasks holistically, at unparalleled scale, with single-system simplicity.
In tests, ENSIGN on Superdome Flex took just 25 minutes to traverse a 32-hour activity log, compared to 8 hours on standard infrastructure running Splunk cybersecurity software. That's more than 16 times faster!
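ENSIGN itself is proprietary, but the underlying idea, CP tensor decomposition, can be sketched in plain NumPy. The snippet below is a generic, illustrative CP-ALS routine, not Reservoir Labs' implementation: it factors a 3-way tensor (think source IP x destination IP x time-bucket event counts from a network log) into a small number of rank-one "patterns of behavior". All function names and the toy data are our own.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (rows = that mode's index)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(iters):
        for m in range(3):
            others = [factors[i] for i in range(3) if i != m]
            # Khatri-Rao in ascending mode order matches numpy's row-major unfolding
            kr = khatri_rao(others[0], others[1])
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
    return factors

# Toy example: a synthetic rank-2 "activity" tensor with known structure
rng = np.random.default_rng(1)
A, B, C = (rng.random((d, 2)) for d in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A, B, C)          # ground-truth low-rank tensor
Ah, Bh, Ch = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Each recovered rank-one component groups indices that tend to co-occur; in a security setting, activity that no component explains well is a candidate anomaly. Holding the whole tensor in one shared memory pool is what lets this run without partitioning the log across nodes.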
Use case: high-performance storage for AI accelerators
Leading research institutes are also choosing HPE Superdome Flex as the basis of new supercomputing systems designed to accelerate AI. Both the University of Edinburgh and Pittsburgh Supercomputing Center (PSC) are combining Superdome Flex with Cerebras CS-1, an AI accelerator based on the largest processor in the industry.
EPCC, the supercomputing center at the University of Edinburgh, is using Superdome Flex as a high performance front-end storage and pre-processing solution for the Cerebras CS-1 AI supercomputer. The role of Superdome Flex is to enable:
- Application-specific pre- and post-processing of data for AI model training and inference, allowing the Cerebras CS-1s to operate at full bandwidth
- Use of large datasets in memory
The center needed to invest in technology for large-scale AI challenges, says EPCC director Mark Parsons, and working with HPE has enabled it to โexplore new and emerging technologiesโ such as Cerebras.
Use case: tackling a new class of AI problems
Pittsburgh Supercomputing Centerโs new Neocortex system also utilizes the large, shared memory of HPE Superdome Flex to provide high-performance storage for Cerebras CS-1 systems. PSC expects Neocortex to take on a new class of AI problems, which traditional GPUs are unable to handle.
"With shared memory, you don't have to break your problem across many nodes," said Nick Nystrom, Chief Scientist, Pittsburgh Supercomputing Center. "You don't have to write MPI, and you don't have to distribute your data structures. It's just all there at high speed."
The HPE Superdome Flex is configured with 32 2nd Gen Intel® Xeon® Scalable processors, 24 TB of shared memory, and 205 TB of flash storage. It connects to each CS-1 with 12 x 100 gigabit Ethernet links, providing enough bandwidth to transfer 37 HD movies every second.
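As a back-of-the-envelope check on that bandwidth figure (the ~4 GB size of an HD movie is our assumption, not a number from HPE or PSC):

```python
# Aggregate line rate of the Neocortex front end, per the figures above
links = 12                              # 100 GbE links per CS-1
gbit_per_link = 100
total_gbit_s = links * gbit_per_link    # 1,200 Gbit/s aggregate
total_gbyte_s = total_gbit_s / 8        # 150 GB/s of raw bandwidth
hd_movie_gb = 4                         # assumed size of one HD movie (our estimate)
movies_per_s = total_gbyte_s / hd_movie_gb
print(f"{total_gbyte_s:.0f} GB/s, roughly {movies_per_s:.1f} HD movies per second")
```

At ~4 GB per movie, 150 GB/s works out to roughly 37 movies every second, consistent with the claim.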
"We are immensely proud to be a part of the game-changing introduction of Neocortex, which leverages the massive computational power of the CS-1 to advance AI research," said Andrew Feldman, CEO and co-founder of Cerebras. "We invented CS-1 to be the industry's most powerful AI computer, and when coupled with HPE's advanced memory server, it can truly accelerate and improve the future of science research."
Learn more
As we can see from these three use cases, no two AI workloads are quite the same. And when you are working with large, complex datasets, it is often better to store them in one large memory pool, especially if you want to reach insight faster.
For more about how the massive shared memory of the HPE Superdome Flex family can help you tackle AI problems holistically, click below.
Learn more at hpe.com/superdome
Meet HPE Blogger Diana Cortes.
Diana has spent the past 23 years working with the technologies that power the world's most demanding IT environments and is interested in how solutions based on those technologies impact the business. A native of Colombia, Diana holds an MBA from Georgetown University and has held a variety of regional and global roles with HPE in the US, the UK, and Sweden. She is based in Stockholm, Sweden.
Connect with Diana on LinkedIn!