Servers & Systems: The Right Compute
ComputeExperts

HPE advanced memory technology speeds insight in large-scale AI projects

In artificial intelligence projects that use vast and complex datasets, the large, shared memory of HPE Superdome Flex speeds insight by eliminating data re-assembly. Learn how AI leaders are already leveraging Superdome Flex to facilitate more robust AI projects, workloads and data.

Although we often talk about artificial intelligence (AI) as a single discipline, those two letters really stand for a broad range of projects, workloads, and data.

Enterprises across the world are innovating with AI, but each is striving for different goals and using different technologies. For example, according to Gartner, most organizations will shift their focus from big data towards small and wide data. Wide data combines "a variety of small and large, unstructured, and structured data sources" to find associated links and facilitate more robust AI. And as AI evolves and diversifies, we are likely to need more diverse artificial intelligence infrastructures to meet specialized needs.

So, how can you determine the best platform to run your AI project on? The simple answer: it all depends on your goals and the type of data you're dealing with.

In a variety of AI use cases โ€“ specifically when datasets are large and complex, with relationships between data elements that make partitioning difficult โ€“ it is often better to keep the dataset in one piece. This speeds up insights, as you can avoid data transport and data re-assembly after analysis.

Benefits of large, shared memory

AI-focused systems often feature high-speed interconnects, which can move large amounts of data very quickly between processors. But in scale-out AI clusters, moving data between nodes can lead to performance bottlenecks. There is latency when data is moved between servers, and it takes time to re-assemble large and complex datasets that have been broken up and distributed across the cluster.


In these use cases, the modular, scale-up architecture of the HPE Superdome Flex family improves time to insight and delivers deeper insights than scale-out solutions. HPE Superdome Flex scales up from 4 to 32 sockets and from less than 1 TB to 48 TB of shared memory, meeting the largest in-memory AI needs in your data center. HPE Superdome Flex 280 scales up from 2 to 8 sockets and from 64 GB to 24 TB of shared memory, using DRAM alone or in combination with persistent memory. With these platforms you can accelerate insight by removing the need to partition datasets and re-assemble them after analysis.

Leading enterprises and research institutes are already leveraging HPE Superdome Flex in this way in a number of large-scale AI solutions.

Use case: 16x faster cybersecurity threat discovery

AI is an important tool for cybersecurity teams working to identify unknown threats. It can help to correlate events, identify patterns, and detect anomalous behavior to improve security. However, scanning for cybersecurity threats involves traversing large network activity logs, which is both challenging and time-consuming.

The combination of Reservoir Labs' ENSIGN multi-domain analytics software and HPE Superdome Flex has delivered excellent results, detecting unknown threats more quickly than traditional scale-out approaches. ENSIGN works by decomposing large datasets (e.g., network logs) into recognizable patterns of behavior, then discovering actionable insights through tensor decomposition. No training or upfront specification is needed. The large in-memory compute capabilities of HPE Superdome Flex enable ENSIGN to perform these data-intensive tasks holistically, at unparalleled scale, with single-system simplicity.
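
ENSIGN's own algorithms are proprietary, so the following is only a generic sketch of the tensor-decomposition idea: build a small (source, destination, hour) event-count tensor from synthetic log data, then extract its leading rank-1 component with a higher-order power iteration. All sizes, event counts, and the dominant "pattern" here are invented for illustration.

```python
import numpy as np

# Toy (source, destination, hour) event-count tensor from synthetic logs.
rng = np.random.default_rng(0)
I, J, K = 6, 5, 4
T = np.zeros((I, J, K))
# A dominant behavior pattern: source 1 talks to destination 2 in hours 0-1.
for _ in range(200):
    T[1, 2, rng.choice([0, 1])] += 1
# Sparse background noise spread across the whole tensor.
for _ in range(50):
    T[rng.integers(I), rng.integers(J), rng.integers(K)] += 1

# Higher-order power iteration for the leading rank-1 CP component.
a, b, c = np.ones(I), np.ones(J), np.ones(K)
for _ in range(30):
    a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
    b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
    c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)

# The factor vectors peak on the dominant pattern: source 1 -> destination 2.
print(int(np.argmax(a)), int(np.argmax(b)))
```

In a real deployment the tensor is vastly larger and sparse, which is exactly why holding it whole in shared memory, rather than partitioning it across nodes, pays off.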

In tests, ENSIGN on Superdome Flex took just 25 minutes to traverse a 32-hour activity log, compared to 8 hours on standard infrastructure running Splunk cybersecurity software. That's more than 16 times faster!

Use case: high-performance storage for AI accelerators

Leading research institutes are also choosing HPE Superdome Flex as the basis of new supercomputing systems designed to accelerate AI. Both the University of Edinburgh and Pittsburgh Supercomputing Center (PSC) are combining Superdome Flex with Cerebras CS-1, an AI accelerator based on the largest processor in the industry.

EPCC, the supercomputing center at the University of Edinburgh, is using Superdome Flex as a high-performance front-end storage and pre-processing solution for the Cerebras CS-1 AI supercomputer. The role of Superdome Flex is to enable:

  • Application-specific pre- and post-processing of data for AI model training and inference, allowing the Cerebras CS-1s to operate at full bandwidth
  • Use of large datasets in memory

The center needed to invest in technology for large-scale AI challenges, says EPCC director Mark Parsons, and working with HPE has enabled it to โ€œexplore new and emerging technologiesโ€ such as Cerebras.

Use case: tackling a new class of AI problems

Pittsburgh Supercomputing Centerโ€™s new Neocortex system also utilizes the large, shared memory of HPE Superdome Flex to provide high-performance storage for Cerebras CS-1 systems. PSC expects Neocortex to take on a new class of AI problems, which traditional GPUs are unable to handle.

"With shared memory, you don't have to break your problem across many nodes," said Nick Nystrom, Chief Scientist, Pittsburgh Supercomputing Center. "You don't have to write MPI, and you don't have to distribute your data structures. It's just all there at high speed."

The HPE Superdome Flex is configured with 32 2nd Gen Intel® Xeon® Scalable processors, 24 TB of shared memory, and 205 TB of flash storage. It connects to each CS-1 with 12 x 100 gigabit Ethernet links, providing enough bandwidth to transfer 37 HD movies every second.
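
A quick back-of-the-envelope check of that movie figure, assuming roughly 4 GB per HD movie (our assumption; the source does not state a file size):

```python
# Aggregate bandwidth of the Superdome Flex -> CS-1 interconnect.
links = 12
gbit_per_link = 100                    # each link is 100 gigabit Ethernet
total_gbit_s = links * gbit_per_link   # 1200 Gbit/s aggregate
total_gbyte_s = total_gbit_s / 8       # 150 GB/s

movie_gb = 4                           # assumed size of one HD movie
movies_per_s = total_gbyte_s / movie_gb
print(total_gbyte_s, movies_per_s)     # 150.0 GB/s, 37.5 movies/s
```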

"We are immensely proud to be a part of the game-changing introduction of Neocortex, which leverages the massive computational power of the CS-1 to advance AI research," said Andrew Feldman, CEO and co-founder of Cerebras. "We invented CS-1 to be the industry's most powerful AI computer, and when coupled with HPE's advanced memory server, it can truly accelerate and improve the future of science research."

Learn more

As we can see from these three use cases, no two AI workloads are quite the same. And when you are working with large, complex datasets, it is often better to keep them in one large memory pool – especially if you want to reach insight faster.

For more about how the massive shared memory of the HPE Superdome Flex family can help you tackle AI problems holistically, click below.

Learn more at hpe.com/superdome

Meet HPE Blogger Diana Cortes.

Diana has spent the past 23 years working with the technologies that power the world's most demanding IT environments and is interested in how solutions based on those technologies impact the business. A native of Colombia, Diana holds an MBA from Georgetown University and has held a variety of regional and global roles with HPE in the US, the UK and Sweden. She is based in Stockholm, Sweden.

Connect with Diana on LinkedIn!

Server Experts
Hewlett Packard Enterprise

twitter.com/HPE_HPC
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/servers

About the Author

ComputeExperts

Our team of Hewlett Packard Enterprise server experts helps you to dive deep into relevant infrastructure topics.