
Memory-Driven Computing: The Perfect Answer to Compute’s Perfect Storm

For years now, compute technologies have been sailing into a couple of strong headwinds. First, there’s the well-publicized slowdown of advances in the semiconductor industry relative to the pace set by Moore’s Law, leading many to predict the “death” of that law. Then there’s the exponential growth of data being generated, and particularly the explosion of unstructured data. That will only increase as technologies evolve and enterprises move beyond Systems of Record and Systems of Engagement to Systems of Action, pulling in vast quantities of data to understand and manipulate the real world via the Internet of Things. Put those two headwinds together and you’re looking at a perfect storm on IT’s horizon.

If there’s one thing that the history of IT has taught us, however, it’s that you should never underestimate the ingenuity of design. New compute architectures are about to accelerate past the turbulence with breathtaking innovations that will take us to places we’ve hardly even imagined up to now. What’s more, some of those breakthroughs are already coming to market and delivering very significant benefits right now.

Innovation at the speed of light – literally

Things can move with mind-bending speed in IT, as we all know. Artificial intelligence was pretty exotic just a couple of years ago and a dusty branch of IT academia for many years before that … since then, it has exploded into our lives in ways we’ve all experienced.


HPE has been rethinking the standard compute architecture defined by John von Neumann, in which processing and non-persistent memory sit in very close proximity and the interconnects between processors, memory and storage are copper-based. HPE’s vision and development have been well documented and are referred to as Memory-Driven Computing.

When I consider Memory-Driven Computing, I think of three things. The first is the availability of large quantities of persistent memory, eliminating the bottlenecks and latency incurred by moving data to and from storage. The second is ensuring we can provide optimized compute specific to each workload. The third is ensuring that we can interconnect persistent memory with optimized processing power across chassis, racks and aisles within the data center – for this, copper cannot be used, and instead we need technologies such as photonics.

You’ve no doubt been hearing about the exciting possibilities of replacing electrons and copper with light and optical fibers for years. But what you may not know is that a couple of years ago HPE announced a prototype architecture that replaced the standard copper interconnects between nodes with photonics to transmit more data, faster, while using less power and space.

HPE’s revolutionary new paradigm, Memory-Driven Computing, encompasses innovations in non-volatile memory, fabric, software and security. Memory-Driven Computing is the focus of The Machine, the biggest and most complex research project in the history of the company. As HPE’s CEO Antonio Neri put it in a blog post last year (he was our EVP and GM at the time), The Machine “was born of the realization that we’re asking today’s computers to complete tasks that no one could have imagined 20 years ago, let alone 60 years ago at the dawn of the computer age.” Memory-Driven Computing, Antonio explained, “will redefine how computers – from smartphones to supercomputers – work and what they’re capable of. In today’s computing systems, memory and storage are entirely separate and accessed by the processor when needed. In fact, as much as 90 percent of the work in today’s computing systems is devoted to moving information between tiers of memory and storage. With Memory-Driven Computing, we’ve eliminated those layers.”

One of the early fruits of this effort is HPE’s Persistent Memory product category. In 2016, we released the first non-volatile DIMMs (NVDIMMs) designed around a server platform, combining the speed of DRAM with the resilience of flash storage in our HPE ProLiant DL360 and DL380 Gen9 servers.
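To make the idea concrete, here’s a minimal sketch, not HPE’s actual API, of how an application can treat a file on a DAX-mounted persistent-memory device as ordinary byte-addressable memory using nothing but standard POSIX calls. The /mnt/pmem0 mount point and the region size are hypothetical.

/*
 * A minimal sketch, assuming a hypothetical DAX-mounted NVDIMM namespace at
 * /mnt/pmem0: a file on persistent memory is mapped and then updated in place
 * with ordinary loads and stores.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64UL * 1024 * 1024)      /* 64 MiB region, for illustration */

int main(void)
{
    int fd = open("/mnt/pmem0/journal.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the region; loads and stores now reach persistent memory directly. */
    char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Update a record in place -- no read()/write() syscalls, no block I/O path. */
    strcpy(pmem, "record 42: updated in place");

    /* Flush the update so it survives a power loss. */
    msync(pmem, REGION_SIZE, MS_SYNC);

    munmap(pmem, REGION_SIZE);
    close(fd);
    return 0;
}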

In November last year, we unveiled the HPE Superdome Flex mission-critical x86 server, designed with Memory-Driven Computing principles that offer unique advantages for data-intensive workloads.


HPE is also advancing Memory-Driven Computing in its role as a founding member of the Gen-Z Consortium, an organization of leading computer-industry companies dedicated to developing an open-standard interconnect for high-speed, low-latency, memory-semantic access to data and devices. In February, the Consortium announced that its Gen-Z Core Specification 1.0 is publicly available on its website. The memory-centric, standards-based specification enables silicon providers and IP developers to start building products for Gen-Z technology solutions.

The time for payoff is right now

Today, HPE Memory-Driven Computing is helping DZNE, a research institution established by the German Federal Ministry of Education and Research, in its fight against neurodegenerative diseases such as Alzheimer’s. DZNE and HPE researchers worked together to adapt DZNE’s algorithm for pre-processing massive amounts of genomics data to use Memory-Driven programming techniques. They were able to reduce a 22-minute process to around two and a half minutes and then, with some additional code changes, to just 13 seconds.

Memory-Driven Computing is heading for a major inflection point as we take performance gains like these and make them replicable. We see a pattern evolving as we take customers on that journey. The first step is usually an assessment that looks at the application and identifies any bottlenecks. Then it’s possible to simply port the application as-is to new large-memory systems such as HPE Superdome Flex. Because the in-memory file system is introduced underneath the application, the change is essentially invisible to the app. We’re seeing as much as a 10x improvement in performance without any code changes, just by treating memory as if it were storage.
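As a rough illustration of that “no code change” step, here’s a hedged sketch in C: the application keeps its ordinary buffered file I/O, and only the configured data directory moves from a disk path to an in-memory file system such as a tmpfs mount. The paths and the DATA_DIR variable are hypothetical, not part of any HPE tooling.

/*
 * Sketch of porting without refactoring, assuming the data directory is
 * configurable (here via a hypothetical DATA_DIR environment variable).
 * Pointing it at an in-memory file system requires no change to the I/O
 * code below.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *dir = getenv("DATA_DIR");
    if (dir == NULL)
        dir = "/var/lib/myapp";                 /* disk-backed default */

    char path[512];
    snprintf(path, sizeof path, "%s/results.csv", dir);

    FILE *fp = fopen(path, "w");                /* same call whether the path is on disk or in memory */
    if (fp == NULL) { perror("fopen"); return 1; }

    fprintf(fp, "sample,value\n1,0.73\n");      /* unchanged application logic */
    fclose(fp);
    return 0;
}

Point DATA_DIR at a memory-backed mount and the application’s reads and writes are served from memory instead of the storage stack, with no recompilation or code changes.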

Then you can take it further with some performance profiling within the application. For example, you can look for read and write calls that typically go to storage and reprogram them to go directly to memory. You’ll need to do some refactoring of the application, but the result can be a 100x speed improvement.
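When you do refactor, the change often looks something like the following hypothetical sketch: instead of seeking and reading one record at a time through the storage path, the dataset is mapped once and indexed directly in memory. The file name and record layout are illustrative only.

/*
 * Hypothetical refactor: replace per-record lseek()/read() calls with a single
 * mmap() of the whole dataset, so lookups become plain array indexing at
 * memory speed.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

typedef struct { uint64_t id; double value; } record_t;

int main(void)
{
    int fd = open("records.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Before: lseek(fd, i * sizeof(record_t), SEEK_SET) plus read() for every
     * lookup -- each access pays a system call and the storage path. */

    /* After: map once, then index the records like an in-memory array. */
    const record_t *records = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (records == MAP_FAILED) { perror("mmap"); return 1; }

    size_t count = st.st_size / sizeof(record_t);
    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += records[i].value;

    printf("%zu records, total value %f\n", count, sum);

    munmap((void *)records, st.st_size);
    close(fd);
    return 0;
}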

A steady pilot for turbulent waters

The porting and optimization processes themselves are methodical and standardized; what’s new is the sheer power of the memory-driven architectures they leverage. HPE Pointnext can partner with you to apply the same patterns and deliver impressive results for your organization. We have the right expertise to help you steer confidently through whatever IT squalls the future may bring.

We invite you to partner with us, too, to help create the future of compute. Check out the Machine User Group, our community of developers, technologists and industry experts interested in Memory-Driven Computing. You’ll find developer toolkits, training and workshops, and social forums. You can become part of The Machine User Group here.

About the Author

Sean_Sargent

Chief Architect with 20 years’ industry experience working for leading IT organizations, developing and future-proofing worldwide consulting portfolio strategies and capabilities, with a focus on emerging technologies.