Memory-Driven Computing: The Perfect Answer to Compute’s Perfect Storm
For years now, compute technologies have been sailing into a couple of strong headwinds. First, there’s the well-publicized slowdown of advances in the semiconductor industry, which has led many to predict the “death” of Moore’s Law. Then there’s the exponential growth of the data being generated, and particularly the explosion of unstructured data. That growth will only accelerate as enterprises move beyond Systems of Record and Systems of Engagement to Systems of Action, pulling in vast quantities of data to understand and manipulate the real world via the Internet of Things. Put those two headwinds together and you’re looking at a perfect storm on IT’s horizon.
If there’s one thing that the history of IT has taught us, however, it’s that you should never underestimate the ingenuity of design. New compute architectures are about to accelerate past the turbulence with breathtaking innovations that will take us to places we’ve hardly even imagined up to now. What’s more, some of those breakthroughs are already coming to market and delivering very significant benefits right now.
Innovation at the speed of light – literally
Things can move with mind-bending speed in IT, as we all know. Artificial intelligence was pretty exotic just a couple of years ago, and a dusty branch of IT academia for many years before that; since then, it has exploded into our lives in ways we’ve all experienced.
HPE has been rethinking the standard compute architecture defined by John von Neumann, in which processing and non-persistent memory sit in very close proximity and the interconnects between processors, memory and storage are copper-based. HPE’s vision and development have been well documented and are referred to as Memory-Driven Computing.
When I consider Memory-Driven Computing, I think of three things. The first is the availability of large quantities of persistent memory, removing the bottlenecks and latency incurred by moving data to storage. The second is ensuring we can provide compute that is optimized for each specific workload. The third is ensuring that we can interconnect persistent memory with that optimized processing power across chassis, racks and aisles within the data center; copper can’t do this, so we need technologies such as photonics.
You’ve no doubt been hearing about the exciting possibilities of replacing electrons and copper with light and optical fibers for years. But what you may not know is that a couple of years ago HPE announced a prototype architecture that connected standard nodes with photonic interconnects to transmit more data, faster, while using less power and space.
HPE’s revolutionary new paradigm, Memory-Driven Computing, encompasses innovations in non-volatile memory, fabric, software and security. Memory-Driven Computing is the focus of The Machine, the biggest and most complex research project in the history of the company. As HPE’s CEO Antonio Neri put it in a blog post last year (he was our EVP and GM at the time), The Machine “was born of the realization that we’re asking today’s computers to complete tasks that no one could have imagined 20 years ago, let alone 60 years ago at the dawn of the computer age.” Memory-Driven Computing, Antonio explained, “will redefine how computers – from smartphones to supercomputers – work and what they’re capable of. In today’s computing systems, memory and storage are entirely separate and accessed by the processor when needed. In fact, as much as 90 percent of the work in today’s computing systems is devoted to moving information between tiers of memory and storage. With Memory-Driven Computing, we’ve eliminated those layers.”
One of the early fruits of this effort is HPE’s Persistent Memory product category. In 2016, we released the first non-volatile DIMMs (NVDIMMs) designed around a server platform, combining the speed of DRAM with the resilience of flash storage in our HPE ProLiant DL360 and DL380 Gen9 servers.
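To give a feel for what persistent memory looks like from an application’s point of view, here is a minimal C sketch, assuming a file on a DAX-mounted, NVDIMM-backed filesystem; the /mnt/pmem path, file name and region size are illustrative assumptions, not taken from HPE product documentation. The program maps the file, updates it with ordinary memory stores, and flushes the range so the data survives a power cycle.

```c
/* Minimal sketch: treating a persistent-memory-backed file as ordinary memory.
 * Assumes /mnt/pmem is a DAX-mounted filesystem on an NVDIMM; the path and
 * size below are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (64 * 1024 * 1024)  /* 64 MiB working region */

int main(void)
{
    int fd = open("/mnt/pmem/journal.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the file so loads and stores hit the persistent memory directly. */
    char *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary memory write: no read()/write() system calls, no buffering. */
    strcpy(base, "record 42: committed");

    /* Flush the range so the update is durable across a power loss. */
    if (msync(base, REGION_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(base, REGION_SIZE);
    close(fd);
    return 0;
}
```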
In November last year, we unveiled the HPE Superdome Flex mission-critical x86 server, designed with Memory-Driven Computing principles that offer unique advantages for data-intensive workloads.
HPE is also advancing Memory-Driven Computing in its role as a founding member of the Gen-Z Consortium, an organization made up of leading computer industry companies and dedicated to developing an open-standard interconnect for high-speed, low-latency, memory-semantic access to data and devices. In February, the Consortium announced that its Gen-Z Core Specification 1.0 is publicly available on its website. The memory-centric, standards-based specification enables silicon providers and IP developers to start building products that support Gen-Z technology.
The time for payoff is right now
Today, HPE Memory-Driven Computing is helping DZNE, a research institution established by the German Federal Ministry of Education and Research, in its fight against neurodegenerative diseases such as Alzheimer’s. DZNE and HPE researchers worked together to adapt DZNE’s algorithm for pre-processing massive amounts of genomics data to use Memory-Driven programming techniques. They were able to reduce a 22-minute process to around two and a half minutes and then, with some additional code changes, to just 13 seconds.
Memory-Driven Computing is heading for a major inflection point as we take performance gains like these and make them replicable. We see a pattern evolving as we take customers on that journey. The first step is usually an assessment that looks at the application and identifies any bottlenecks. Then it’s possible to port the application as-is to a new large-memory system such as HPE Superdome Flex. By introducing in-memory file systems, the change is essentially invisible to the application. We’re seeing as much as a 10x improvement in performance without any code changes, just by treating memory as if it were storage.
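As a rough illustration of why that first step can be invisible to the application, consider this hypothetical C program that takes its data directory from an environment variable; pointing the variable at an in-memory filesystem (for example a tmpfs mount such as /dev/shm on Linux) changes where the I/O lands without touching a line of code. The DATA_DIR convention is an assumption for the sketch, not a measurement of any HPE system.

```c
/* Sketch: the same unmodified I/O path, redirected to an in-memory filesystem.
 *   DATA_DIR=/var/lib/app  -> file I/O goes to disk
 *   DATA_DIR=/dev/shm      -> identical code, but the "storage" is RAM (tmpfs)
 * The environment-variable convention is hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *dir = getenv("DATA_DIR");
    if (!dir) dir = "/var/lib/app";          /* default: on-disk location */

    char path[512];
    snprintf(path, sizeof path, "%s/results.csv", dir);

    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen"); return 1; }

    /* The application logic is unchanged; only the mount behind 'dir' differs. */
    for (int i = 0; i < 1000000; i++)
        fprintf(f, "%d,%d\n", i, i * i);

    fclose(f);
    return 0;
}
```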
Then you can take it further with performance profiling within the application. For example, you can look for the read and write calls that typically go to storage and reprogram them to go directly to memory. You’ll need to do some refactoring of the application, but the result can be a 100x speed improvement.
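To make that refactoring step concrete, here is a hedged before-and-after sketch in C; the record layout, file name and helper functions are invented for illustration. The “before” path pays for a pread()/pwrite() pair per record, while the “after” path maps the dataset once so each update becomes a plain memory store.

```c
/* Before/after sketch of the refactoring step described above.
 * 'struct record', 'records.dat' and both helpers are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

struct record { long id; double value; };

/* Before: storage-centric. Every update is a pread/pwrite pair, i.e. two
 * system calls and a copy through the kernel for each record touched. */
static int update_record_io(int fd, long idx, double value)
{
    struct record r;
    off_t off = (off_t)idx * (off_t)sizeof r;
    if (pread(fd, &r, sizeof r, off) != (ssize_t)sizeof r) return -1;
    r.value = value;
    if (pwrite(fd, &r, sizeof r, off) != (ssize_t)sizeof r) return -1;
    return 0;
}

/* After: memory-centric. The dataset is mapped once; an update is then a
 * plain store into memory, with no per-record system calls at all. */
static int update_record_mem(struct record *records, long idx, double value)
{
    records[idx].value = value;
    return 0;
}

int main(void)
{
    int fd = open("records.dat", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* One-time refactoring cost: map the file, then index it like an array. */
    struct record *records = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (records == MAP_FAILED) { perror("mmap"); return 1; }

    update_record_io(fd, 0, 1.0);        /* old path, kept for comparison */
    update_record_mem(records, 1, 2.0);  /* new path: direct memory access */

    munmap(records, st.st_size);
    close(fd);
    return 0;
}
```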
A steady pilot for turbulent waters
The porting and optimization processes are methodical and standardized; what’s new is the sheer power of the memory-driven architectures they leverage. HPE Pointnext can partner with you to apply the same patterns and deliver impressive results for your organization. We have the right expertise to help you steer confidently through whatever IT squalls the future may bring.
We invite you to partner with us, too, to help create the future of compute. Check out the Machine User Group, our community of developers, technologists and industry experts interested in Memory-Driven Computing. You’ll find developer toolkits, training and workshops, and social forums. You can become part of The Machine User Group here.
Featured articles:
- An IT analyst's review of The Machine
- Is AI the magic bullet for your company's data glut?
- What does it take to build a machine learning capacity? Less than you think