Behind the scenes @ Labs

An Oral History of The Machine—Chapter Five: Hardware


Chapter Five: Hardware

 By Curt Hopkins, Managing Editor, Hewlett Packard Labs

The Machine is a computing architecture so radically different from anything that has come before that it will affect everything we do in the future. Hewlett Packard Labs has spent the last five years developing the Memory-Driven Computing architecture, photonics, and fabric that have gone into The Machine and made the impossible inevitable.

We spoke to several dozen researchers – programmers, architects, open source advocates, optical scientists, and others – to construct a nine-part oral history of the years-long effort behind the most fundamental change in computing in 70 years.

These men and women are not only scientists but also compelling storytellers with an exciting history to relate. If you’re interested in how differently we will be gathering, storing, processing, retrieving, and applying information in the near future, or if you just enjoy good stories about science and discovery, read on.

If you would like to read other entries in the series, click here

CULLEN BASH

Senior Director, Platform Architecture Lab. Worked on precursor technologies.

For a functioning Memory-Driven Computing architecture, several things have to come together: a fabric that gives processors load-store access to large pools of memory, non-volatile memory to build those pools, and photonics to reach large amounts of memory.

Currently, we build two classes of machines: scale-out and scale-up. With scale-out, nothing is shared. Everything sits in individual enclosures placed in racks, and each server is its own coherency domain. Communication with other servers is non-coherent.

In scale-up computing, you create a system within which everything is shared. The problem with scale-up is that you can only build systems so big before they get too complex. About 64 nodes is the upper end.

The Machine is a combination of the two. It looks like a scale-out system, but instead of a single coherency domain spanning the entire cluster, it has a pool of shared NVM. You can pull data from any machine in the family without worrying whether it’s new or old data.
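From software, load-store access to a shared pool like that can look roughly like the sketch below, a minimal illustration using standard POSIX mmap. The device path /dev/fam0, the mapping size, and the idea of exposing the pool as a device file are assumptions made for the example, not The Machine’s actual programming interface.

    /*
     * Minimal sketch: load-store access to a shared, fabric-attached
     * memory pool. The device path and size are hypothetical; the point
     * is that once mapped, the pool is reached with ordinary loads and
     * stores rather than block I/O or message passing.
     */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define POOL_BYTES (1UL << 30)   /* map 1 GiB of the pool */

    int main(void)
    {
        int fd = open("/dev/fam0", O_RDWR);   /* hypothetical FAM device */
        if (fd < 0) { perror("open"); return 1; }

        uint64_t *pool = mmap(NULL, POOL_BYTES, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
        if (pool == MAP_FAILED) { perror("mmap"); return 1; }

        pool[0] = 42;                                 /* a store into the shared pool */
        printf("read back: %" PRIu64 "\n", pool[0]);  /* a load from the same pool */

        munmap(pool, POOL_BYTES);
        close(fd);
        return 0;
    }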

PAOLO FARABOSCHI

Fellow, Systems Research Lab

What we wanted to do with The Machine was to put data at the center of the IT infrastructure. To do that, we needed to change the way in which we access memory, and that is something we couldn’t do with off-the-shelf processors. We had to go look at the foundational memory access mechanisms at the heart of the compute elements for something we could leverage to get what we wanted. To change the way the processor communicates with memory, we needed connections very deep inside the processor microarchitecture.

Initially, we considered building our own processor, but it rapidly dawned on us that it would be a daunting task that would take too long. In computing systems today, you cannot do everything from scratch.

So, we decided to go on a tour and talk to all the processor partners we were working with, and evaluate whether what they were planning to build had the features we needed. The partner we ultimately decided to work with was already deep in the design of a server-class processor based on the latest version of the 64-bit ARM instruction set architecture. The advantage of going down that path was the ability to leverage their existing CPU effort while at the same time designing a system architecture that would fit our needs, one that we could turn around and use to make our Memory-Driven Computing vision real.

In computer architecture there are very, very few revolutions.

There are many incremental evolutions. I think with The Machine hardware architecture we are somewhere in between. If you look at the core of what we have done, we have introduced the new concept of fabric-attached memory: a set of memory elements that all nodes can access, but that are not fully shared in the traditional cache-coherent sense of shared memory. You can think of this as an incremental hardware change, but we believe it is also revolutionary because it enables a completely different way to organize data and access it.
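Because the hardware does not keep caches coherent across nodes for this memory, software has to publish its writes explicitly before another node can safely read them. The sketch below is a conceptual illustration of that discipline, not The Machine’s actual API: it reuses the hypothetical mapping from the earlier example, uses POSIX msync() as a stand-in for whatever flush and fence primitives the real platform provides, and invents the record layout and valid flag for the example.

    /*
     * Conceptual sketch: publishing a record through a non-coherent,
     * fabric-attached pool. msync() stands in for the platform's real
     * flush/fence mechanism; the record layout is hypothetical.
     */
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct record {
        char     payload[56];
        uint64_t valid;           /* a reader on another node polls this flag */
    };

    /* msync() needs a page-aligned address, so round down to the page base. */
    static void flush_range(void *addr, size_t len)
    {
        uintptr_t page_mask = (uintptr_t)sysconf(_SC_PAGESIZE) - 1;
        uintptr_t base = (uintptr_t)addr & ~page_mask;
        msync((void *)base, len + ((uintptr_t)addr - base), MS_SYNC);
    }

    void publish(struct record *slot, const char *msg)
    {
        /* 1. Write the payload while the flag still reads "not valid". */
        strncpy(slot->payload, msg, sizeof slot->payload - 1);
        slot->payload[sizeof slot->payload - 1] = '\0';

        /* 2. Push the payload out to the shared pool before touching the flag. */
        flush_range(slot->payload, sizeof slot->payload);

        /* 3. Only then set and flush the flag, so another node never observes
         *    the flag without the completed payload behind it. */
        slot->valid = 1;
        flush_range(&slot->valid, sizeof slot->valid);
    }

In a traditional cache-coherent shared-memory system, the explicit flushes in steps 2 and 3 would not be needed; that difference is the sense in which the pool is shared but not coherently shared.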

Memory is the last non-shared resource in the datacenter, and we’re making it a common resource like everything else (such as processing, storage, or networking, as in our Composable Infrastructure product, HPE Synergy). People are going to start referring to this as something that wasn’t there before, and it opens up several novel software and data management concepts.

The next generation of The Machine will have different form factors, different ways to attach the fabric, and different ways to combine memory and compute elements. Its physical implementation will look different, but the basic technology is already out there, being referenced by academics and industry.

To read the other chapters in the series, click here

