An Oral History of The Machine—Chapter Seven: A funny thing happened on the way to The Machine

Curt_Hopkins on 11-22-2016 08:56 AM



By Curt Hopkins, Managing Editor, Hewlett Packard Labs

The Machine is a computing architecture so radically different from any that has come before that it will affect everything we do in the future. Hewlett Packard Labs has spent the last five years developing the memory-driven computing, photonics, and fabric that have gone into The Machine and made the impossible inevitable.

We spoke to several dozen researchers – programmers, architects, open source advocates, optical scientists, and others – to construct a nine-part oral history of the years-long process of creating the most fundamental change in computing in 70 years.

These men and women are not only scientists, they are also compelling storytellers with an exciting history to relate. If you’re interested in how differently we will be gathering, storing, processing, retrieving, and applying information in the near future, or you just enjoy good stories about science and discovery, read on.

If you would like to read other entries in the series, click here

APRIL SLAYDEN MITCHELL

Director, Programmability and Analytics Workloads

Jaap Suermondt and I were in the Executive Briefing Center in Palo Alto talking to customers about The Machine. At one point, we were presenting to a financial services customer and we said, “Let’s float Spark.” So we presented our findings to them – how we had been able to use Machine technology to increase the speed of Spark by 10x-20x – and they went crazy. Martin saw that and said, “Let’s get this out – let’s find a partner.”

We had found that our modified Spark ran better not just on The Machine, but also on hardware that’s available now. That led us to look at whether other products could achieve better performance on current technology.

A “spinoff” is usually just a transfer from prototype to development. We are aiming at something further out, but finding along the way that the technologies we create can be useful now.

When it comes to The Machine, everything is a test. There are no examples or hypotheses you can rely on. We want to have these aha! moments before there is a product. Although we’re researchers, delivering real products to market can be very fulfilling too.

JUN LI

Distinguished Technologist, Software and Analytics Lab

Spark for The Machine

Right after The Machine program started, we were charged with investigating what could be done with minimum code changes to help create an ecosystem for The Machine. So we asked the question:

“Can I make today’s open source software run on The Machine?”

When we first tried Apache Hadoop – Hadoop File System and MapReduce – in 2013, there was very little improvement, and so we came to the conclusion that if you want high performance from Memory-Driven Computing, you need to change the software architecture significantly. Making the low-level software faster on its own often isn’t enough to make the whole application run faster. If the higher-level software layers are still written for a disk-based world, those performance gains will be lost.
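That architectural point can be made concrete with a toy sketch: if a framework insists on spilling intermediate results to disk between stages, faster low-level pieces only shrink part of the cost, while restructuring to keep intermediates in memory removes it. The example below illustrates the principle only, not Hadoop’s or Spark’s actual internals.

```python
# Toy illustration: a two-stage pipeline written for a disk-based world
# versus one that keeps its intermediate result in memory. Pure
# illustration, not the Hadoop or Spark internals.
import os
import pickle
import tempfile
import time

data = list(range(1_000_000))

def stage1(xs):
    return [x * 2 for x in xs]

def disk_pipeline(xs):
    # Disk-era design: persist the intermediate result, then read it back
    # before the next stage can run.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        pickle.dump(stage1(xs), f)
        path = f.name
    with open(path, "rb") as f:
        intermediate = pickle.load(f)
    os.unlink(path)
    return sum(intermediate)

def memory_pipeline(xs):
    # Memory-driven design: hand the intermediate straight to the next stage.
    return sum(stage1(xs))

for fn in (disk_pipeline, memory_pipeline):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```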

We chose Apache Spark as our next platform for investigation. With Apache Spark we had a much smaller architectural mismatch than we had with other programs we tried, because it is designed for in-memory computing. Furthermore, many architectural modules in Spark have been designed to support different implementations by conforming to a pre-defined set of interfaces.

We made architectural changes in particular to the shuffle engine and object caching. Between October 2014 and November 2015, we were able to achieve a 15x improvement for a representative large-scale graph analytics workload when running the application on an HPE Superdome X system.

At that point, we realized that our enhanced version of Spark could be its own product. Now we’re working to open source it with Hortonworks.

Our modules have been packaged as a dynamically-loadable package that the customer application can load on demand at runtime. Building on that success, we are now applying the memory-driven techniques we developed to a large-scale security analytics application framework.
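As an illustration of the plug-in mechanism Li describes, here is a minimal PySpark sketch of swapping in an alternative shuffle engine through Spark’s pre-defined interfaces and loading it at runtime. The class name and jar path are hypothetical placeholders, not the actual HPE module names.

```python
# Hypothetical example: plug a custom shuffle engine into Spark through its
# pluggable ShuffleManager interface. "com.example.shuffle.MemShuffleManager"
# and the jar path are placeholders, not HPE's published module names.
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (
    SparkConf()
    .setAppName("graph-analytics")
    # Spark instantiates whatever ShuffleManager implementation this
    # fully qualified class name points at.
    .set("spark.shuffle.manager", "com.example.shuffle.MemShuffleManager")
    # The implementation ships as an ordinary jar, loaded at runtime.
    .set("spark.jars", "/opt/example/mem-shuffle.jar")
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()
# From here, existing Spark applications run unchanged; only the engine
# underneath the shuffle has been replaced.
```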

NIGEL EDWARDS

Distinguished Technologist, Security and Manageability Lab. Involved with a cross-company initiative to improve relevance with developers.

Secure Containers

I was involved with a cross-company initiative looking at what technology developers were using and how HPE could maintain and improve our relevance. I kept hearing people talk about container technology. So I focused on a robust, secure environment for containers, looking for ways to create isolation between containers, the same way we use isolation between virtual machines, to substantially improve security.
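The isolation Edwards describes builds on facilities such as Linux kernel namespaces. The sketch below, which assumes a Linux host and root privileges, shows that raw facility in miniature; it illustrates the general mechanism containers rely on, not HPE’s hardened-kernel work.

```python
# Minimal sketch of kernel-level container isolation: give a child process
# its own PID and mount namespaces so it cannot see processes or remap
# filesystems outside its sandbox. Requires Linux and root privileges.
import ctypes
import os

CLONE_NEWPID = 0x20000000  # new PID namespace
CLONE_NEWNS = 0x00020000   # new mount namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def run_isolated(command):
    # unshare() does not move the caller itself into the new PID namespace;
    # the *next* child we fork lands there as PID 1.
    if libc.unshare(CLONE_NEWPID | CLONE_NEWNS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (need root?)")
    pid = os.fork()
    if pid == 0:
        os.execvp(command[0], command)
    os.waitpid(pid, 0)

if __name__ == "__main__":
    # The shell sees itself as PID 1 inside its own namespace.
    run_isolated(["/bin/sh", "-c", "echo pid in namespace: $$"])
```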

In April of 2015, I started to collaborate with my colleagues at the Security and Manageability Lab in Bristol – Chris Dalton and Rych Hawkes. Subsequently the collaboration broadened to include colleagues in the HPE storage and Linux businesses. We developed Secure Containers independently of The Machine; however, it was naturally relevant to The Machine.

Secure Containers was not a spinoff, more a spin-in to The Machine.

We demonstrated some of the functions and features at Discover 2016 in Las Vegas and again at DockerCon in Seattle in June 2016. Currently, it has three main components: a hardened Linux kernel, which removes a major attack vector; the HPE Adaptive File System it runs on (48 nodes, several thousand containers); and the Loom management engine, which visualizes the relationships between containers and artifacts across the whole stack, from disk spindles to applications.

A key component is a modification of the Linux kernel which will be open source. On top of this, we will provide manageability and continually-secured execution for containers. It’s likely to make its way to customers one way or another, possibly through Software Defined Infrastructures, or through The Machine.

AMIT SHARMA

Director, Memristor Program. Started at Hewlett Packard when Hewlett and Packard were still around.

NVM-DIMMs

A standard server has an industry-standard form factor and well-defined interfaces for memory: the dual in-line memory module (DIMM). By putting HPE’s Memristor-based non-volatile memory (NVM) into a DIMM form factor and implementing compatible interfaces, every single server we ship will be able to provide NVM.

We’ve already started to build DIMMs that are not Memristor-based, but instead use a combination of DRAM and flash to provide limited non-volatility, in our HPE Persistent Memory line. You won’t need to buy The Machine to access HPE’s Memristor innovation. This is to prepare the ecosystem, to get developers to start thinking differently in advance of The Machine.
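To sketch what byte-addressable NVM in a DIMM slot means for software, the example below memory-maps a file from a persistent-memory-backed (DAX) filesystem and updates it with ordinary loads and stores rather than block I/O. The mount point is a hypothetical placeholder, and the flush call stands in for platform-specific persistence instructions.

```python
# Minimal sketch of consuming byte-addressable NVM on the memory bus: map a
# file from an assumed DAX mount of an NVDIMM region straight into the
# address space and update it in place. "/mnt/pmem0" is hypothetical.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/counter.bin"  # assumed DAX-mounted NVDIMM region
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# With DAX, the mapping is backed by the NVM media itself: ordinary stores
# become durable once flushed, with no page cache or block driver between.
buf = mmap.mmap(fd, SIZE)
count = int.from_bytes(buf[0:8], "little")
buf[0:8] = (count + 1).to_bytes(8, "little")
buf.flush()  # force the update out to the persistent media
print(f"restart count: {count + 1}")

buf.close()
os.close(fd)
```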

To read the other chapters in the series, click here

About the Author

Curt_Hopkins

Managing Editor, Hewlett Packard Labs
