Behind the scenes at Labs
An Oral History of The Machine—Chapter Eight: Public poetry and private mythology



Chapter Eight: Public poetry and private mythology

By Curt Hopkins, Managing Editor, Hewlett Packard Labs

The Machine is a computing architecture so radically different from any that has come before that it will affect everything we do in the future. Hewlett Packard Labs has spent the last five years developing the memory-driven computing, photonics, and fabric technologies that have gone into The Machine and made the impossible inevitable.

We spoke to several dozen researchers – programmers, architects, open source advocates, optical scientists, and others – to construct a nine-part oral history of the years-long process of creating the most fundamental change in computing in 70 years.

These men and women are not only scientists but also compelling storytellers with an exciting history to relate. If you’re interested in how differently we will be gathering, storing, processing, retrieving, and applying information in the near future, or if you just enjoy good stories about science and discovery, read on.

If you would like to read other entries in the series, click here

RICHARD LEWINGTON

Technical Communicator, Hewlett Packard Labs. Worked as Martin Fink’s communications manager at Labs.

An OK on risk

When Martin Fink announced The Machine in 2014, it felt like a renaissance. We felt we had a vision and were not just implementing an industry-standard component or changing the color of a server.

This was a return to it being OK to take some risk.

Really, no one told us it was a waste of time and money, and we heard almost nothing negative when we announced it.

Externally, the press was guardedly positive, with a little skepticism about whether HP could execute such a huge project. All the tech blogs covered it, and a short animation explaining the project got some great viewing numbers because it was embedded in an IFLScience post.

Dell’s CEO said no one’s going to rewrite their code for something like this, which we naturally took to mean we were on the right track! And IBM started using some of the same phrases we were using to describe The Machine. Academically, the reaction was very positive. Customers, generally speaking, responded very, very positively. What we got from them was an outpouring of things they wanted to see done, of problems they couldn’t see how to address with current technology.

One of the things we got dinged on was that The Machine was so far from commercialization when we first announced the program, but Martin felt it was essential to plant the seeds really early and communicate our vision to the industry.

STAN WILLIAMS

HPE Senior Fellow, Director of Foundational Technologies at Hewlett Packard Labs. Worked on The Machine’s foundation technologies.

We understood the flaws

When we were developing the Memristor, we understood the issue of flaws before anything went to manufacturing. We even wrote about it in an IEEE Spectrum article, “How We Found the Missing Memristor.”

Yes, these devices work because of defects, but we were introducing those defects intentionally. Initially the devices weren’t working well, so I asked Doug Volver to deliberately put defects into them. I had theorized that defects in the device were what caused it to switch. He kept at it, and we saw it work before we sent it to be manufactured.

We understood that what most people would consider a flaw, something that would ruin a transistor, was actually what made the Memristor device work. Scientific understanding was what opened doors and made things go. There was a lot of resistance from the technical community about this. Others had their favorite theories on why Memristors worked, but in the end the mechanism we proposed was correct.

Development is much, much harder than research.

In research, you get your proof of concept and you dust off your hands. Development is turning that theory into something that can be produced reliably and inexpensively. Our original partner for the development of the Memristor was, in the end, the wrong choice. Our new partnership with SanDisk is allowing us to move forward again.

We had been making Memristors for hundreds of years and hadn’t understood them. In the late Nineties, working in molecular electronics, we were trapping molecules in crossbars. The problem was that molecules are fragile. They burned up, fell apart, and got eaten when current ran through them.

Our biggest contribution was in recognizing what a Memristor actually was.

ANDREW WHEELER

HPE Fellow, Deputy Director of Hewlett Packard Labs

Spinning up a new partnership

Before the announcement, everyone had a mission, but it was a loosely coupled set of research projects. Suddenly, we had a target, a goal, and a date in mind, and we had to get a plan in place that got us there. We weren’t there just to do research and write a few papers; we had to mobilize.

When it came to hardware, we received a budget to develop a central tenet of The Machine architecture, the persistent memory pool, centered on the Memristor program. When I came on, we had been collaborating with a partner. We had working devices in the lab and a good partnership.

But over the next year, it became clear that the partnership wasn’t quite working on the timetable we envisioned. Between cultural dynamics, geographic distance, and other issues, we had to find a different partner and spin that relationship up.

So now we’ve partnered with SanDisk (now Western Digital), and when it comes to the business and technical alignment for developing this new memory, we couldn’t have a better partner.

PAOLO FARABOSCHI

Fellow, Systems Research Lab

Revelation as evolution

The hardware prototype of The Machine was initially conceived as a way to combine Memristor, photonics, and system-on-chip technologies, many of which we had already been working on for several years.

A cornerstone of the design was our Memristor technology, which up to that point we had developed in collaboration with SK Hynix. Many of the architectural choices were made based on the expected characteristics of that device. In the spring of 2015, it became obvious that some of the obstacles in device manufacturing were more severe than we had anticipated, and the relationship with SK Hynix was not producing the results we were hoping for.

So, while our memory team was trying to identify alternative paths for the Memristor, the architecture team started redesigning The Machine around as many different memory device options as we could come up with. We went back to our lab and started whiteboarding the problem, mapping a decision tree with as many variants as we could think of: the Memristor happens, or it doesn’t happen; we use DRAM instead, or we use Flash; what if the density, speed, or energy profile of the device is this or that? And so on.

We turned that whiteboard into a presentation that we gave to Martin in the Lambda conference room in Palo Alto sometime in mid-2015. We also had the VLSI design group from Ft. Collins, which was responsible for the chipset, on the screen.

What came out of that discussion was that we had underestimated the importance of how we were building The Machine. Once we realized that persistence and non-volatility are orthogonal to one another, we started to understand the core value of a fabric-attached memory architecture.

If you think about it, even DRAM, which is inherently volatile, remains persistent across processor failures, whether hardware or software, once you attach it to a fabric that is outside the CPU failure domain.
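That point about volatile memory behaving persistently once it sits outside the CPU failure domain can be sketched with ordinary POSIX memory mapping. The following C sketch is purely illustrative and not from The Machine’s codebase; it uses a local file, fam_pool.bin, as a hypothetical stand-in for a fabric-attached memory window, to show that data written through a shared mapping survives the death of the process (and processor lifetime) that wrote it.

```c
/* Illustrative sketch only: a file-backed shared mapping stands in for a
 * fabric-attached memory pool that lives outside the CPU failure domain.
 * Data written on the first run is still there on a later run, even though
 * the writing process (our stand-in for a failed processor) is long gone.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE 4096

int main(void) {
    /* Open (or create) the stand-in memory pool. */
    int fd = open("fam_pool.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, POOL_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* MAP_SHARED: stores go to the backing store, not a private per-process copy. */
    char *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

    if (pool[0] == '\0') {
        /* First run: write a record, flush it, then let the process die. */
        strcpy(pool, "written before the processor went away");
        msync(pool, POOL_SIZE, MS_SYNC);
        printf("wrote record; restart this program to read it back\n");
    } else {
        /* Later run, in a new process lifetime: the data is still there. */
        printf("recovered after restart: %s\n", pool);
    }

    munmap(pool, POOL_SIZE);
    close(fd);
    return 0;
}
```

The analogy is loose, since a file on disk is doing the persisting here rather than a memory fabric, but the failure-domain argument is the same: the memory’s contents do not depend on the life of any one CPU or process.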

For us, this was the point when The Machine went from being a prototype proof point for underlying device technologies (like Memristor) to being a first-class architecture.

The Machine was not just a collection of new kinds of devices. It was the architecture itself that was new and innovative.

 To read the other chapters in the series, click here
