Behind the scenes @ Labs

Labs’ Cat Graves gives supercomputing experts the lowdown on the Dot Product Engine (UPDATED: VIDEO)


Cat Graves (photo by Rebecca Lewington)

By Curt Hopkins, Managing Editor, Hewlett Packard Labs

Labs research scientist Cat Graves gave an invited talk this month at the SC17 supercomputing conference in Denver entitled “Computing with Physics: Analog Computation and Neural Network Classification with a Dot Product Engine.”

Supercomputers these days are addressing what Graves called “huge scale computational physics problems,” like simulating catalysis, genome transcription assembly, or large-scale geological events like earthquakes. Many of these systems are so large and have such enormous energy demands that they require national laboratories or large corporations to support them. Yet the costs are such that these groups are “unable to push the value on the computation.” In other words, “twice as many computational resources don’t translate to twice as much computation anymore.”

The bottlenecks in memory and computation are slowing down the efficiency of supercomputers.

As an example of this trend, Graves held up D.E. Shaw Research’s Anton computer, a 512-node supercomputer using a 3D torus chip construction for one job only: running molecular dynamics software to map protein folding or model drug interactions. There are a grand total of three of these machines, including the faster Anton 2.

The problem with this type of special-purpose supercomputing, says Graves, is a loss of flexibility, portability, and re-programmability, in addition to the cost.

“The idea of Memory-Driven Computing and one goal we’re working toward at Labs,” says Graves, “is to create an ecosystem that encourages the development and use of computational accelerators that are also flexible, which scale edge to data center to exascale. This ecosystem will allow the user to select whichever specific accelerator you wish and just slot it in.”

What Graves is working on as part of the Rebooting Computing team is “flexible computational accelerators as good as or preferably better than if you had designed it in a digital ASIC, a tall order! We’re approaching it by going beyond CMOS to take advantage of novel device behavior to get there.” One key novel device technology the team is leveraging is the memristor, with its inherently analog nature.

The Dot Product Engine (DPE) is the first accelerator the team has created. The DPE accelerates the core mathematical operation of matrix multiplication, which is ubiquitous in machine learning, AI, signal processing applications, and more. In particular, convolutional neural networks, such as those used for image classification, spend about 90 percent of their overall program computing matrix multiplications. With the DPE, HPE’s Rebooting Computing team has shown image recognition algorithms can run up to 15 times faster than the leading state-of-the-art digital ASIC – a customized CMOS chip built for just this operation – while retaining reprogrammability and flexibility in the DPE accelerator.
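
To make the idea concrete, here is a minimal sketch, in Python with NumPy, of how an analog memristor crossbar performs a matrix-vector multiplication: matrix entries are mapped onto device conductances, the input vector is applied as row voltages, and Ohm’s law plus Kirchhoff’s current law sum the products as column currents. The function names and the conductance range are illustrative assumptions, not the team’s actual implementation.

import numpy as np

# Illustrative sketch of an analog crossbar dot product (not HPE's DPE code).
# A weight matrix W is mapped onto memristor conductances G, the input vector x
# is applied as row voltages, and each column current I[j] = sum_i G[i, j] * x[i]
# realizes one dot product in a single analog step.

G_MIN, G_MAX = 1e-6, 1e-4  # assumed conductance range in siemens

def map_weights_to_conductances(W):
    # Linearly map weight values onto the available conductance range.
    w_min, w_max = W.min(), W.max()
    return G_MIN + (W - w_min) * (G_MAX - G_MIN) / (w_max - w_min)

def crossbar_multiply(G, v):
    # Column currents from Ohm's law and Kirchhoff's current law: I = v @ G.
    return v @ G

# Example: a 4x3 weight matrix and a 4-element input vector.
W = np.random.randn(4, 3)
x = np.random.rand(4)          # input encoded as row voltages

G = map_weights_to_conductances(W)
I = crossbar_multiply(G, x)    # analog result: one current per column
print(I)

The point of the sketch is that the entire multiply-accumulate happens in one analog step per column, rather than as a long sequence of digital operations, which is where the potential speed and energy savings come from.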

With a Memory-Driven Computing architecture, the Dot Product Engine and other accelerators can provide a low-power, high-performance alternative to special-purpose heterogeneous supercomputing.

“The next stage,” says Graves, is “to move all the electronic arrays that control the memristors from printed circuit boards chilling out next to the probe station onto the chip.”

The end result will hopefully be an ecosystem that prioritizes adaptability, cost-savings, and energy conservation without giving up computing power. Labs has certainly proven the idea is practical and taken a very exciting first step.

About the Author

Curt_Hopkins

Managing Editor, Hewlett Packard Labs

Comments
dubina


I wonder if anyone could explain the difference(s) between memristor arrays and the phase change memory (PCM) described below.


"Intel Corporation and Numonyx B.V. today announced a key breakthrough in the research of phase change memory ("PCM"), a new non-volatile memory technology that combines many of the benefits of today's various memory types. For the first time, researchers have demonstrated a 64Mb test chip that enables the ability to stack, or place, multiple layers of PCM arrays within a single die. These findings pave the way for building memory devices with greater capacity, lower power consumption and optimal space savings for random access non-volatile memory and storage applications."

I am confused by some of the similarities and I would like to understand them better and be able to sort them out to some extent. Thanks.

martina_trucco

Hi @dubina - thanks for your comment! To get a fantastic overview of this subject directly from the source himself, may I suggest watching The Chua Lectures series that was hosted by Hewlett Packard Labs in 2016. You can watch all the replays here: https://www.youtube.com/playlist?list=PLtS6YX0YOX4eAQ6IrOZSta3xjRXzpcXyi

And there are also some featured recaps of the series on our own Behind the Scenes blog that you may want to read:

https://community.hpe.com/t5/tag/Chua/tg-p

Hope that helps!

- Martina Trucco, Hewlett Packard Labs
