Advancing Life & Work

Memristor tech helps new device store ranges instead of bits and bytes

A group of scientists, mostly from Hewlett Packard Labs, has published a paper in Nature Communications that outlines a “novel analog content addressable memory (CAM) based on emerging memristor devices for fast look-up table operations.”

“Tree-based machine learning performed in-memory with memristive analog CAM” was authored by Giacomo Pedretti, Catherine E. Graves, Sergey Serebryakov, Xia Sheng, and Martin Foltin, all of Labs; along with Can Li and Ruibin Mao of the University of Hong Kong; and John Paul Strachan of the Peter Grünberg Institut.

The accelerator

According to the paper, “(t)ree-based machine learning techniques, such as Decision Trees and Random Forests, are top performers in several domains…(h)owever, these models are difficult to optimize for fast inference at scale without accuracy loss in von Neumann architectures due to non-uniform memory access patterns.”

To solve this problem, lead author Pedretti and the rest of the team “propose for the first time to use the analog CAM as an in-memory computational primitive to accelerate tree-based model inference.” They used an efficient mapping algorithm that leverages the new analog CAM capabilities, so that each root-to-leaf path of a Decision Tree is programmed into a row, making this process much faster.
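To make the mapping concrete, here is a minimal Python sketch (not the authors' implementation) of the idea: each root-to-leaf path of a toy decision tree is flattened into one "row" of per-feature [low, high] ranges. The tree encoding and helper names are assumptions for illustration only.

```python
# Hypothetical sketch: flatten each root-to-leaf path of a decision tree
# into a CAM-style row of per-feature [low, high] ranges.

INF = float("inf")

# Toy tree encoding: internal node = (feature_index, threshold, left, right);
# leaf = ("leaf", label). Two features, three leaves.
tree = (0, 5.0,
        (1, 2.0, ("leaf", "A"), ("leaf", "B")),
        ("leaf", "C"))

def paths_to_rows(node, ranges=None, n_features=2):
    """Enumerate root-to-leaf paths as (ranges, label) rows."""
    if ranges is None:
        ranges = [[-INF, INF] for _ in range(n_features)]
    if node[0] == "leaf":
        # Store a copy of the accumulated ranges for this path.
        return [([r[:] for r in ranges], node[1])]
    feat, thr, left, right = node
    lo, hi = ranges[feat]
    rows = []
    # Left branch: feature <= threshold tightens the upper bound.
    ranges[feat] = [lo, min(hi, thr)]
    rows += paths_to_rows(left, ranges, n_features)
    # Right branch: feature > threshold tightens the lower bound.
    ranges[feat] = [max(lo, thr), hi]
    rows += paths_to_rows(right, ranges, n_features)
    ranges[feat] = [lo, hi]  # restore before returning
    return rows

rows = paths_to_rows(tree)
for ranges, label in rows:
    print(label, ranges)
```

Each resulting row can then be programmed into the CAM, so classifying an input becomes a parallel range lookup rather than a sequence of branch decisions.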

But what the team did exactly was to design a new cell, based on the memristor.

“Thanks to this new device, instead of storing digital information we can store a continuous value,” says Pedretti. “What we store now are ranges and we give an input which is made of continuous values.”
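The match semantics Pedretti describes can be sketched in a few lines of Python. This is an assumption-laden software analogy, not the hardware behavior: each "cell" stores a continuous [low, high] range instead of a bit, and a row matches only if every element of a continuous input vector falls inside its cell's range.

```python
# Hypothetical sketch of one analog-CAM row: each cell stores a
# continuous [low, high] range; the row matches only if every input
# value falls inside the corresponding cell's range.

def row_matches(row, inputs):
    return all(lo <= x <= hi for (lo, hi), x in zip(row, inputs))

row = [(0.2, 0.8), (1.5, 3.0)]        # two analog cells
print(row_matches(row, [0.5, 2.0]))   # inside both ranges
print(row_matches(row, [0.9, 2.0]))   # first value outside its range
```

A digital CAM would compare exact bit patterns; storing ranges is what lets a single row represent an entire decision path over continuous features.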

The reason why

“Right now, explainability is the focus of our team,” says Pedretti. “Think of it as a first step towards an ‘accelerator for explainable AI.’”

According to Pedretti, tree-based algorithms are more explainable than neural networks and deep learning. Those working in artificial intelligence have been developing huge, very complex models. But when people have to use the products of those models in industry or government, they often want to know not just what, but why. Why, for example, is a given image classified as a traffic light? Knowing that, in situations where proof or law is in question, can be as important as the classification itself.

“Imagine a situation in the financial or insurance industry, where you need to explain why you are not giving money to somebody,” says Pedretti. “Or in a clinical context, you may be required to explain why you gave, or withheld, a drug to a given patient.”

Labs’ CAM-accelerated, tree-based model does not just produce the desired result; it does so while allowing users to understand how and why a specific decision was reached.

An arrival

“The way we designed the stack, from materials to circuit and architecture, based as it is on memristor technology, has allowed us to create an efficient decision-making engine,” says Pedretti. “It has also allowed us to make it in such a way that it is explainable. But in addition to that, it is good.”

What Pedretti means is that they are able to process the data in a massively parallel way, so the accelerator does not have to wait for the first decision in the tree before moving on to the rest of the branches. It evaluates the tree as a whole, checking every path at once.

The movie “Arrival” and the story it was based on, “Story of Your Life,” are science fiction. But Fermat’s Principle of Least Time, the mathematical linchpin of the story, is not. Nor is the idea that you may bring into practice alternate ways of looking at time, life, and certainly computer architecture, ways that do not restrict you to doing one thing at a time.

Curt Hopkins
Hewlett Packard Enterprise

About the Author


Managing Editor, Hewlett Packard Labs