Labs, UCSB Paper Aims to Solve Higher-Order Optimization Problems

One of Hewlett Packard Labs’ most successful areas of research today involves the development of hardware “accelerators” that help IT systems keep up with runaway demands for compute power and energy in the age of AI. While accelerators come in many forms and serve diverse functions, they share one characteristic: They handle specific tasks more efficiently than general-purpose computers.

Given this backdrop, an engineering team at Labs set out to tackle a specific challenge: reducing the data movement required between compute and memory elements while solving hard optimization problems. Team members realized that analog devices can represent information more compactly and energy-efficiently than digital electronic (binary) devices, and consequently decided to design dedicated analog electronic hardware for such problems. As the Labs team pursued its research, it formed a partnership with a team from the University of California Santa Barbara (UCSB) that was working on a similar initiative. The two groups have now published a paper with their findings.

The paper, titled “Computing High-Degree Polynomial Gradients in Memory,” appeared Sept. 18 in the scientific journal Nature Communications. The journal’s editors later featured the paper in its Editors’ Highlights pages, which showcase the 50 best recent papers in a given area of research.

Finding the right pathway

In mathematics, a “gradient” describes the slope of a multidimensional surface. In optimization algorithms, a gradient measures “steepness” in a related sense: the sensitivity of the figure of merit to changes in the input variables. Knowing where changes are steep and where they are flat helps identify promising pathways toward better solutions. Complex tasks like AI modeling can follow an exponential number of possible pathways, so finding the right one early can save a great deal of time and work.
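
To make this concrete, here is a minimal sketch in Python of a gradient and a single gradient-descent step. The objective f is a made-up toy polynomial, not one from the paper.

```python
# Toy polynomial objective f(x, y) = x^2 * y + 3 * y^2 (hypothetical
# example). Its gradient, worked out by hand, is:
#   df/dx = 2*x*y,   df/dy = x^2 + 6*y.

def f(x, y):
    return x**2 * y + 3 * y**2

def grad_f(x, y):
    return 2 * x * y, x**2 + 6 * y

# One gradient-descent step moves against the gradient -- the
# direction in which f decreases most steeply.
x, y = 1.0, 2.0
gx, gy = grad_f(x, y)
step = 0.1
x, y = x - step * gx, y - step * gy
print(f"gradient = ({gx}, {gy}), new point = ({x:.2f}, {y:.2f})")
```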

The hardware developed by the researchers from Labs, UCSB and other partners saves a step in the computation of gradients for specific optimization tasks. Here’s how the process works.

Previously, optimization problems with large numbers of competing solutions could be solved using analog hardware – be it quantum, optical or electronic – only if the problem instances were first mapped to quadratic polynomial objective functions. The systems that performed the calculations couldn’t scale up to handle “higher-order” problems, such as protein folding and AI-model verification, that contain more complicated interactions between the variables (i.e., beyond quadratic). So an extra step was required to convert each problem from one form to the other, which slowed down processing.
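
This conversion step, often called quadratization, introduces auxiliary variables to bring a high-degree polynomial down to degree two. As an illustration, the following Python sketch applies the classic Rosenberg reduction to a made-up cubic term; the term and the reduction are standard textbook material, not drawn from the paper itself.

```python
from itertools import product

# Cubic objective over binary variables: f(x, y, z) = x*y*z.
# A standard quadratization (Rosenberg, 1975) introduces an auxiliary
# variable w intended to equal x*y, plus a penalty term that is zero
# exactly when w == x*y and positive otherwise.

PENALTY = 2  # must exceed any gain obtainable by violating w == x*y

def cubic(x, y, z):
    return x * y * z

def quadratized(x, y, z, w):
    # w*z stands in for the cubic term; the penalty enforces w == x*y.
    return w * z + PENALTY * (x * y - 2 * w * (x + y) + 3 * w)

# The two objectives agree at every point once w is chosen optimally,
# but the quadratic version pays with an extra variable per reduced
# term -- overhead that grows with problem size.
for x, y, z in product((0, 1), repeat=3):
    best_w = min((0, 1), key=lambda w: quadratized(x, y, z, w))
    assert cubic(x, y, z) == quadratized(x, y, z, best_w)
print("quadratized objective matches the cubic one everywhere")
```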

“We showed how advanced combinatorial optimization problems do not need to be preprocessed to a less efficient format with additional auxiliary variables that would slow down the problem solution,” said Thomas Van Vaerenbergh, a research engineer at Hewlett Packard Labs and one of the nine co-authors of the paper.

In a blog post published on the UCSB Engineering site, one of the paper’s lead authors said most of the currently proposed, state-of-the-art hardware cannot efficiently solve problems with the large numbers of variable combinations found in industrially relevant optimization.

“The main benefit of our hardware,” UCSB PhD student Tinish Bhattacharya said, “is that it can solve problems like Boolean Satisfiability in their native high-order space without having to do any pre-processing, potentially providing exponential speedup over current hardware architectures that are limited to second-order objective functions.”
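
To see what solving Boolean Satisfiability “in its native high-order space” can look like, here is a small Python sketch. It encodes a toy 3-SAT instance (invented for illustration, not drawn from the paper) as a cubic polynomial whose value counts violated clauses, so minimizing the polynomial solves the instance with no second-order preprocessing.

```python
# Each 3-literal clause becomes a degree-3 product term that equals 1
# exactly when the clause is violated; the total "energy" therefore
# counts violated clauses, and energy 0 means a satisfying assignment.

# Toy instance: (x0 OR NOT x1 OR x2) AND (NOT x0 OR x1 OR NOT x2),
# with each literal stored as (variable index, negated?).
clauses = [((0, False), (1, True), (2, False)),
           ((0, True), (1, False), (2, True))]

def energy(assignment):
    total = 0
    for clause in clauses:
        term = 1
        for idx, negated in clause:
            value = assignment[idx] if not negated else 1 - assignment[idx]
            term *= 1 - value  # factor is 1 only when the literal is false
        total += term          # one degree-3 monomial per clause
    return total

print(energy([1, 0, 0]))  # prints 0: a satisfying assignment
```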

Bringing compute closer to memory

Another key feature of the hardware is in-memory computing, an alternative computing architecture in which computations are executed within the memory array itself. Because data no longer shuttles between separate memory and processor, this significantly mitigates the memory-compute bottleneck.
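
The following Python sketch captures the data-movement idea only, not any of the paper’s circuit details: a crossbar array stores a matrix as analog conductances, and applying input voltages produces the matrix-vector product in place via Ohm’s and Kirchhoff’s laws, so the stored matrix never travels to a separate processor.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 4))  # conductances: the stored matrix
v = rng.uniform(-1.0, 1.0, size=4)      # input voltages: the vector

# Conventional flow: read all 16 matrix entries out of memory, then
# multiply-accumulate on a CPU. In-memory flow: the currents i = G @ v
# appear at the array's outputs directly, so data movement scales with
# the vectors rather than with the (much larger) matrix.
i = G @ v
print(np.round(i, 3))
```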

Van Vaerenbergh said future research will focus on additional problem classes, as well as problems with more variables.

“There is still a big gap between our small-scale prototype and being able to really have an impact on the problems that people care about at the scale people care about,” he said. “At some point we’ll see that our analog computer has a size constraint. We’re not going to be able to map one big problem onto one chip. We’re going to have to work with multiple engines talking to each other over a network. That’s what our current research is about – proving that the benefit is still there when we work with multiple engines. And if the benefit is still there, then maybe eventually we can make a product out of this.”

The “Computing High-Degree Polynomial Gradients in Memory” paper was authored by Thomas Van Vaerenbergh, Giacomo Pedretti, Xia Sheng, Ray Beausoleil and Jim Ignowski of Hewlett Packard Labs; Tinish Bhattacharya, George H. Hutchinson and Dmitri B. Strukov of UCSB; and John Paul Strachan of Forschungszentrum Jülich (FZJ).
