
Labs, UCSB Paper Aims to Solve Higher-Order Optimization Problems


One of Hewlett Packard Labs' most successful areas of research today involves the development of hardware "accelerators" that help IT systems keep up with runaway demands for compute power and energy in the age of AI. While accelerators come in many forms and serve diverse functions, they share one characteristic: They handle specific tasks more efficiently than general-purpose computers.

Given this backdrop, an engineering team at Labs embarked on a project focused on the following challenge: reducing the data movement required between compute and memory elements while solving hard optimization problems. Team members realized that analog devices can represent information in a more compact and energy-efficient way than digital electronic (binary) devices, and consequently decided to design dedicated analog electronic hardware for such problems. As the Labs team pursued its research, it struck up a partnership with a team from the University of California, Santa Barbara (UCSB) that was working on a similar initiative. The groups have now published a paper with their findings.

The paper, titled "Computing High-Degree Polynomial Gradients in Memory," appeared in the Sept. 18 issue of the scientific journal Nature Communications. Nature Communications editors later featured the paper in the publication's Editors' Highlights pages, which showcase the 50 best recent papers in a given area of research.

Finding the right pathway

In mathematics, a "gradient" describes the slope of a multidimensional surface. In optimization algorithms, a gradient represents a measure of "steepness" defined a different way: the sensitivity of the figure of merit with respect to changes in the input variables. Understanding where changes are steep and where they are not helps identify potential pathways toward better solutions. Tasks in extremely complex projects like AI modeling could follow an exponential number of pathways, so determining the right one can save a lot of time and work.
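To make the idea concrete, here is a minimal sketch (our illustration, not from the paper) of the gradient of a small higher-order polynomial objective; the function and values are invented for the example:

```python
# A degree-3 ("higher-order") objective in three variables:
# f(x) = x0*x1*x2 + x0**2 - x2
def objective(x):
    return x[0] * x[1] * x[2] + x[0] ** 2 - x[2]

# The gradient collects the partial derivatives of f, derived by hand.
# Each entry says how sensitive f is to a change in that variable.
def gradient(x):
    return [
        x[1] * x[2] + 2 * x[0],  # df/dx0
        x[0] * x[2],             # df/dx1
        x[0] * x[1] - 1,         # df/dx2
    ]

print(gradient([1.0, 2.0, 3.0]))  # [8.0, 3.0, 1.0]
```

An optimization algorithm reads a vector like this at every step to decide which direction in the search space looks most promising.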

The hardware developed by the researchers from Labs, UCSB and other partners saves a step in the computation of gradients for specific tasks. Here's how the process works.

Previously, optimization problems with large numbers of competing solutions could be solved using analog hardware (be it quantum, optical or electronic) only if the problem instances were first mapped to quadratic polynomial objective functions. The systems that performed the calculations couldn't scale up to handle "higher-order" problems in scenarios like protein folding and AI-model verification, which contain more complicated interactions (i.e., beyond quadratic) between the variables. So an extra step was required to convert the problem from one form to the other, which slowed down processing.
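As an illustration of that extra step, here is a hedged sketch of one standard reduction technique, Rosenberg quadratization; this is our example of what such preprocessing looks like, not necessarily the exact mapping the paper compares against. A cubic term is rewritten as a quadratic one by introducing an auxiliary variable and a penalty that forces the auxiliary variable to equal the product it replaces:

```python
# Quadratizing the cubic term x*y*z: introduce w intended to equal x*y,
# then penalize any binary assignment where w != x*y.
P = 10  # penalty weight (assumed; must exceed the term's coefficient)

def cubic_term(x, y, z):
    return x * y * z

def quadratized(x, y, z, w):
    # Rosenberg penalty: zero when w == x*y, positive otherwise
    # (for binary x, y, w).
    penalty = x * y - 2 * x * w - 2 * y * w + 3 * w
    return w * z + P * penalty  # now only quadratic interactions

# With w chosen correctly (w = x*y), both objectives agree on every
# binary assignment -- but the problem has gained an extra variable.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert quadratized(x, y, z, w=x * y) == cubic_term(x, y, z)
```

Every higher-order term handled this way adds auxiliary variables and penalty terms, which is exactly the overhead the new hardware avoids.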

"We showed how advanced combinatorial optimization problems do not need to be preprocessed to a less efficient format with additional auxiliary variables that would slow down the problem solution," said Thomas Van Vaerenbergh, a research engineer at Hewlett Packard Labs and one of the nine co-authors of the paper.

In a blog post published on the UCSB Engineering site, one of the paper's lead authors said most currently proposed state-of-the-art hardware cannot efficiently solve industrially relevant optimization problems with large numbers of variables.

"The main benefit of our hardware," UCSB PhD student Tinish Bhattacharya said, "is that it can solve problems like Boolean Satisfiability in their native high-order space without having to do any pre-processing, potentially providing exponential speedup over current hardware architectures that are limited to second-order objective functions."
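"Native high-order space" can be pictured as follows: each clause of a Boolean Satisfiability formula maps directly to a polynomial term whose degree equals the clause length, with no auxiliary variables. A minimal sketch (our illustration, not the paper's exact encoding):

```python
# A 3-SAT clause becomes a degree-3 polynomial term that is 1 when the
# clause is violated and 0 otherwise, so the whole formula becomes a
# high-order polynomial whose minimum (0) is a satisfying assignment.
def clause_violation(x1, x2, x3):
    # Clause (x1 OR x2 OR NOT x3): violated only when x1=0, x2=0, x3=1.
    return (1 - x1) * (1 - x2) * x3

def formula_cost(x):
    # Formula: (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3),
    # with x[0]=x1, x[1]=x2, x[2]=x3.
    return clause_violation(x[0], x[1], x[2]) + x[0] * (1 - x[2])

print(formula_cost([0, 1, 1]))  # 0 -> a satisfying assignment
```

Hardware that works directly on such cubic (or higher) terms skips the quadratization step entirely, which is where the potential speedup comes from.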

Bringing compute closer to memory

Another key feature of the hardware is its ability to perform in-memory computing, an alternative computing architecture in which computations are executed within the memory array itself. This significantly mitigates the memory-compute bottleneck by bringing the compute closer to the memory.
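A simplified way to picture this (our conceptual model, not the paper's circuit): in an analog memory array, a matrix stored as device conductances multiplies an input vector of voltages in a single physical step, so the operands never travel to a separate processor. A digital emulation of that one step:

```python
import numpy as np

# G plays the role of a matrix "stored in memory" as conductances;
# v plays the role of input voltages applied to the array.
rng = np.random.default_rng(0)
G = rng.random((4, 3))
v = rng.random(3)

# In the analog array, Ohm's law and current summation produce
# i = G @ v in one shot; here we emulate that result digitally.
i = G @ v
print(i)
```

In a conventional architecture, G would have to be streamed out of memory row by row to a processor and back, which is the bottleneck in-memory computing sidesteps.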

Van Vaerenbergh said future research will focus on additional problem classes, as well as problems with more variables.

"There is still a big gap between our small-scale prototype and being able to really have an impact on the problems that people care about at the scale people care about," he said. "At some point we'll see that our analog computer has a size constraint. We're not going to be able to map one big problem onto one chip. We're going to have to work with multiple engines talking to each other over a network. That's what our current research is about: proving that the benefit is still there when we work with multiple engines. And if the benefit is still there, then maybe eventually we can make a product out of this."

The "Computing High-Degree Polynomial Gradients in Memory" paper was published by Thomas Van Vaerenbergh, Giacomo Pedretti, Xia Sheng, Ray Beausoleil and Jim Ignowski of Hewlett Packard Labs; Tinish Bhattacharya, George H. Hutchinson and Dmitri B. Strukov of UCSB; and John Paul Strachan of FZJ.

About the Author

Labs_Editorial