Labs, UCSB Paper Aims to Solve Higher-Order Optimization Problems
One of Hewlett Packard Labs' most successful areas of research today involves the development of hardware "accelerators" that help IT systems keep up with runaway demands for compute power and energy in the age of AI. While accelerators come in many forms and serve diverse functions, they share one characteristic: they handle specific tasks more efficiently than general-purpose computers.
Against this backdrop, an engineering team at Labs set out to tackle a specific challenge: reducing the data movement between compute and memory elements required to solve hard optimization problems. Team members realized that analog devices can represent information more compactly and energy-efficiently than digital electronic (binary) devices, and consequently decided to design dedicated analog electronic hardware for such problems. As the Labs team pursued its research, it struck up a partnership with a team from the University of California, Santa Barbara (UCSB) that was working on a similar initiative. The two groups have now published a paper with their findings.
The paper, titled "Computing High-Degree Polynomial Gradients in Memory," appeared in the Sept. 18 issue of the scientific journal Nature Communications. Nature Communications editors later featured the paper in the publication's Editors' Highlights pages, which showcase the 50 best recent papers in a given area of research.
Finding the right pathway
In mathematics, a "gradient" describes the slope of a multidimensional surface. In optimization algorithms, a gradient represents a measure of "steepness" defined a different way: the sensitivity of the figure of merit with respect to changes in the input variables. Knowing where changes are steep and where they are not helps to identify promising pathways toward better solutions. Complex tasks in large undertakings like AI modeling could follow an exponential number of pathways, so finding the right one can save a lot of time and work.
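To make this concrete, here is a small worked example of our own (not drawn from the paper): a third-degree polynomial objective in three variables, and one component of its gradient, which is simply the partial derivative with respect to that variable.

```latex
f(x_1, x_2, x_3) = a\,x_1 x_2 x_3 + b\,x_1 x_2 + c\,x_3,
\qquad
\frac{\partial f}{\partial x_1} = a\,x_2 x_3 + b\,x_2
```

Computing all such partial derivatives at once, for polynomials with many terms and degrees well beyond three, is the operation the paper's hardware targets.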
The hardware developed by the researchers from Labs, UCSB and other partners saves a step in the computation of optimal gradients for specific tasks. Here's how the process works.
Previously, optimization problems with large numbers of competing solutions could be solved using analog hardware (be it quantum, optical, or electronic) only if the problem instances were first mapped to quadratic polynomial objective functions. The systems that performed the calculations couldn't scale up to handle "higher-order" problems, which arise in scenarios like protein folding and AI-model verification and contain more complicated (beyond-quadratic) interactions between the variables. So an extra preprocessing step was required to convert the problem from one form to the other, which slowed down processing.
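For readers unfamiliar with that conversion, one standard form of it (Rosenberg's reduction for binary variables, shown here as background; the exact mapping the authors compare against may differ) trades each cubic term for a quadratic expression by introducing an auxiliary variable y and a large penalty weight M:

```latex
x_1 x_2 x_3 \;\longrightarrow\; y\,x_3 + M\,(x_1 x_2 - 2 x_1 y - 2 x_2 y + 3 y)
```

The penalty is zero exactly when the auxiliary variable equals the product it replaces, and at least M otherwise, so minimizing the quadratic recovers the cubic. The cost is an extra variable and extra couplings for every high-order term, which is precisely the overhead the researchers say their hardware avoids.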
"We showed how advanced combinatorial optimization problems do not need to be preprocessed to a less efficient format with additional auxiliary variables that would slow down the problem solution," said Thomas Van Vaerenbergh, a research engineer at Hewlett Packard Labs and one of the nine co-authors of the paper.
In a blog post published on the UCSB Engineering site, one of the paper's lead authors said most of the currently proposed, state-of-the-art hardware cannot efficiently solve industrially relevant optimization problems with large numbers of combinations.
"The main benefit of our hardware," UCSB PhD student Tinish Bhattacharya said, "is that it can solve problems like Boolean Satisfiability in their native high-order space without having to do any pre-processing, potentially providing exponential speedup over current hardware architectures that are limited to second-order objective functions."
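As a rough illustration of what solving Boolean Satisfiability "in its native high-order space" means, here is a minimal Python sketch (our own construction, not the paper's code; the tiny two-clause instance and the relaxed variable values are arbitrary assumptions). It writes a 3-SAT instance as a degree-3 polynomial that counts violated clauses and evaluates that polynomial's exact gradient directly, with no auxiliary variables:

```python
import numpy as np

# Hypothetical 3-SAT instance: each clause is a tuple of signed variable
# indices (positive = plain literal, negative = negated literal).
clauses = [(1, -2, 3), (-1, 2, -3)]

def factor(lit, x):
    """Per-literal factor: (1 - x_i) for a plain literal, x_i for a negated
    one, so a clause's product is 1 only when every literal is false."""
    v = x[abs(lit) - 1]
    return 1.0 - v if lit > 0 else v

def energy(x):
    """Degree-3 polynomial counting violated clauses."""
    return sum(np.prod([factor(lit, x) for lit in c]) for c in clauses)

def gradient(x):
    """Exact gradient: for each literal, the derivative of its factor
    (-1 for plain, +1 for negated) times the product of the clause's
    remaining factors."""
    g = np.zeros_like(x)
    for c in clauses:
        for lit in c:
            rest = np.prod([factor(o, x) for o in c if o != lit])
            g[abs(lit) - 1] += (-1.0 if lit > 0 else 1.0) * rest
    return g

x = np.array([0.5, 0.5, 0.5])  # relaxed (analog) variable values
print(energy(x), gradient(x))
```

Note that the gradient of each clause term touches only that clause's own variables, which is what makes evaluating it directly in hardware plausible.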
Bringing compute closer to memory
Another key feature of the hardware is its ability to perform in-memory computing, an alternative computing architecture in which computations are executed within the memory array itself. Bringing the compute to the data significantly mitigates the memory-compute bottleneck.
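As a loose numerical analogy for in-memory computing (not a model of the paper's actual device; the sizes and values here are arbitrary), a memristive crossbar stores a matrix as conductances, and applying input voltages yields output currents equal to a matrix-vector product via Ohm's and Kirchhoff's laws, so the multiply happens where the data lives:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances (siemens), stored
                                          # in the memory array itself
v = np.array([0.2, 0.5, 0.1])             # input voltages on the columns
i_out = G @ v                             # row currents = matrix-vector
                                          # product, computed in place
print(i_out)
```

Because the matrix never leaves the array, the only traffic is the input voltages going in and the output currents coming out, which is the bottleneck reduction the article describes.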
Van Vaerenbergh said future research will focus on additional problem classes, as well as problems with more variables.
"There is still a big gap between our small-scale prototype and being able to really have an impact on the problems that people care about at the scale people care about," he said. "At some point we'll see that our analog computer has a size constraint. We're not going to be able to map one big problem onto one chip. We're going to have to work with multiple engines talking to each other over a network. That's what our current research is about: proving that the benefit is still there when we work with multiple engines. And if the benefit is still there, then maybe eventually we can make a product out of this."
The "Computing High-Degree Polynomial Gradients in Memory" paper was published by Thomas Van Vaerenbergh, Giacomo Pedretti, Xia Sheng, Ray Beausoleil and Jim Ignowski of Hewlett Packard Labs; Tinish Bhattacharya, George H. Hutchinson and Dmitri B. Strukov of UCSB; and John Paul Strachan of Forschungszentrum Jülich (FZJ).