
Machine learning needs more than speed: Labs researchers to address Samsung developers


Research engineer Sergey Serebryakov and research manager Natalia Vassilieva

By Curt Hopkins, Managing Editor, Hewlett Packard Labs

On Thursday, May 18, Labs’ researchers Natalia Vassilieva and Sergey Serebryakov are giving a talk to the developers and product planning team in Samsung’s memory business area.

“Machine Learning meets Memory-Driven Computing” will be presented at Samsung’s San Jose campus. The talk will focus on performance issues in training deep neural networks and on hardware requirements for machine learning workloads, specifically the importance of the right ratio between compute and communication.

“They want to learn where HPE is going,” said Vassilieva, “what Memory-Driven Computing is, and why and how we think that Memory-Driven Computing will help machine learning.”

Vassilieva and Serebryakov will take a deep technical dive into both machine learning and deep learning, including scalability and performance of training and inference, applications beyond vision and speech, and how to choose the right hardware and software stack.

Both machine learning and deep learning need more than speed.

“Once we have a system with high FLOPS,” said Vassilieva, “we need to ‘feed’ these FLOPS with data. Communication is the bottleneck today in the scale-out world. We believe that MDC is the solution.”
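To make that compute-versus-communication balance concrete, here is a rough back-of-envelope sketch (not taken from the talk) of per-step compute and gradient-exchange times for data-parallel training. The ring all-reduce cost model and every hardware and model figure in it are illustrative assumptions, not measurements.

# Back-of-envelope sketch: why communication can come to dominate as a
# data-parallel training job scales out. All numbers are hypothetical.

def step_times(total_flops_per_step, device_flops, model_bytes,
               bandwidth_bytes, nodes):
    """Estimate per-step compute and gradient-exchange times in seconds.

    Assumes a fixed global batch split evenly across nodes, so per-node
    compute shrinks as nodes are added, while the ring all-reduce of the
    gradients (~2 * (nodes - 1) / nodes of the model size per node)
    stays roughly constant.
    """
    t_compute = total_flops_per_step / nodes / device_flops
    t_comm = 2 * (nodes - 1) / nodes * model_bytes / bandwidth_bytes
    return t_compute, t_comm

if __name__ == "__main__":
    # Hypothetical setup: 100M fp32 parameters, ~6e11 FLOPs of work per
    # global step, 10 TFLOP/s sustained per node, 10 GB/s interconnect.
    for nodes in (1, 4, 16, 64):
        t_c, t_m = step_times(
            total_flops_per_step=6e11,
            device_flops=10e12,
            model_bytes=100e6 * 4,
            bandwidth_bytes=10e9,
            nodes=nodes,
        )
        print(f"{nodes:3d} nodes: compute {t_c*1e3:6.1f} ms, "
              f"gradient exchange {t_m*1e3:6.1f} ms")

Under these assumed numbers, compute time per step falls as nodes are added while the gradient exchange does not, which is the kind of scale-out communication overhead the abstract below describes.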

From the abstract:

Machine learning, and deep learning in particular, depends on possessing very large datasets and requires large computational resources and massive amounts of easily accessible memory. Existing hardware is not sufficient to train large enough models on large enough datasets fast enough and thus limits the power of deep learning. Today it can easily take several days to train a state-of-the-art model. The training of some models is particularly hard to scale out on existing hardware due to increasing communication overhead as more nodes are added. HPE’s Memory-Driven Computing architecture combines a large number of compute cores, massive pools of shared memory and photonic interconnect with enormous bandwidth and low latency. We believe that this new architecture enables a shift in machine learning.

Both HPE and Samsung are members of the Gen-Z Consortium, which is devoted to creating and promoting open interconnect standards.
