Is Apache Spark the best candidate for a distributed deep learning platform?
By Curt Hopkins, Managing Editor, Hewlett Packard Labs
A tutorial created by Labs senior research engineer Alexander Ulanov is now available on O’Reilly’s Data Tools webcast series.
“Distributed deep learning on Spark” addresses the popular area of machine learning, but with a twist.
“Deep learning models that are used in practice for image classification and speech recognition contain a huge number of weights, require a lot of computations, and are trained with large datasets,” said Ulanov.
Training models with such complexity can take days – months even – on a single machine. Ulanov’s tutorial explores how to scale out the training using distributed computations and data processing.
Specifically, Ulanov looks at Apache Spark as a contender for such a distributed training platform. He surveys and compares the tools and frameworks that have been proposed for deep learning on Spark, and explores the limitations of distributed training itself.
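The core pattern behind scaling out training this way is data parallelism: each worker computes gradients on its own partition of the data, and the results are averaged to update a shared model. The sketch below illustrates that pattern in plain Python (no Spark dependency); the linear "model," the partition layout, and all function names are illustrative assumptions, not code from the tutorial — on Spark, the map step would run on executors and the averaging on the driver.

```python
import random

def local_gradient(weight, partition):
    """Mean gradient of squared error for the model y = weight * x
    over one data partition (what a single worker would compute)."""
    grads = [2 * (weight * x - y) * x for x, y in partition]
    return sum(grads) / len(grads)

def train(partitions, steps=200, lr=0.05):
    """Data-parallel gradient descent with gradient averaging."""
    weight = 0.0
    for _ in range(steps):
        # Map: each worker computes a gradient on its own partition.
        grads = [local_gradient(weight, p) for p in partitions]
        # Reduce: average the gradients and update the shared model.
        weight -= lr * sum(grads) / len(grads)
    return weight

# Synthetic data with true weight 3.0, split across four equal partitions.
random.seed(0)
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(400))]
partitions = [data[i::4] for i in range(4)]
w = train(partitions)
```

The averaging step is also where the limitations Ulanov discusses show up: every update requires communicating gradients for all of the model's weights, so for models with a huge number of weights the synchronization cost can dominate the computation.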
© Copyright 2019 Hewlett Packard Enterprise Development LP