MLCommons for Machine Learning Benchmarking Launches with HPE as a Founding Member
As of December 3, 2020, the leading machine learning (ML) benchmark MLPerf operates under a new non-profit organization called MLCommons. MLCommons is an open engineering consortium that brings together academia and industry to develop the MLPerf benchmarks, best practices, and publicly available datasets. Benchmarks like MLPerf serve a critical function by creating a shared understanding of performance and progress. These standardized benchmarks let consumers and manufacturers compare how various products perform on a level playing field, thereby improving competition and innovation in the marketplace and helping the whole industry focus on the right problems, moving everyone forward.
As a founding member of MLCommons, HPE is strategically aligned to help the marketplace set specific benchmarks for how machine learning performance gets measured and to help our customers make more informed decisions about their AI infrastructure. Previously, there were no such standards, and consumers had many questions, including:
- What is the best hardware and software to run these workloads?
- Is storage important and when do CPUs become a bottleneck?
- What is the role of memory and do I need to buy the most expensive GPU?
- Do I need an ultra-fast interconnect between GPUs to run typical deep learning workloads?
MLPerf was established in 2018, building on earlier benchmark efforts across industry and academia. A collaboration among a large number of companies, including HPE, together with start-ups and universities, produced multiple standardized deep learning benchmarks that are now widely recognized in the market. The creation of MLCommons is the next step in this evolution to create even better benchmarks for the marketplace.
"HPE joined MLPerf as a supporting organization and became a founding member of MLCommons because of our expertise in creating hardware optimized for deep learning workloads," says Sergey Serebryakov, Hewlett Packard Labs senior research engineer. "HPE benchmark and performance engineers have been running deep learning benchmarks and optimizing our systems for many years, and we would like to help shape future benchmarks that represent real-world workloads of our customers."
Serebryakov has been working with the MLPerf Best Practices working group on MLCube, announced today, which reduces friction for machine learning by ensuring models are easily portable and reproducible (e.g., between stacks such as different clouds, or between cloud and on-prem); a brief sketch of what that looks like in practice follows below. Jacob Balma, HPC AI engineering researcher at HPE and co-chair of the MLPerf HPC working group, has helped develop deep learning benchmarks for high performance computing (HPC) systems that expose file system I/O, communication bottlenecks, and convergence differences between hardware at scale.
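For readers curious what "reducing friction" means concretely, here is a minimal Python sketch of driving an MLCube-packaged workload from a script. The cube directory ("./mnist"), task names, and the exact CLI flags are assumptions based on MLCube's public examples, not a definitive reference; the point is that the same packaged workload can be re-run on a different platform without touching the model code.

```python
import subprocess

def run_mlcube_task(cube_dir: str, task: str, platform: str = "docker") -> None:
    """Invoke the mlcube CLI for a single packaged task.

    Assumptions: the `mlcube` CLI is installed and `cube_dir` contains an
    mlcube.yaml describing the workload's tasks. Flag names follow MLCube's
    published examples and should be treated as illustrative.
    """
    cmd = [
        "mlcube", "run",
        f"--mlcube={cube_dir}",    # directory holding the cube definition
        f"--task={task}",          # e.g. "download", "train", "evaluate"
        f"--platform={platform}",  # runner backend: docker, singularity, ...
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Example: train a hypothetical "mnist" reference cube on a local Docker runner.
    # Pointing `platform` at a different runner reuses the same package unchanged.
    run_mlcube_task(cube_dir="./mnist", task="train", platform="docker")
```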
The MLPerf HPC benchmark suite includes two benchmarks that capture key characteristics of scientific ML workloads on HPC systems, such as volumetric (3D), multi-channel scientific data and dataset sizes in the 5-9 TB range. The first results were announced on November 18, 2020, and included submissions from several systems on the TOP500 supercomputer list.
In the ever-changing world of machine learning, artificial intelligence (AI), and HPC, HPE and MLCommons will continue to work closely together to support common like-for-like benchmarks, and to develop and share public data sets and best practices that will accelerate innovation, raise all ships, and increase ML's positive impact on society.
Curt Hopkins
Hewlett Packard Enterprise
twitter.com/hpe_labs
linkedin.com/showcase/hewlett-packard-labs/
labs.hpe.com