Know the core capabilities for machine learning compute success
The Gartner Research Market Guide identifies six core capabilities necessary in machine learning compute infrastructures to enable success. Learn what they are and how you can implement them in your business.
As the digital economy takes shape, an organization's ability to leverage artificial intelligence (AI) and machine learning (ML) is becoming increasingly important. Seamlessly meeting ever-evolving customer expectations requires the capabilities only next-generation technology offers. And it's ML that makes it possible to immediately identify patterns and make adjustments as market preferences change. While these capabilities were once a luxury, they are rapidly becoming a necessity.
However, many businesses are hesitant and unsure of how to adopt these innovative technologies. Gartner's September 2018 Market Guide on Machine Learning Compute Infrastructures states, the "landscape for ML compute infrastructure is fragmented and rapidly changing, making it tough for enterprises to navigate the market and filter the vendor marketing obfuscations. Integrating diverse system software components including libraries, drivers, and diverse ML and DNN [deep neural network] frameworks can be complex and time-consuming, and require additional skills."
The importance of AI and ML can't be overstated
Yet developing the ability to navigate this heavily fragmented market could prove instrumental as businesses look for new ways to compete within an increasingly digital economy. As discussed in a recent Forbes article, moving forward, AI and ML will empower everything from optimized threat management to expanded edge computing.
To help organizations take advantage of this innovative technology, Gartner identified six core capabilities needed in ML compute infrastructures "to enable high-productivity AI pipelines involving compute-intensive ML and DNN models." Specifically, these capabilities are: compute acceleration technologies, accelerator density, high-speed compute interconnect, network connectivity, local storage, and ML/DNN frameworks.
HPE hardware has the kind of capabilities you require
Selecting technology that includes these capabilities is a key piece of solving the AI/ML puzzle. HPE, which was identified as a Representative Vendor in this Gartner report, meets many of these core capabilities with its expansive line of servers.
With one of the broadest portfolios of AI systems and services currently available, HPE can address an extensive range of ML use cases across numerous industries. HPE's data center-to-edge portfolio includes scale-up rack and modular solutions alongside scale-out supercomputer-class systems. These solutions can support a growing number of real-world applications and use cases, from detecting fraud in payment processing to improving healthcare diagnostics and farmers' crop management.
Furthermore, both the Apollo and ProLiant servers from HPE address the need for compute acceleration technologies by leveraging NVIDIA Tesla V100 GPUs. These accelerators are key to cost-effective acceleration because they power "real-time services such as search, voice recognition, voice synthesis, translation, recommender engines, fraud detection, and retail applications."
The HPE Apollo systems can offer free deployment of NVIDIA GPU Cloud when bundled with the Bright Cluster Manager for Data Science.
Strong storage and flexible consumption speed up value
Optimized storage—both local and cloud-based—is also pivotal in realizing AI/ML success. Gartner notes that “most ML compute infrastructures prefer to use solid-state drive (SSD)/flash to accelerate random small-file I/O operations.” HPE offers WekaIO for AI storage to ensure the increased I/O throughput required for deep learning training and inferencing. The better a system can reduce training time, the faster it can activate benefits from machine learning and deep learning.
Lastly, HPE GreenLake Flex Capacity enables flexible consumption models for infrastructure deployed on-premises. HPE GreenLake for Big Data and HPE GreenLake for SAP HANA provide consumption-based models for large-scale data and analytics needs. This combination optimizes the connection with local storage and network-connected environments for extensive deep learning processes as well as real-time action.
Don't forget services to help you implement the technology
Of course, there is a difference between possessing the technology capable of addressing ML/AI use cases and properly utilizing it. Most organizations need access to services that can facilitate skill development and utilization.
This is where expert services like HPE Pointnext prove useful. The Pointnext offering empowers organizations as they work to accelerate their time to value through rapid AI project delivery. Some of HPE Pointnext's services include flexible consumption models and proactive support capabilities that simplify hybrid HPC.
In partnership with NVIDIA, HPE's Deep Learning Cookbook provides a comprehensive set of tools to guide the choice of the best hardware/software environment for a given deep learning workload. It features use case-driven reference architectures and a complementary set of performance benchmarks covering a diverse range of neural networks and infrastructure combinations.
No organization can afford to fall behind in today's rapidly changing marketplace. Having the right infrastructure and partners in place can prove instrumental to navigating the growing ML/AI landscape. With its comprehensive portfolio of products and services, HPE has the right mix of technology and experience to help organizations across an array of industries successfully leverage these pivotal technologies.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Meet Infrastructure Insights guest blogger Peter Fretty. A highly experienced journalist, Peter regularly covers technology and software advances, gadgets, and SMB issues. He has written thousands of feature articles, cover stories, and white papers for an assortment of trade journals, business publications, and consumer magazines.
Hewlett Packard Enterprise