AI in production: the Intel advantage
The biggest misconception about artificial intelligence is that it's a stand-alone solution that always requires significant dedicated hardware. In reality, and in most successful at-scale deployments, AI is a tool that enhances a solution or enables a new application. AI applications are typically deployed alongside several other workloads in your current data center, so your computing and storage infrastructure needs to be optimized for that mixed-workload environment.
This is where the Intel advantage kicks in.
Because AI is becoming pervasive across segments and applications, Intel is infusing AI performance into our edge-to-cloud portfolio. Our customers are running AI workloads in the cloud, data centers, edge appliances, personal computers, and devices. Intel's goal is to equip the base platform with as much AI performance as possible and then offer a flexible edge-to-cloud portfolio of accelerators for solutions that require additional discrete acceleration.
An example of how we're approaching AI in the base platform is the addition of Intel® Deep Learning Boost to the 2nd Generation Intel® Xeon® Scalable processors. This feature merges three neural network instructions into one, provides up to 30x performance improvement for deep learning inference compared to the previous generation, and is available in general-purpose as well as specialized platforms. The new 3rd Generation Intel® Xeon® Scalable CPUs continue Intel's leadership in built-in AI acceleration and are the first general-purpose CPUs with bfloat16 numeric support, which increases deep learning training and inference performance with minimal (if any) accuracy loss or code changes.
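To make the built-in acceleration concrete, here is a minimal sketch (not from the article) of how a framework-level switch can pick up bfloat16 on a supporting CPU. It assumes a recent PyTorch build with the oneDNN CPU backend; the tiny model is a hypothetical stand-in for a real inference workload.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; any PyTorch module is handled the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

batch = torch.randn(8, 3, 224, 224)

# On CPUs that expose bfloat16 (e.g. 3rd Gen Xeon Scalable), autocast asks the
# oneDNN backend to run convolution/matmul kernels in bfloat16. The model code
# itself does not change, which is the point made above.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(batch)

print(output.dtype, output.shape)
```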
Another example is memory. Both training and inference need reliable memory closer to the processor. Intel® Optane™ Persistent Memory allows you to affordably expand a large pool of memory closer to the CPU, so you can train on a much larger data set. Many computer vision applications, such as medical diagnostic or seismic imaging, perform best without the tiling or down-sampling required by more memory-constrained architectures.
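As a hedged illustration of why the extra memory headroom matters (this sketch is not from the article, and the helper names are made up): with enough memory the whole volume can be processed in one pass, while a constrained system has to fall back to tiling.

```python
import numpy as np
import psutil  # assumed available; used only to query free system memory


def enhance(volume: np.ndarray) -> np.ndarray:
    """Hypothetical per-voxel transform standing in for a real imaging step."""
    return np.clip(volume * 1.5, 0.0, 1.0)


def run(volume: np.ndarray, tile_depth: int = 64) -> np.ndarray:
    # With a large memory pool (e.g. DRAM expanded by persistent memory in
    # Memory Mode), the full volume fits and no tiling pass is needed.
    if volume.nbytes * 2 < psutil.virtual_memory().available:
        return enhance(volume)
    # Otherwise fall back to slab-by-slab tiling, which adds bookkeeping and
    # can degrade quality at tile boundaries.
    out = np.empty_like(volume)
    for z in range(0, volume.shape[0], tile_depth):
        out[z:z + tile_depth] = enhance(volume[z:z + tile_depth])
    return out
```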
So, when you buy the latest and greatest HPE servers, you get built-in AI performance with Intel Xeon Scalable processors and Intel Optane Persistent Memory. A majority of AI-based applications, especially classical machine learning and deep learning inference, can run well on general-purpose systems and in mixed-workload environments.
But there is one key component, a secret weapon, that will keep costs down: software.
AI can get expensive very fast. Applications whose algorithms are not optimized can easily cost two or three times more than they should, because you're throwing more hardware and storage at the problem rather than fine-tuning the application. Intel optimizes the most widely adopted AI software, such as TensorFlow, PyTorch, and the Intel® Distribution of Python* (with scikit-learn, Pandas, and NumPy), along with other tools for machine learning and deep learning. The Intel® Distribution of OpenVINO™ Toolkit lets you implement computer vision and deep learning inference quickly and with peak performance across multiple applications and hardware platforms. These software tools and optimized libraries are open source and free to use, and they are validated on HPE hardware.
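One concrete way this shows up (a sketch under assumptions, not taken from the article) is the drop-in patch shipped with the Intel Extension for Scikit-learn: existing scikit-learn code picks up the optimized kernels without being rewritten. The synthetic dataset below is only for illustration.

```python
# Assumes the scikit-learn-intelex package (part of Intel's Python tooling)
# is installed; patching swaps in optimized implementations of common
# estimators while the scikit-learn code itself stays unchanged.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=20_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Exactly the same estimator API; only the backend implementation changes.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```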
For example, if you're investing in new infrastructure, the HPE Superdome Flex 280 is your one system to power digital transformation. It can comb through massive AI datasets at the edge or in the core. The system features 3rd Generation Intel Xeon Scalable processors, runs Linux, VMware, and Windows, and fits in a standard rack with standard I/O and connectivity.
Whether you're building AI into your in-house solutions or buying off-the-shelf enterprise applications, making sure your software is optimized and takes advantage of the features built into your general-purpose infrastructure will increase your return on investment. Not optimizing your applications is like trying to win a soccer game by purchasing more uniforms rather than training the players. Intel and HPE are here to help with infrastructure configuration, software optimization, and ecosystem matchmaking to ensure you have access to reliable, accessible, and affordable AI-based applications.
Monica Livingston
AI Sales Director, Intel