The power of GenAI, enabled by HPE AI Essentials Software
Simplify GenAI workflows and accelerate innovation with HPE AI Essentials Software and HPE Private Cloud AI, offering enhanced security, greater flexibility, and access to the most advanced AI tools.
AI is everywhere, whether it's a customer service chatbot on your favorite retail website or the next suggested watch on a streaming service. Enterprises, too, have adopted AI solutions to enhance and grow their businesses and outpace their competitors.
Deploying intuitive AI solutions at scale, however, presents real challenges, and enterprises must ask themselves several questions. Where is the model hosted, and how do I protect my data? How do I integrate and maintain these tools over their lifecycle? How can I scale my AI solutions to handle user growth and new use cases? And finally, how can I manage this AI investment with predictable costs?
Accelerate AI innovation with HPE Private Cloud AI and HPE AI Essentials Software
These challenges are exactly what HPE Private Cloud AI was built to address. Launched in June 2024 as part of the NVIDIA AI Computing by HPE portfolio, it is a turnkey, scalable, AI-optimized private cloud that accelerates AI adoption and deployment while keeping your data secure. The solution integrates NVIDIA accelerated computing, networking, and software with HPE high-performance compute, storage, and HPE GreenLake cloud.
For enterprise AI innovators, HPE Private Cloud AI provides a purpose-built foundation anchored by HPE AI Essentials Software. Integrated into the software and data layer, it delivers a comprehensive, ready-to-run suite of AI and open-source tools for efficient end-to-end GenAI solution development.
Figure 1. HPE AI Essentials Software—the AI development platform
HPE AI Essentials Software caters to all users, from beginner to expert. It offers no-code / low-code features and automated accelerators for beginners to rapidly create GenAI applications like chatbots and productivity tools. Experienced developers gain access to APIs, notebooks, and advanced tools for swift experimentation, iteration, and deployment, overcoming IT resource bottlenecks.
Robust toolset for data engineers, data analysts, and data scientists
The software organizes open-source tools by user persona. Data engineers use Airflow, EzPresto, and Superset for streamlined data workflows, fast querying, and customizable visualizations. Data analysts benefit from Livy, Spark History Server, and Spark Operator for simplified job submission, comprehensive insights, and efficient Spark workload management on Kubernetes, enabling quicker, more insightful analysis. Data scientists can use Kubeflow to scale, reproduce, and manage complex machine learning (ML) pipelines while Ray offers a unified framework for scaling AI and Python applications. MLflow centralizes ML lifecycle management, from experimentation to deployment and monitoring. Recognizing specific user needs, the HPE AI Essentials Software UI allows easy import of custom tools and frameworks.
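To illustrate how a couple of these tools fit into a typical pipeline, here is a minimal sketch that logs a training run to MLflow and fans scoring work out with Ray. It uses the standard open-source MLflow and Ray APIs rather than any HPE-specific interface; the tracking URI, experiment name, and scoring logic are placeholders you would adapt to your own cluster.

```python
import mlflow
import ray

# Placeholder tracking server; point this at the MLflow instance in your cluster.
mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
mlflow.set_experiment("churn-model")

with mlflow.start_run():
    # Log hyperparameters and a result metric for this training run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_accuracy", 0.93)

# Use Ray to parallelize batch scoring across the cluster.
ray.init()  # connects to an existing Ray cluster if one is configured

@ray.remote
def score_batch(batch):
    # Stand-in for real inference logic.
    return [len(record) for record in batch]

batches = [["a", "bb"], ["ccc"]]
results = ray.get([score_batch.remote(b) for b in batches])
print(results)
```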
Upcoming enhancements to HPE AI Essentials Software
At Hewlett Packard Enterprise, we know the right tools are key to AI innovation. That’s why we’re excited to bring new and enhanced features to HPE AI Essentials Software, giving customers a comprehensive solution to accelerate AI development with compliance, security, and enterprise-grade control across the AI lifecycle.
Low-code / no-code RAG with your connected data
I’m particularly excited about the knowledge base in HPE AI Essentials Software, a platform for both pro-code and no-code users. This inclusive feature allows users of all skill levels to build and deploy GenAI solutions that personalize LLMs with their own data for customized, relevant outputs. Notably, it enables rapid development and iteration for no-code users building chatbots, virtual assistants, or other AI-driven tools.
Figure 2. Example of a three-step, low-code process to develop GenAI apps
Regardless of whether you’re creating chatbots, virtual assistants, or other AI-driven tools, HPE AI Essentials Software prioritizes the security of your models.
Figure 3. Workflow from user query to answer using a company's knowledge base
The knowledge base offers several benefits:
- Secure data connections to foundation models for accurate and relevant AI responses
- A comprehensive managed RAG workflow covering ingestion, retrieval, and automation
- Custom control and flexibility with advanced options such as custom prompts, endpoints, and API management
- A built-in playground for real-time interaction and fine-tuning
Together, these capabilities in HPE AI Essentials Software streamline GenAI workflows, enhance flexibility and customization, and help teams stay agile and competitive.
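To make the ingestion-retrieval-generation flow concrete, here is a conceptual sketch of the loop a managed RAG workflow automates. It is illustrative only: the embedding model, document set, and final prompt hand-off are stand-ins, not HPE AI Essentials Software APIs.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Ingestion: embed your documents once and keep the vectors for retrieval.
documents = [
    "HPE Private Cloud AI is a turnkey, AI-optimized private cloud.",
    "HPE AI Essentials Software bundles open-source tools for GenAI development.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example open embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

# Retrieval: find the document most similar to the user's question.
question = "What does HPE AI Essentials Software provide?"
q_vector = embedder.encode([question], normalize_embeddings=True)[0]
scores = doc_vectors @ q_vector
top_doc = documents[int(np.argmax(scores))]

# Generation: pass the retrieved context plus the question to your LLM endpoint.
prompt = f"Answer using this context:\n{top_doc}\n\nQuestion: {question}"
print(prompt)  # in practice, send this prompt to a deployed model endpoint
```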
Secure external access to endpoints: Providing flexibility and protection
Security is a top priority, especially with the rise of ML/AI models, making endpoint security crucial for protecting sensitive data and preventing unauthorized access. HPE AI Essentials Software allows enterprises to create and manage secure API keys for authorized external access to deployed models and RAG endpoints, even outside the software clusters. AI admins gain granular control over this access, helping ensure only authenticated entities can interact with these sensitive endpoints.
Figure 4. Securely expose endpoints to internal and external applications
This enhanced security and control provides customers with:
- Easy access: Securely expose HPE AI Essentials Software model endpoints for internal or external use without requiring extensive Kubernetes or networking expertise.
- Flexible API management: Admins can generate, manage, and revoke API keys, helping ensure only authorized users and applications interact with deployed models.
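As a rough sketch of what calling such a protected endpoint might look like from an external application, the snippet below sends a request with a bearer-style API key. The endpoint URL, header convention, and payload shape are assumptions for illustration, not the documented HPE AI Essentials Software API; substitute the values issued for your deployment.

```python
import os
import requests

# Hypothetical endpoint and key; use the values issued by your AI admin.
ENDPOINT = "https://ai-essentials.example.com/models/llama-3-1-8b/infer"
API_KEY = os.environ["AIE_API_KEY"]  # keep keys out of source code

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize our Q3 support tickets.", "max_tokens": 200},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```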
Cutting-edge AI tools and frameworks
By leveraging production-ready Meta Llama-3.1 (8B and 70B parameter versions) with NVIDIA Inference Microservices (NIM), HPE AI Essentials Software allows you to utilize the latest AI advancements, keeping your models at the cutting edge.
- Simple, flexible vLLM deployment directly from Hugging Face: To enhance flexibility within HPE AI Essentials Software, HPE Machine Learning Inferencing Software now supports direct deployment of vLLM-compatible models from Hugging Face.
- Hugging Face vLLM access: Browse and select vLLM-compatible models directly from the Hugging Face website through a new registry
- UI-based vLLM selection: Easily choose vLLM models from Hugging Face using the integrated UI browser
- Optimized vLLM deployment: Deploy vLLM models using the new vLLM format, defaulting to the vllm/vllm-openai:v0.6.2 runtime (with automatic CPU fallback)
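Because the default vllm/vllm-openai runtime exposes an OpenAI-compatible API, a deployed model can typically be queried with a standard OpenAI client, as in the sketch below. The base URL, API key, and model name are placeholders; replace them with the values from your own deployment.

```python
from openai import OpenAI

# Placeholder values; use the endpoint and key from your deployment.
client = OpenAI(
    base_url="https://inference.example.com/v1",
    api_key="replace-with-your-api-key",
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example Hugging Face model ID
    messages=[{"role": "user", "content": "Give me three uses for RAG in retail."}],
    max_tokens=150,
)
print(completion.choices[0].message.content)
```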
GPUDirect RDMA: Enabling ultra-efficient data transfers
We’re excited to preview GPUDirect RDMA for HPE Private Cloud AI, a technology that accelerates direct data transfer between GPUs by bypassing the CPU and host memory. By enabling ultra-low latency and high throughput, it unlocks substantial efficiency gains for demanding AI and data analytics workloads such as distributed training, real-time processing, and high-performance networking, optimizing your system for peak performance.
Drive AI-powered transformation
HPE Private Cloud AI, powered by HPE AI Essentials Software, provides your organization with the necessary tools, adaptable frameworks, and inherent flexibility to drive secure and efficient AI innovation. Whether you’re building custom AI models, deploying advanced GenAI solutions, or managing complex AI workflows, HPE AI Essentials Software offers the robust capabilities to establish your organization as an AI productivity and progress leader.
See firsthand how HPE AI Essentials Software streamlines your AI processes—watch the demo now.
Learn more at hpe.com/private-cloud-ai
Meet the Author:
Jenna Colleran, Senior Product Manager, HPE