Going beyond large language models with smart applications
Understand the world of AI agents, large language models, and smart apps. This report from AIIA dives deep into the next-gen emerging stack for AI applications, covering LLMs, prompt engineering, retrieval-augmented generation, and more.
In the modern tech landscape, there are few topics more critical than understanding how to use large language models (LLMs) and the next-generation AI stack. This new report from the AI Infrastructure Alliance (AIIA)* offers a deep dive into how to use this new AI stack to build "smart" applications. Smart applications take LLMs beyond the confines of text generation to perform useful, operational business tasks.
The role of AI agents and LLMs in AI-driven smart applications
AI agents are the underlying components of smart applications. They can interact autonomously or semi-autonomously with their environment. Traditionally, agents were autonomous software that tried to achieve a goal in the digital or physical world. A good example of an agent would be software that automatically purchases plane tickets based on a given itinerary. Agents can vary in complexity from chatbots that generate context-aware text to software used in autonomous driving.
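The cycle described above, in which an agent observes its environment, decides on an action, and acts toward a goal, can be sketched in a few lines. Everything here is illustrative: a real agent would wrap an LLM, planner, or learned policy in the decide step.

```python
# A toy autonomous agent: observe the environment, decide, act,
# and repeat until its goal is reached. All names are illustrative.

def plan_next_step(state: int, goal: int) -> int:
    """Trivial 'policy': move one step toward the goal."""
    if state < goal:
        return 1
    if state > goal:
        return -1
    return 0

def run_agent(start: int, goal: int) -> list[int]:
    """Run the observe-decide-act loop, returning the visited states."""
    state = start
    trace = [state]
    while state != goal:
        action = plan_next_step(state, goal)  # decide
        state += action                       # act
        trace.append(state)                   # observe the new state
    return trace

print(run_agent(0, 3))  # [0, 1, 2, 3]
```

Swapping `plan_next_step` for a model-backed planner is what turns this skeleton into the kind of agent the report discusses.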
The release of Large Language Models (LLMs) marked a sea change, shifting us from pure data science to the dawn of AI-driven smart applications. LLMs have brought complex ML models to the masses through easily consumable APIs or web-hosted GUIs. As a result, we are increasingly seeing many traditional developers build these smart apps either without data science teams or with smaller supporting data science teams by leveraging LLMs.
LLMs like ChatGPT and GPT-4 have changed the possibilities for agent capabilities by providing intelligence capable of performing a wide range of tasks, from planning and reasoning to answering questions and making decisions. For example, chatbots have been around for a long time, but they are no longer confined to rules-based engines with canned answers; they can now provide context-aware, dynamic responses.
However, LLMs have a number of well-known flaws: hallucinations (confidently making things up), biases absorbed from the data sets they were trained on, and confident wrong answers, all stemming from the model's inability to link the text it generates to real-world knowledge. For example, a model does not know the world is round, so it may occasionally hallucinate that it is flat.
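One common mitigation for this inability to link generated text to real-world knowledge is to ground the model in retrieved documents before it answers. A minimal sketch, with all names hypothetical, using naive keyword overlap where a real system would use embeddings and a vector database:

```python
# Minimal retrieval-grounding sketch (illustrative, not a real RAG
# pipeline). Relevant documents are selected by naive keyword overlap
# and placed in the prompt, so the model answers from supplied text
# instead of its (possibly wrong) parametric memory.

def score(doc: str, query: str) -> int:
    """Naive relevance: count words shared between doc and query."""
    return len(set(doc.lower().split()) & set(query.lower().split()))

def retrieve(docs: list[str], query: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring documents."""
    return sorted(docs, key=lambda d: score(d, query), reverse=True)[:k]

def build_grounded_prompt(docs: list[str], query: str) -> str:
    """Assemble a prompt that confines the model to the context."""
    context = "\n".join(retrieve(docs, query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Earth is an oblate spheroid, slightly flattened at the poles.",
    "HPE was formed in 2015 when Hewlett-Packard split in two.",
]
print(build_grounded_prompt(docs, "What shape is the Earth"))
```

The assembled prompt would then be sent to any chat-completion API; because the answer is present in the context, the model no longer has to guess.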
This AIIA report offers a detailed examination of different methods to overcome the limitations of LLMs, including:
- Zero-shot and few-shot prompting
- Retrieval-augmented generation (RAG) using vector databases and frameworks
- Fine-tuning of open-source and closed-source LLMs
- Common application design patterns
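To make the first of these concrete, here is a hedged sketch of few-shot prompting, with all names and the example task hypothetical: worked examples are prepended to the query so the model can infer the task and output format from the pattern alone, with no fine-tuning. A zero-shot prompt would consist of the final block only.

```python
# Sketch of few-shot prompt construction (illustrative names).
# Each (input, label) pair demonstrates the task; the final block
# leaves the label blank for the model to complete.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format worked examples followed by the unanswered query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The battery life is fantastic.", "positive"),
    ("It stopped working after two days.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)
```

The resulting string is what gets sent to a chat-completion API; the report covers when a handful of examples suffices and when fine-tuning is the better tool.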
If you're ready to get started with LLMs, HPE can accelerate your journey with the HPE Machine Learning platform, an AI-native solution built for enterprise scale. One of the key components of this platform, the HPE Machine Learning Development Environment, offers direct support for LLMs with prompt engineering, RAG, fine-tuning, and full training.
AI agents and LLMs are driving the next-generation software stack, creating smart applications that offer a new way to interact with the world and make decisions independently of human intervention.
Download the report now and learn how you can leverage LLMs and the next-generation software stack to power your business forward.
*A non-profit organization with 30+ global members including HPE, the AI Infrastructure Alliance's mission is to create a robust collaboration environment for companies and communities in the artificial intelligence (AI) and machine learning (ML) space. The alliance brings together top technologists across the AI spectrum and includes a wide range of member companies. Together, these companies and communities provide a glimpse into a future where AI creates real value for everyday businesses, not just big tech powerhouses.
Meet Bhavani Rao, HPE AI Product Marketing Manager
Bhavani is a Product Marketing Manager responsible for product messaging and positioning for HPE AI solutions. He has a diverse background that spans MLOps, DevOps, CI/CD, and relational and NoSQL databases. A recent convert to the potential of AI/ML, Bhavani is passionate about technology and how it can be leveraged to solve customer problems. Throughout his career, Bhavani has shared these learnings and best practices in numerous industry gatherings and publications.
Compute Experts
Hewlett Packard Enterprise
twitter.com/HPE_Cray
linkedin.com/showcase/hpe-servers-and-systems/
hpe.com/supercomputing