2 weeks ago - last edited 2 weeks ago
Automating HPE Storage and Network Operations Using PowerShell and Ollama (local LLM)
Introduction:
Integrating Large Language Models (LLMs) with PowerShell can significantly improve automation and administrative tasks, especially in Windows environments. The combination lets users tap the natural-language capabilities of LLMs directly from the PowerShell command line and cmdlets.
For a PowerShell developer, local LLMs (here, Ollama) offer the following benefits, which matter when customers need to run scripts in a dark site or datacenter without internet access or a proxy:
- No internet connection needed
- No API costs
- Complete privacy for sensitive data
- Perfect for repetitive tasks with clear rules
Details:
Local LLMs are AI models that run directly on a user's personal computer or local network. These models are typically smaller than the large-scale systems behind ChatGPT or Gemini; because they require fewer resources, they can run efficiently on everyday consumer hardware without overwhelming the CPU.
While the local LLMs most users can run may not match the capability of models like Gemini, they excel at handling specific, focused queries, and their performance and capabilities continue to improve rapidly.
Key aspects:
Microsoft support for LLMs:
Large Language Models (LLMs) can help generate Microsoft Graph API queries and other commands for managing Microsoft 365 services, making complex administrative tasks easier to perform.
LLM (Ollama integration) with PowerShell:
PowerShell scripts can communicate with locally hosted Large Language Models (LLMs), such as those provided by Ollama, through API calls. This allows users to send prompts and receive responses directly within the PowerShell environment, enabling functions like text generation, summarization, and question answering.
LLM command translation to PowerShell cmdlets:
Large Language Models (LLMs) can translate natural language instructions into PowerShell scripts or commands, simplifying the development of automation workflows for tasks such as system administration, configuration management, and more.
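As a sketch of this translation step, the snippet below (Python for illustration; the same HTTP call works from PowerShell's Invoke-RestMethod) asks a local Ollama model to reply with only a PowerShell command for a plain-English instruction. The system-prompt wording and the llama3.1 model name are assumptions, not part of the original post:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def build_translation_request(instruction: str, model: str = "llama3.1") -> dict:
    """Build an Ollama chat payload that asks for a bare PowerShell command."""
    system = ("You translate instructions into a single PowerShell command. "
              "Reply with only the command, no explanation.")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": instruction},
        ],
        "stream": False,  # one complete response instead of streamed chunks
    }


def translate(instruction: str) -> str:
    """Send the request to the local Ollama server and return the command text.

    Requires a running Ollama server with the model already pulled.
    """
    payload = json.dumps(build_translation_request(instruction)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"].strip()
```

With Ollama running locally, a call such as translate("List the five largest files in C:\\Temp") would return a candidate PowerShell one-liner, which should always be reviewed before execution.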
PEEL Integration:
Modules such as PowerShell's AI Shell and external tools like Lemonade Server with PEEL enable Large Language Models (LLMs) to offer contextual assistance and generate PowerShell code snippets based on user queries or terminal history. These integrations can greatly enhance scripting efficiency and aid in troubleshooting tasks.
What is Ollama and how it works:
Ollama is a tool that lets you run Large Language Models (LLMs) locally on your own computer — no internet connection or cloud service required.
- It downloads and runs open-source LLMs like LLaMA 3, Mistral, Gemma, Code LLaMA, etc.
- Users can chat with these models, use them for text generation, summarization, coding help, and more.
- It provides a REST API so developers can integrate local LLMs into scripts and applications (like PowerShell, Python, etc.).
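To illustrate the REST API point, here is a minimal sketch against Ollama's one-shot /api/generate endpoint (Python for illustration; the same call works from Invoke-RestMethod or curl). The model name and prompt are arbitrary examples:

```python
import json
import urllib.request


def build_generate_payload(model: str, prompt: str) -> dict:
    """One-shot completion request for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """POST the prompt to a locally running Ollama server and return its text.

    Requires the Ollama server to be running and the model pulled.
    """
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The completed text is returned in the "response" field
        return json.loads(resp.read())["response"]
```

For example, generate("llama3", "Summarize what a REST API is in one sentence.") would return the model's answer as a plain string, ready for further processing in a script.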
Install Ollama and pull llama3.1 (the model used in the script below):
# Pull the model
ollama pull llama3.1
# Start an interactive chat to confirm the model works
ollama run llama3.1
To confirm the API server is up, open http://localhost:11434 in a browser; it should reply "Ollama is running".
Example script to run in a PowerShell editor:
# Ask a simple true/false question
$question = "Is PowerShell cross-platform?"

# Define a JSON schema so the model must answer with a boolean
$schema = @{
    type       = "object"
    properties = @{
        answer = @{
            type = "boolean"
        }
    }
    required = @("answer")
}

# Build the request payload
$body = @{
    model    = "llama3.1"
    messages = @(
        @{
            role    = "user"
            content = $question
        }
    )
    stream = $false
    format = $schema
} | ConvertTo-Json -Depth 5   # depth must cover the nested schema (default of 2 would truncate it)

# Send the request to the local Ollama server
$response = Invoke-RestMethod -Uri "http://localhost:11434/api/chat" -Method Post -Body $body -ContentType "application/json"

# See the raw output (a JSON string)
$response.message.content   # { "answer": true }

# Output the result as a PowerShell object
$response.message.content | ConvertFrom-Json
Comparing major local LLM options:
When choosing a local LLM for PowerShell automation, here are some recommendations.
Use cases:
1. HPE Primera A670 storage using PowerShell and llama3 (Ollama)
Here is a prompt asking llama3: "Connecting to HPE Primera A670 storage using powershell".
The following is the model's response, with details and an example.
Note that the script was invoked locally, without internet access.
2. HPE Aruba network using PowerShell and llama3 (Ollama):
Here are the results for the prompt: "Configuring HPE Aruba switch using powershell".
The script output from the PowerShell cmdlet using llama3 (Ollama) is very useful for automating any HPE storage and networking series.
Summary:
Using Ollama to run LLaMA 3 locally, PowerShell developers can bring the power of AI directly into Windows-based infrastructure tasks—without relying on cloud APIs or compromising data privacy. This integration enables natural language-driven automation for managing HPE storage systems and network switches.
By sending prompts from PowerShell to the locally hosted LLaMA 3 model via REST API, administrators can:
- Generate and validate complex CLI or API commands.
- Auto-generate PowerShell scripts based on intent (e.g., "create a volume on HPE 3PAR").
- Summarize configuration files or logs.
- Answer questions about device commands or configuration syntax.
- Create human-readable documentation from raw command output.
Anand Thirtha Korlahalli
Infrastructure & Integration Services
Remote Professional Services
HPE Operations – Services Experience Delivery
I'm an HPE employee.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
