HPE OpsRamp

LLM Observability in OpsRamp

 
lsantiagos01
Occasional Contributor

Hello,

Is there any roadmap or recommended approach for LLM observability within OpsRamp?

I’m interested in understanding how the platform can be used or integrated to observe Large Language Model (LLM) workloads, covering inference metrics, GPU utilization, response latency, endpoint availability, and vector/embedding behavior.
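
To illustrate the GPU side, this is a rough sketch of the kind of collection I have in mind on a local inference host. It assumes nvidia-smi is available there, and the metric names are just placeholders I made up, not anything OpsRamp-specific:

```python
# Hypothetical sketch: sample per-GPU utilization and memory via nvidia-smi.
# Assumes NVIDIA drivers and nvidia-smi on the inference host; metric names are placeholders.
import json
import subprocess


def sample_gpu_metrics() -> list[dict]:
    """Return one reading per GPU: utilization (%) and memory used/total (MiB)."""
    out = subprocess.run(
        [
            "nvidia-smi",
            "--query-gpu=index,utilization.gpu,memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        capture_output=True,
        text=True,
        check=True,
    ).stdout

    readings = []
    for line in out.strip().splitlines():
        idx, util, mem_used, mem_total = [v.strip() for v in line.split(",")]
        readings.append(
            {
                "gpu.index": int(idx),
                "gpu.utilization_pct": float(util),
                "gpu.memory.used_mib": float(mem_used),
                "gpu.memory.total_mib": float(mem_total),
            }
        )
    return readings


if __name__ == "__main__":
    # Print one sample as JSON so a custom script monitor could parse it.
    print(json.dumps(sample_gpu_metrics(), indent=2))
```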

Does OpsRamp currently provide any native integration, API Poller, or customizable template that could be adapted to monitor AI or LLM inference pipelines, whether deployed locally or via providers like OpenAI, Hugging Face, or Azure AI?
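
For example, here is a rough, hypothetical probe of the kind I could imagine adapting for a script-based or API Poller style check against an OpenAI-compatible chat completions endpoint. The endpoint URL, model name, and metric names below are placeholders of my own, not an existing OpsRamp template:

```python
"""Hypothetical LLM endpoint probe: measures availability, latency, and token
usage for an OpenAI-compatible chat completions API, and prints the results
as JSON that a generic poller or script monitor could ingest."""
import json
import os
import time

import requests

# Placeholder endpoint, key, and model; replace with local or provider-hosted values.
ENDPOINT = os.environ.get("LLM_ENDPOINT", "https://api.openai.com/v1/chat/completions")
API_KEY = os.environ.get("LLM_API_KEY", "")
MODEL = os.environ.get("LLM_MODEL", "gpt-4o-mini")


def probe_llm_endpoint(prompt: str = "ping") -> dict:
    """Send one small inference request and return basic observability metrics."""
    headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 8,
    }
    start = time.perf_counter()
    try:
        resp = requests.post(ENDPOINT, headers=headers, json=payload, timeout=30)
        latency_ms = (time.perf_counter() - start) * 1000.0
        resp.raise_for_status()
        usage = resp.json().get("usage", {})
        return {
            "llm.endpoint.available": 1,
            "llm.response.latency_ms": round(latency_ms, 2),
            "llm.tokens.prompt": usage.get("prompt_tokens", 0),
            "llm.tokens.completion": usage.get("completion_tokens", 0),
        }
    except requests.RequestException:
        # Endpoint unreachable or returned an error status.
        return {
            "llm.endpoint.available": 0,
            "llm.response.latency_ms": round((time.perf_counter() - start) * 1000.0, 2),
        }


if __name__ == "__main__":
    print(json.dumps(probe_llm_endpoint(), indent=2))
```

Any pointers on whether something along these lines maps onto an existing integration, API Poller configuration, or monitoring template would be much appreciated.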

Thank you,
lsantiagos01