Cisco AI-ready PODs are purpose-built infrastructure solutions designed to help organizations harness the power of AI, whether you're just getting started or scaling complex, high-performance workloads. But how do you maintain visibility across every layer of that infrastructure? In this demo, we walk through how Splunk Observability Cloud delivers comprehensive, full-stack visibility into Cisco AI-ready PODs, from Kubernetes clusters and Red Hat OpenShift nodes all the way up to NVIDIA NIM-based LLM applications running on top.

What you'll see in this demo:
- Monitoring 14+ Kubernetes clusters, including a Red Hat OpenShift cluster running on a Cisco AI POD
- Node-level and pod-level infrastructure metrics (CPU, memory, disk, network)
- Application Performance Monitoring (APM) for a RAG application powered by NVIDIA AI Enterprise
- Distributed tracing to identify a critical latency issue in an LLM inference workload
- GPU utilization dashboards powered by metrics from Intersight and Nexus
- Root cause identification: a single overloaded GPU causing user-facing slowdowns, and exactly how to fix it

Splunk Observability Cloud gives AI infrastructure teams the visibility and context they need to detect problems faster, troubleshoot across the full stack, and optimize performance, so your AI workloads run the way they were designed to.

🔗 Learn more about Cisco AI-ready PODs: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/ai-pods-aag.html
🔗 Learn more about Splunk Observability Cloud: https://www.splunk.com/en_us/products/observability-cloud.html

#Splunk #Cisco #AIPod #Observability #AIInfrastructure #NVIDIA #NIM #OpenShift #Kubernetes #GPUMonitoring #ITOps #AIOps