The Datadog MCP Server acts as a bridge between your observability data in Datadog and any AI agent that supports the Model Context Protocol (MCP). Building the server well turned out to be harder than it looks: when you build an MCP server, you don't control the agent calling it or the model powering it. You're designing for a very general space, where the same server might be called by Claude Code, Cursor, or something completely custom... and it could be powered by anything from a small, local LLM to a large foundation model.

In this episode of Datadog Illuminated, Reilly Wood, Staff Engineer on Datadog's MCP team, walks through how the team built the Datadog MCP Server from scratch. How do you keep an agent efficient when context windows are limited? How do you prevent tool overload? What actually helps guide an agent toward self-correction when things go wrong? Reilly answers these questions and shares how to help an LLM navigate a user's Datadog environment reliably and without wasted tokens.

Hosted by Whitney Lee, Senior Technical Advocate at Datadog.

Watch all Datadog Illuminated episodes → https://www.youtube.com/playlist?list=PLVOmGuoGYFgpj1-kAXLRKmFWqZ99HAHu7

#Datadog #MCP #MCPServer #AIAgents #LLM #DatadogIlluminated

▬▬▬▬▬▬ Related Links ▬▬▬▬▬▬
🔗 Datadog MCP Server → https://docs.datadoghq.com/bits_ai/mcp_server/
🔗 MCP Protocol Spec → https://spec.modelcontextprotocol.io/
🔗 TOON (Token Optimized Object Notation) → https://github.com/toon-format/toon
🔗 Board content gist → https://gist.github.com/wiggitywhitney/f919d9d9e571e8bf6f5fba4a81fcbc55