You don’t know what your agent will do until it’s in production. In this technical deep dive, learn why production monitoring for AI agents requires a new approach to observability.

When you ship traditional software to production, you usually have a good sense of what to expect. Users click buttons, fill out forms, and navigate along predictable paths. Your test suite may cover the majority of code paths, and monitoring tools track the usual signals: error rates, response times, and database queries. When something breaks, you look at logs and stack traces.

Agents operate differently. They accept natural language input, where the space of possible queries is unbounded. They are powered by LLMs that are sensitive to subtle changes in prompts and can produce different outputs for the same input. They also make decisions across multi-step workflows, tool calls, and retrieval operations that are difficult to fully anticipate during development.

Technical guide: https://www.langchain.com/conceptual-guides/production-monitoring
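To make the contrast concrete, here is a minimal sketch of what per-step tracing for a multi-step agent workflow might look like. The `Trace`, `StepRecord`, `search_docs`, and `summarize` names are hypothetical stand-ins, not any particular library's API; the point is that each tool call and retrieval step is recorded individually, with its input, output, and latency, rather than as a single opaque request.

```python
import time
from dataclasses import dataclass, field

@dataclass
class StepRecord:
    """One recorded agent step: a tool call or retrieval operation."""
    name: str
    input: str
    output: str
    latency_ms: float

@dataclass
class Trace:
    """Collects step records across one agent run."""
    steps: list = field(default_factory=list)

    def record(self, name, fn, arg):
        # Wrap a step so its input, output, and timing are captured.
        start = time.perf_counter()
        result = fn(arg)
        self.steps.append(
            StepRecord(name, arg, result, (time.perf_counter() - start) * 1000)
        )
        return result

# Hypothetical tools standing in for real retrieval and LLM calls.
def search_docs(query):
    return f"3 documents matching '{query}'"

def summarize(text):
    return text.upper()

trace = Trace()
docs = trace.record("retrieval", search_docs, "billing errors")
answer = trace.record("summarize", summarize, docs)

# Each step is observable on its own, not just the final answer.
for step in trace.steps:
    print(f"{step.name}: {step.latency_ms:.2f} ms")
```

In a real system the trace would be shipped to an observability backend, but even this sketch shows why agent monitoring differs from request-level monitoring: the unit of observation is the individual decision, not the HTTP request.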