Building production AI agents reveals a harsh truth: stateless architectures that work for simple demos become painfully fragile at scale. When a long-running workflow fails, you lose all of its compute, its progress, and your users' trust. This is why companies like OpenAI use Temporal for products like Deep Research: to build durable agents that recover from failures instead of forcing users to start over.

In this talk, you'll learn how to:
- Build resilient AI agents that survive crashes and resume from checkpoints
- Implement durable execution with PydanticAI and Temporal
- Gain production-grade observability with Pydantic Logfire and Evals
- Compose multi-agent systems that handle failures gracefully
- Stop burning money on failed agent runs that restart from scratch

We'll walk through real code examples, including a Deep Research implementation that demonstrates how proper architecture turns fragile prototypes into production-ready systems.

Links:
- Demo code on GitHub: https://github.com/pydantic/pydantic-stack-demo/tree/main/durable-exec
- Pydantic AI documentation: https://ai.pydantic.dev/
- Temporal integration guide: https://ai.pydantic.dev/durable_execution/temporal/
- Pydantic Logfire docs: https://logfire.pydantic.dev/docs/

Samuel Colvin is a Python and Rust expert whose work has redefined data validation and observability for developers. His Pydantic library is downloaded more than 350M times every month, serving as a core dependency for the OpenAI SDK, Anthropic SDK, LangChain, LlamaIndex, and countless other GenAI projects.

---

Socials:
- LinkedIn: https://www.linkedin.com/company/pydantic/
- X (Twitter): https://x.com/pydantic
- GitHub: https://github.com/pydantic
- Website: NA
- Company: Pydantic (https://pydantic.dev)
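The core idea behind "resume from checkpoints" can be sketched in plain Python. This is not the PydanticAI or Temporal API, just a minimal, hypothetical illustration of durable execution: each completed step's result is persisted, so a restarted run skips finished work instead of redoing it (the step names `plan`/`search`/`write` are invented for the example):

```python
import json
import tempfile
from pathlib import Path

def run_workflow(steps, checkpoint_file: Path) -> dict:
    """Run named steps in order, checkpointing each result to disk.

    On restart, steps already recorded in the checkpoint file are skipped,
    so a crash mid-workflow doesn't throw away completed work.
    """
    done = json.loads(checkpoint_file.read_text()) if checkpoint_file.exists() else {}
    for name, fn in steps:
        if name in done:
            continue  # resume: this step already succeeded in a prior run
        done[name] = fn()
        checkpoint_file.write_text(json.dumps(done))  # persist after each step
    return done

# Track which step functions actually execute, to show resumption working.
calls = []

def step(name):
    def fn():
        calls.append(name)
        return f"{name}-result"
    return (name, fn)

ckpt = Path(tempfile.mkdtemp()) / "state.json"
steps = [step("plan"), step("search"), step("write")]

run_workflow(steps, ckpt)           # first run executes all three steps
result = run_workflow(steps, ckpt)  # second run skips all of them via the checkpoint
# calls == ["plan", "search", "write"]  (each step ran exactly once)
```

Temporal provides the production version of this pattern: workflow state is journaled server-side, so retries and worker crashes replay to the last completed activity rather than restarting the agent from scratch.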