Build Durable AI Agent Pipelines That Auto-Recover Using Agentspan with LangGraph/LangChain/CrewAI/OpenAI SDK etc

For business enquiries: ai@futurminds.com

📚 Resources & Links:
- Agentspan: https://agentspan.ai/
- Resources used in this video: https://www.skool.com/ai-architects-8928

Thanks to Agentspan for sponsoring this video.
-----------
Build crash-proof AI agent pipelines that survive process death - no more duplicate emails, wasted tokens, or lost state on restart. This step-by-step tutorial shows how Agentspan, a free open-source runtime powered by Netflix Conductor, adds durable execution to your existing LangGraph, CrewAI, or OpenAI Agents SDK pipelines with just a few lines of code.

Agentspan separates your code from the execution state. Your Python process becomes a stateless worker, while the Agentspan server persists every completed tool call and LLM call independently. If your process crashes, the server dispatches only the remaining work to the next worker - completed steps never re-run. Each tool call compiles to its own Conductor task, giving you per-tool-call durability instead of node-level checkpointing: the blast radius of a crash is one tool call, not the entire pipeline.
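The per-tool-call durability idea can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Agentspan's actual API - the step names and the in-memory store below are made up; in Agentspan the completed-task store lives on the Conductor server:

```python
# Conceptual sketch of per-task durable execution: each step's result is
# persisted the moment it completes, so a restarted worker skips finished
# steps instead of re-running the whole pipeline.
completed = {}  # stands in for the server-side task store (SQLite/PostgreSQL)

def run_step(name, fn):
    if name in completed:        # already persisted -> never re-fires
        return completed[name]
    result = fn()                # perform the actual tool/LLM call
    completed[name] = result     # persist before moving to the next step
    return result

def pipeline():
    a = run_step("fetch", lambda: "data")
    b = run_step("summarize", lambda: a.upper())
    return run_step("send_email", lambda: f"sent:{b}")

print(pipeline())  # first run executes all three steps -> sent:DATA
print(pipeline())  # a "restarted" run replays entirely from the store
```

If the process died between "summarize" and "send_email", only "send_email" would run on restart - the crash's blast radius is one step.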
⏰ Timestamps:
0:00 - Why AI Agent Pipelines Break in Production
0:36 - What Is Agentspan
1:20 - The In-Process Execution Problem
3:09 - How Agentspan Solves Crash Recovery
4:33 - Powered by Netflix Conductor
4:55 - Before (LangGraph Pipeline Code Walkthrough)
6:50 - Running the Fragile Pipeline
7:30 - Installing Agentspan in 3 Steps
8:30 - After (Agentspan Code Walkthrough)
11:05 - Live Demo (Crash and Resume)
14:10 - Automatic Retries and Observability Dashboard
14:55 - Full Feature List and Next Steps

🔑 Key Takeaways:
• Add durable execution to existing agent pipelines with pip install Agentspan and a @tool decorator
• Each tool call persists independently - completed tools never re-fire after a crash
• The Agentspan server handles automatic retries with exponential backoff for 429/5xx errors
• Built-in dashboard at localhost:6767 shows every LLM call, tool call, token count, and timing
• Workers are stateless and disposable - scale by adding more; state lives on the server

❓ FAQ:
Q: Does Agentspan replace LangGraph or CrewAI?
A: No - Agentspan layers on top of your existing framework. You keep your agents, tools, and pipeline logic; Agentspan adds durable execution by moving orchestration state to a separate server.
Q: How does Agentspan prevent duplicate tool calls after a crash?
A: Each tool call compiles to its own persisted task on the Conductor server. When a worker reconnects after a crash, the server skips all completed tasks and dispatches only the remaining work.
Q: Is Agentspan free and self-hosted?
A: Yes - Agentspan is MIT-licensed and fully open source, with no cloud service, no telemetry, and no paid tiers. The server runs on your machine, using SQLite for dev or PostgreSQL for production.
Q: What LLM providers does Agentspan support?
A: Agentspan supports 15+ providers, including OpenAI, Anthropic, Gemini, Mistral, Groq, and Ollama. Switch providers by changing one string in the Agent definition.
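Retry with exponential backoff - which the takeaways say the Agentspan server applies to 429/5xx responses - looks roughly like this in plain Python. This is a generic sketch of the pattern, not Agentspan's implementation; the real retry policy and its knobs live on the server:

```python
import time

RETRYABLE = {429, 500, 502, 503}  # rate limits and transient server errors

def call_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Retry fn() on retryable HTTP-style statuses, doubling the wait each time."""
    for attempt in range(max_attempts):
        status, body = fn()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body  # give up after the final attempt

# Example: a flaky LLM call that returns 429 twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return (429, "rate limited") if attempts["n"] < 3 else (200, "ok")

print(call_with_backoff(flaky, base_delay=0.01))  # -> (200, 'ok') on attempt 3
```

Because the backoff happens server-side in Agentspan, your worker code stays free of retry loops like this one.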
🔔 Subscribe to FuturMinds for weekly AI agent tutorials and automation guides - helping developers build production-ready AI systems. Whether you're looking for crash recovery for AI agent pipelines, durable execution for LangGraph workflows, or a way to prevent duplicate tool calls in production, this tutorial covers the complete setup from pip install to live crash-and-resume demo using Agentspan with Netflix Conductor.