Production-grade AI agent systems face two fundamental challenges: maintaining context across sessions and managing the computational cost of multi-step agentic workflows. This workshop addresses both through hands-on implementation of persistent memory systems and semantic caching.

We'll build agents with long-term memory using LangGraph's recently released 1.0 memory capabilities. You'll implement three memory types: semantic memory for storing structured facts and user preferences, episodic memory for retaining interaction history, and procedural memory for adapting agent behavior over time. The workshop covers memory architecture patterns, checkpointing strategies, and cross-thread persistence using LangGraph's document store and the LangMem SDK.

Agent workflows generate significantly higher token usage than single-turn chat applications. To address this, we'll implement semantic caching with Redis LangCache. Participants will leave with working code, architectural patterns for memory-enabled agents, and quantitative approaches to measuring cost optimization in production deployments.

Learning Objectives:
- Implement stateful agents with short-term and long-term memory using LangGraph checkpointers and stores
- Design memory systems incorporating semantic, episodic, and procedural memory patterns
- Deploy semantic caching with Redis to reduce LLM API costs and improve response latency

Prerequisites: An OpenAI or Anthropic API key is helpful.

Tools/Languages: Docker, Python, uv

Code Notebooks: https://github.com/EconoBen/langgraph-redis-workshop
Slides: https://github.com/EconoBen/langgraph-redis-workshop/tree/main/slides
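To make the three memory types concrete before the workshop, here is a minimal plain-Python sketch of the distinction between semantic, episodic, and procedural memory. All names here (`AgentMemory`, `remember_fact`, and so on) are hypothetical and illustrative; they are not LangGraph's store or LangMem API, which the workshop covers properly.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AgentMemory:
    # Hypothetical container illustrating the three memory types;
    # not LangGraph's actual store API.
    semantic: dict = field(default_factory=dict)    # structured facts and user preferences
    episodic: list = field(default_factory=list)    # record of past interactions
    procedural: dict = field(default_factory=dict)  # rules that adapt agent behavior

    def remember_fact(self, key, value):
        # Semantic memory: durable facts, keyed for later lookup.
        self.semantic[key] = value

    def log_episode(self, user_msg, agent_msg):
        # Episodic memory: append-only interaction history with timestamps.
        self.episodic.append({"t": time.time(), "user": user_msg, "agent": agent_msg})

    def update_behavior(self, rule, instruction):
        # Procedural memory: instructions that change how the agent responds over time.
        self.procedural[rule] = instruction

mem = AgentMemory()
mem.remember_fact("preferred_language", "Python")
mem.log_episode("How do I cache LLM calls?", "You can use semantic caching...")
mem.update_behavior("tone", "Answer concisely, with code examples")
```

In LangGraph, the analogous pieces are a checkpointer for short-term, per-thread state and a store for long-term, cross-thread memory; this sketch only shows what kind of information each memory type holds.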
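The core idea behind semantic caching can also be sketched in a few lines: instead of matching queries by exact string, look up prior responses by embedding similarity, so near-duplicate questions skip the LLM call entirely. This toy version uses a character-frequency "embedding" purely for illustration; Redis LangCache provides real embeddings and a production-grade lookup, and `embed`, `SemanticCache`, and the 0.95 threshold here are all assumptions for the sketch.

```python
import math

def embed(text):
    # Toy embedding: 26-dimensional letter-frequency vector.
    # A real semantic cache would use an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.entries = []          # list of (embedding, response) pairs
        self.threshold = threshold

    def get(self, query):
        # Return a cached response if any stored query is similar enough.
        q = embed(query)
        for vec, response in self.entries:
            if cosine(q, vec) >= self.threshold:
                return response    # cache hit: no LLM call needed
        return None                # cache miss: call the LLM, then put()

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("What is LangGraph?", "LangGraph is a framework for stateful agents.")
hit = cache.get("what is langgraph")   # near-duplicate phrasing still hits
miss = cache.get("zzzz")               # unrelated query misses
```

The cost argument follows directly: every cache hit replaces a paid, high-latency LLM call with a local similarity lookup, which is why the workshop pairs caching with quantitative cost measurement.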