How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus making a single LLM request? And how do you debug, test, and deploy these things? The industry is clearly in exploration mode—we're seeing good ideas implemented badly and expensive mistakes made at scale. Google needs to get this right more than most companies, because AI is both its biggest opportunity and an existential threat to its search-based business model.

Christina Lin from Google joins us to discuss the Agent Development Kit (ADK), Google's open-source Python framework for building agentic pipelines. We dig into the fundamental question of when agent pipelines make sense versus traditional code, exploring separation of concerns for agents, tool calling versus MCP servers, Google's grounding feature for citation-backed responses, and agent memory management. Christina also explains A2A (Agent2Agent), Google's protocol for distributed agent communication that could replace both LangChain and MCP. We cover practical concerns too: debugging agent workflows, evaluation strategies, and how to think about deploying agents to production.

If you're trying to figure out when AI belongs in your processing pipeline, how to structure agent systems, or whether frameworks like ADK solve real problems versus creating new complexity, this episode breaks down Google's approach to making agentic systems practical for production use.
--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join

Google Agent Development Kit Announcement: https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/
ADK Documentation: https://google.github.io/adk-docs/
Google Gemini: https://ai.google.dev/gemini-api
Google Vertex AI: https://cloud.google.com/vertex-ai
Google AI Studio: https://aistudio.google.com/
Grounding with Google Search: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview
Model Context Protocol (MCP): https://modelcontextprotocol.io/
Anthropic MCP Servers: https://github.com/modelcontextprotocol/servers
LangChain: https://www.langchain.com/

Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

--
0:00 Intro
2:48 Working at Google on AI Innovation
6:00 Google's AI Leadership and Responsible AI
9:34 What Is an Agentic Pipeline?
13:00 Building Agent Toolkits with ADK
15:00 Understanding the Agent Architecture
19:31 How Agents Discover and Use Tools
23:48 Parameter Extraction and Tool Execution
27:00 Structured vs Natural Language Outputs
29:00 Using Grounding for Real-Time Data
32:00 Managing Token Costs and Context Limits
35:42 When Not to Use LLMs
37:00 The Challenge of Edge Cases with LLMs
40:00 Testing Agentic Systems
42:00 Defining Test Criteria for Agents
44:58 Running Test Suites and Evaluations
47:06 Deploying Agents Like Python Apps
50:33 Building Safety Guardrails for Agents
53:00 Authentication and Authorization for Agents
55:09 MCP vs A2A Protocols
58:00 Agent Discovery and Communication
1:04:14 Outro