Let's be honest: how many amazing AI agents have you built that are currently just sitting in Jupyter notebooks, living in local Python scripts, or, even worse, have never made it out of those default web UIs? AI agents are great in isolation, and they work well in a default web UI, but the moment you try to integrate them into your actual production application, some kind of front end or back end, everything seems to fall apart. They don't fit. The problem isn't your agent; it's how it's architected. Monolithic AI scripts don't play nicely with modern production systems, or with the systems you already have. Today, we're going to fix that. We'll build an AI system designed from day one to be integrated: a distributed team of specialists that your existing app can actually talk to.
So, in this demo series, we're going to build a course creator: an AI agent system that plugs directly into a standard web front end. Behind this simple UI, there's a whole squad of AI agents working. First is a researcher agent, who finds the facts. Then comes a judge agent, who's surprisingly picky: it evaluates those facts in a loop until they're good enough. Once the judge is happy, it passes everything to the content builder agent, and that one writes the final course, which then streams back to our user. We're using Google's Agent Development Kit, or ADK for short, to build these specialized brains, using patterns like the loop agent and the sequential agent. Finally, we're using the Agent2Agent protocol, or A2A for short, to let them talk to each other over standard web protocols. This just means that all of our agents are microservices, and your app already knows how to talk to microservices. Let's look at the
researcher first. In ADK, it's simple: give it a model, Gemini 2.5 for example, and a tool, Google Search. Its instruction is focused: find data, summarize it, done. Now, let's take a look at the judge. This one is crucial. If you're building automated workflows, you can't have agents replying with "maybe" or "it depends"; you need hard data. ADK lets us enforce a contract using Pydantic. We define a JudgeFeedback model that must return a literal pass or fail. It's essentially type safety, but for your AI.
We attach this output schema to our agent, and it becomes reliable: now it's an API that happens to speak English internally. But do they work? We don't guess, we test. ADK has a built-in playground that lets us do exactly that. I can feed the judge some terrible research I just made up and verify that it fails me with structured JSON. It's much easier to debug this now than when it's buried in a massive workflow. So to
recap: we built focused agents with single responsibilities, we used Pydantic schemas to force reliable, structured output from our judge, and we verified each agent individually before trying to wire them together. Now that our team is hired, we need to give them a way to talk to each other without us constantly mediating. Bye for now.
Get the code → https://goo.gle/4p63Iqm
Agent Development Kit (ADK) → https://goo.gle/4j2EUOx

Tired of building AI features that work great locally but collapse when you try to integrate them into your production backend? Time to stop building a single, stressed-out monolith agent and hire a specialized squad. Join Amit Maraj as he breaks down how to use Google's Agent Development Kit (ADK) and the Agent2Agent Protocol (A2A) to build complex features with loop agents, sequential agents, and critical judges that communicate seamlessly over web standards, turning any AI logic into reliable, connectable components.

More resources:
Agent-to-Agent Protocol (A2A) → https://goo.gle/4plOZrH
Google Cloud Run → https://goo.gle/3N6Cu5w
Gemini 3 models → https://goo.gle/3Y07kiL
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech

#AIAgents #GoogleCloud

Speaker: Amit Maraj
Products Mentioned: Agent Development Kit, Agent2Agent Protocol