I've got something really exciting that I've been exploring recently that I want to share with you. It's all about creating agentic experiences with something called AG-UI. Now, let me set the stage for you a bit first. There's a good chance that you're stuck on the idea that we need to be building apps or SaaS platforms around our AI agents. But that isn't where the future is headed. The real shift is embedding agents into our applications so they become a natural part of the product experience. And it seems natural, right? I mean, when the AI hype bubble bursts, the products that are left over are generally not going to be the ones competing as agents, but instead the ones that deeply integrate AI agents into a product that delivers its own unique value. That's what I mean by agentic experiences and the future of SaaS, because these platforms are the ones that are not going to be wiped out as more general agents like ChatGPT and Operator get more powerful and make the more niche agents simply irrelevant. So right now I'll introduce you to how to build these kinds of agentic experiences, which are not trivial, by the way; they're generally a lot harder to build than isolated chatbots.
But luckily, I have the tech stack for you to make it simple. So, I'll introduce you to that, get into some demos and really important principles, and then we'll build together. I'll even take an agent that I've created previously on my channel and create a full application around it. So, the primary part of our tech stack here is a protocol developed by the CopilotKit team called AG-UI. I covered it on my channel around when it first came out.
Super powerful stuff. Now, the big question is, what does this actually do for us? Well, simply put, AG-UI standardizes how front-end applications connect to AI agents. It's like MCP, but instead of connecting agents to tools, we are connecting agents to our applications in a standard way. It's fully open source, so they have this GitHub repo, which I'll link to in the description. They have a nice diagram here that shows how it works at a high level as well. So we have AG-UI as a middleman that provides a standard of communication between our front-end applications and our AI agents. And as long as both have support for AG-UI, which all of these AI agent frameworks do, then we have the seamless communication that makes it so we can build full applications in hundreds of lines of code instead of thousands and thousands. And that's what we'll see as we build together and go through some of these demos. So we just have to pick our front-end library and our AI agent framework, and then AG-UI takes care of a lot of things. And
so for our front end, we're going to be using CopilotKit. This is a React library that makes it very easy for us to build user-facing agentic applications. It integrates natively with AG-UI, of course. And then out of all the options we have for our AI agent framework, you know if you've been following my channel that Pydantic AI is my favorite. And they recently added a direct first-party integration between CopilotKit and Pydantic AI through AG-UI. They even have an AG-UI doc section in Pydantic AI. So, I'm super excited for this. This was actually the catalyst for me to make this video; I've been talking to the CopilotKit team about this and actually asking for this integration. So, I'm super excited that it's finally here, because now we can build these kinds of user-interactive applications with Pydantic AI agents under the hood driving the whole thing. And these are the demos that they have for us that I was talking about earlier. I'll link to this in the description.
It's a great resource to just explore the power of AG-UI. So, they have all of these different demos that each represent something that would very much not be trivial to build without AG-UI.
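Why is that? At its core, AG-UI models everything the agent sends to the frontend as a stream of typed events. Here's a minimal sketch in plain Python; the event names follow the AG-UI protocol's event types, but the payload fields are simplified for illustration:

```python
import json

def agent_run() -> list[dict]:
    """Sketch of the event stream a backend emits for one agent run."""
    return [
        {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"},
        {"type": "TEXT_MESSAGE_START", "messageId": "m1", "role": "assistant"},
        # The answer streams in as deltas, so the UI can render token by token:
        {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Here is "},
        {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "a streamed answer."},
        {"type": "TEXT_MESSAGE_END", "messageId": "m1"},
        {"type": "RUN_FINISHED", "threadId": "t1", "runId": "r1"},
    ]

# The frontend reassembles the message from the deltas:
text = "".join(e["delta"] for e in agent_run()
               if e["type"] == "TEXT_MESSAGE_CONTENT")

# On the wire (e.g. over SSE), each event is one JSON payload:
wire = [json.dumps(e) for e in agent_run()]
```

Any backend that emits this event shape can talk to any AG-UI frontend, which is exactly why swapping the agent framework doesn't touch the UI.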
But with AG-UI, it's really easy. You can even see the code for both the front end, with CopilotKit here, and the back end; I have Pydantic AI selected here, but you can select any AI agent framework you want and it'll immediately drop that in on the back end, and then we have to change nothing in the front end. That's what AG-UI gives us: that standard communication, so we can move from LangGraph to Agno, or Agno to Pydantic AI, and our front-end app doesn't have to change at all. So let's actually see this in action; I want you to see how this really does unlock agentic experiences for us. Like in this example
right here, it is a recipe builder, and there's a state sync between our front end and our back end thanks to AG-UI. So, for example, I can add an ingredient here; I can say I'm adding in beef, one pound of beef. And then I can ask my agent, what are my ingredients? And it's going to immediately recognize the changes that I made in the front end here, because we have that two-way sync. There we go, we got our ingredients. And then I can go the other way around: I can say I want to make a recipe with a lot of beef. All right, so now instead of syncing the front end to the backend agent, it's the agent that's going to be updating our front end here. So it's creating the recipe, and boom, it renders out everything in a very beautiful way in the front end. So we have our typical chat application on the right-hand side, but then it's deeply integrated into the rest of the components of our front end. That's what I mean by agentic experiences. Really,
really nice. We have our instructions here; we can actually improve them with AI as well. We have our list of ingredients, and I can continue to collaborate with the agent here, making changes back and forth to make my perfect recipe. Obviously it's just a very simple example here. But yeah, definitely play around with these different things, like human-in-the-loop collaboration with our agent, and tool-based generative UI, where we can actually build tools in our front end that we pass into our AI agent dynamically. There are so many powerful things here. And like I said, we can view the code, so we can see exactly how the front end was built with CopilotKit and how the back end was built with the Pydantic AI agent. All right, so now that you know how AG-UI works and what it really gives us, let's get into actually building with it. The easiest way to get started with building these agentic experiences with AG-UI is to follow this quick start, which I'll link to in the description. This is the CopilotKit documentation, and they have an entire section for working with Pydantic AI specifically; they do that for all of their integrations. It's really, really cool. And so to get started we just have to have npm installed on our machine and we can copy
this command here. So I'll actually go into my terminal and let's walk through this really quickly. It's going to set up a new project for us using CopilotKit as the front end, with AG-UI in between, and we can select our agent framework. I'll just name it "test" here, because I have something else set up already that I'll demo for you. And there we go, we can select our framework; I can say I want to use Pydantic AI. There we go. And so, it's going to open up your browser for authentication, because there is a cloud version of CopilotKit. We don't actually have to pay for anything, though. So we can just get through this and authenticate, and then I'll come back once I have this done.
Okay, there we go. So, I'm signed in. It creates a CopilotKit Cloud API key as well, if you want to host with them. You don't have to, though, and for demonstration purposes we're not going to be doing anything with that right now, but yeah, they even give us some documentation here. And so, we can open this new folder that was created for us. I have it open right now, and there's a readme here that walks you through the quick start, showing how you can get things set up in both the front end and the back end. And it's literally just a couple of commands; it's so easy to get this up and running. That's going to give us a quick start that looks kind of like the demos that we saw here, but now we have something that is running entirely locally. And there are a couple of different things we can try to really see the power of AG-UI, and we can start to build on top of this application as well. But I'll just show you really quickly. For example, I can say set
the theme to green. And so this is a tool that we actually build in the front end and send into our agent. So our backend agent, thanks to AG-UI, doesn't even have to know that there's a tool to change the theme; we just pass that in from the front end. I can also say, write a proverb about the difficulties of SQL. All right, so we'll send this in, it'll make a proverb for us, and then boom, we immediately have that state sync like that other demo I was showing you, where now our front end has this proverb here, displayed and rendered out in a nice way. I can say, write another one, and we can do another state sync here. And then I can delete one of these as well; I'll just delete this first one, and then I'll ask, "I deleted one, which one is left?" Right? So we can know that the front end is also syncing back to the back end as well.
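That two-way sync rests on two event types: the backend can push a full STATE_SNAPSHOT, or an incremental STATE_DELTA expressed as a JSON Patch (RFC 6902). Here's a toy sketch of how a client might apply them, implementing just the two patch operations this example needs (event and field names follow the AG-UI spec, but this simplified apply logic is my own illustration):

```python
import copy

def apply_event(state: dict, event: dict) -> dict:
    """Apply an AG-UI-style state event to the client's local state copy."""
    if event["type"] == "STATE_SNAPSHOT":
        return copy.deepcopy(event["snapshot"])  # replace state wholesale
    if event["type"] == "STATE_DELTA":
        state = copy.deepcopy(state)
        for op in event["delta"]:
            parts = op["path"].strip("/").split("/")
            target = state
            for key in parts[:-1]:
                target = target[key]
            if op["op"] == "add" and parts[-1] == "-":
                target.append(op["value"])       # JSON Patch: append to array
            elif op["op"] == "replace":
                target[parts[-1]] = op["value"]
        return state
    return state

ui_state: dict = {"proverbs": []}
# Agent pushes a full snapshot...
ui_state = apply_event(ui_state, {
    "type": "STATE_SNAPSHOT",
    "snapshot": {"proverbs": ["SQL is hard; indexes are harder."]},
})
# ...then an incremental delta appending one proverb:
ui_state = apply_event(ui_state, {
    "type": "STATE_DELTA",
    "delta": [{"op": "add", "path": "/proverbs/-",
               "value": "A JOIN in time saves nine."}],
})
```

Edits made in the UI flow the other direction the same way: the frontend sends its current state with the next run, so the agent always sees what you're seeing.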
And then the last thing that they have
for the demo here. I'm just trying to
like show you the different kinds of
things that we can do with this front
end. It's really, really neat. We can
also render out components in our chat
UI. So, we're not limited to just having
a bland conversation here and then
passing things into the rest of our app.
We can also render out cool things here.
So, I can say, uh, what is the weather
in Orlando, Florida? And it's going to
render out a really nice looking card
here. So we can take tool calls from the
back end and standardize the format like
actually make it look like a nice
component in our React frontend. So
super cool. That's what we have for our
quick demo here. But now let's actually use this as a starting point to build our own applications on top. Now, to build on top of this application, we're not going to dive straight into implementation. There's one really cool thing first, because the CopilotKit team have built a vibe coding MCP server. Now, you know that I'm not the biggest fan of vibe coding, so maybe this isn't the best name for what I'm about to use it for, but this is an MCP server that's kind of like Archon for knowledge retrieval. It gives our AI coding assistant the ability to search through the CopilotKit and AG-UI documentation and best practices, so it becomes our expert guide for the implementation. Super cool. So if you're building any kind of agentic experience with this tech stack, definitely use this MCP server. That's what I'm going to be using right now. And they have instructions here, based on your AI coding assistant, like Cursor or Cline or Windsurf, for exactly how to hook it up.
Now, they don't have instructions here for Claude Code specifically, but I've got that for you right now. You can copy this URL right here and then go into your editor. I have my terminal open here, and I'll just paste the command: it's claude mcp add, with SSE as the transport, then you can call the server whatever you want, and the URL is the one that we copied from the documentation. So I'll add that in here, and then I can also run claude mcp list to test the connection. This is a remote MCP server, completely free to use. Now Claude Code is able to search the CopilotKit documentation. Super cool. Okay, so now armed with this MCP
server, we can build any kind of agentic experience that we want on top of this starter template. It provides a really good foundation for us. So yeah, I'll send in a request here, I'll show you what it looks like to build on top, and then I'll even show you how far I went, taking a RAG AI agent that I built with Pydantic AI on my channel previously and building a full application around it with AG-UI. We'll get there in a second, but right now I just want to send in a simple request. First, I'll actually tell it to use the CopilotKit MCP server to understand how to build this feature. Maybe I don't have to call that out explicitly, but I just want to make sure that it leverages the MCP. So, what I want to do here, if I go within the source into page.tsx,
one of the things we have here is the CopilotKit action. This is our front-end tool that we're passing into our agent to give it the ability to change the theme. And I want to add another tool here to do something else; let's say, maybe, clear all the proverbs that we have on the front end. I think that's a good example. So I'll say I want to make another front-end tool with CopilotKit to clear the proverbs that we currently have. And since we have that state sync as well, the agent will immediately recognize that everything is cleared in the front end.
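As an aside, the reason the backend agent can call a tool it never defined is that frontend tools travel with the request: each AG-UI run input carries the tool definitions the frontend registered, so the backend discovers them at run time. A rough sketch of that payload shape (field names simplified for illustration, not the exact wire format):

```python
# A tool the frontend registered (e.g. via a CopilotKit action); its
# definition is shipped with every run request to the backend.
frontend_tools = [
    {
        "name": "clearProverbs",
        "description": "Clear all proverbs currently shown in the UI",
        "parameters": {"type": "object", "properties": {}},
    }
]

# Illustrative run-input payload the frontend sends for one agent run:
run_input = {
    "threadId": "t1",
    "runId": "r2",
    "messages": [{"role": "user", "content": "clear the proverbs"}],
    "tools": frontend_tools,            # backend sees these without hard-coding them
    "state": {"proverbs": ["SQL is hard."]},
}

# The backend simply forwards these tool definitions to the LLM; when the
# model calls clearProverbs, the tool call streams back and runs in the UI.
tool_names = [t["name"] for t in run_input["tools"]]
```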
And so I'm going to go ahead and send this in right now. And after maybe looking at some of the files we have in our starter template here, or, actually, yeah, right away it searches the CopilotKit docs. Take a look at that: useCopilotAction, custom actions, front end. And we can press Ctrl+O to expand and actually see the chunks, or the snippets as they call them, that were retrieved from this RAG implementation. It's really, really cool. So, we're pulling in the documentation from CopilotKit. That's a part of our context engineering here, giving our coding assistant all the information it needs to actually build out the implementation. Now, this is a very simple example, so it could probably just look at what we already have for a front-end tool and then implement that as well. But I hope you can see that if we didn't have any front-end tools at all, it would have no idea how to use useCopilotAction, so it would need to search through the documentation like we are doing here. And so, yeah, let me actually scroll down. Let me exit out of this. There we go. All right.
Yeah, there we go. It is adding in a clear-all-proverbs tool. And so, when this is invoked from our agent, it's going to, yeah, just clear the proverbs in the state that we have, and that'll immediately be synced to the back end as well. Looking really, really nice. So cool. Yeah, I'll pause and come back once this is fully implemented. There we go. I just paused for like 20 seconds and then we were done. So we have that new action, and then we just have the button that it added for us to be able to clear things right here. So there we go, we got our updated UI. So now I can say, add a bunch of proverbs.
All right, so we'll have to populate it first, obviously. So the agent has generated some proverbs. There we go. We have... oh wow, it is adding a lot. All right, so we added a lot of proverbs here, and now I can clear all of them.
And so I'm going to send this in. Click
this button here. Or I could just ask it
to clear the proverbs. So, well, I'm
actually going to click this button
here, clear all proverbs. Uh, there we
go. And now we have that state sync. And
so, I can say, what proverbs do I have?
And it's going to say that there are
none. There we go. It is currently
empty. And I can also show you. So, I'm
going to say add some more. I'll also
show you that like I can have the agent
clear them as well. So, it doesn't just
have to be this button. So, I can say
clear the proverbs. And so just based on
a simple text request to our agent, we
can do the same thing that that button
did. And so I hope that this just helps
you see how our agent is starting to interact with the website. We can build these agentic experiences where AI agents can actually help us navigate through a website as well. This is a really simple example where it performs the action of a button click, but this could be a full onboarding experience where the agent is actually walking me through clicking on different buttons in my app depending on what I'm talking to it about. Like, oh, I have this question, and then it's like, okay, well, let me click on this button and then highlight this section of the website. There are just so many things that we can do with this. And those are the higher-level principles that I wanted to speak to in this video. It's cool to see AG-UI and Pydantic AI specifically, but the most important thing that I'm trying to get across here is the general principles: we need a way to sync the state between the front end and the back end, because the agent needs to know where we're currently at in our front end. And that is what AG-UI makes really easy for us, along with the fact, of course, that our back end is so incredibly simple thanks to AG-UI. Generally, you'd have to build an entire back end with API endpoints and middleware and everything.
But our back end right now is literally just a single file. We have our agent.py, and this is our Pydantic AI agent, where we have our agent defined and we're giving it some tools; it's going to look very similar for other frameworks as well. We have our primary system prompt. And then the only thing we have to do to turn this Pydantic AI agent into a fully working API endpoint, compatible with AG-UI and our CopilotKit front end, is to call this to_ag_ui function. So, so easy. And now our agent is running on port 8000 and it is good to go. And we have everything handled here: the state syncing, the conversation history, and we can even dynamically adjust the system prompt based on the front end. There's so much integration that we have here for literally a hundred lines of code. This is just so, so easy. And so there are a lot of things that are made possible thanks to AG-UI; not because we couldn't build them without the protocol, but because it just makes it so easy to do so. And it's the same kind of thing with the Model Context Protocol. I mean, we can connect agents to any tools, but MCP just makes it very easy and accessible. That's what we have here.
And like I said I would, I even went as far as to take an existing Pydantic AI agent that I built on my channel previously and create a full application around it with AG-UI and CopilotKit. And that's what you're looking at right here. I'll link to the video right here where I built this RAG agent; I built it with Claude Code and a team of subagents. And in that video we were just using the command line, a simple CLI tool, to talk to our agent. So it was a perfect opportunity to take an existing agent and, barely having to change the code at all thanks to the Pydantic AI and AG-UI integration, build out this full app.
And so now I can ask it a question. It's a RAG agent, so we have this knowledge base with all this information about AI startups. I can ask it about OpenAI's funding, for example, and we'll get the answer on the right-hand side like a normal chatbot; it's got streaming and conversation history and everything. But then it also takes all of the chunks that were returned from the knowledge base and populates the front end with them. So we have this super interactive RAG agent where we can actually view the chunks that it retrieved, see the match percentage, and click in to view the contents of the chunks and all the metadata, like the document each one came from. I mean, this is definitely taking a typical RAG agent that you would have in just a chat interface and taking it to the next level, where we can actually see, under the hood, what information it's using to give us the answer that we have in the chat box.
So really, really cool, and man, was it easy to build this out. I'll just show you the code really quickly here. I had to define some classes here; that's for the state sync, so that the chunks that we have in the front end are synced with what the agent knows it's displaying. And then I defined my agent very much the same way I always do with Pydantic AI, giving it tools as well to search my knowledge base, both with semantic search and hybrid search (adding a keyword search). And so I'll scroll down here. We have the system prompt as well. This is also dynamic: all of the chunks that we have as a part of that state sync through AG-UI, we're passing those in as a part of the system prompt.
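That dynamic-prompt pattern is simple enough to sketch in plain Python: whatever chunks are currently in the AG-UI shared state get folded into the instructions on each run. In Pydantic AI this would live in an instructions function on the agent; the field names here are just for illustration:

```python
BASE_PROMPT = "You are a RAG assistant. Ground answers in the chunks below."

def build_instructions(state: dict) -> str:
    """Rebuild the system prompt from the state synced via AG-UI."""
    lines = [BASE_PROMPT]
    for chunk in state.get("retrieved_chunks", []):
        # Include the source document and match score alongside each chunk,
        # mirroring what the frontend renders in its chunk cards.
        lines.append(f"- [{chunk['document']}] ({chunk['score']:.0%} match): "
                     f"{chunk['content']}")
    return "\n".join(lines)

prompt = build_instructions({
    "retrieved_chunks": [
        {"document": "openai.md", "score": 0.91,
         "content": "Example chunk text about OpenAI's funding."},
    ],
})
```

Because the same state drives both the UI and the prompt, the agent always reasons over exactly the chunks the user is looking at.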
And then all I have to do, just like in our starter template, is call to_ag_ui on the RAG agent, and now we have the full backend infrastructure spun up with the agent running on port 8000. Absolutely beautiful. And we could swap this out for a LangGraph agent and not even have to change the front end at all. That is the beauty of AG-UI. And to build all of this, all I did was follow the quick start here in the CopilotKit documentation for Pydantic AI. They have instructions specifically for how you can take an existing Pydantic AI agent and turn it into something that's compatible with AG-UI. So I used this as a reference point. And then I went into Claude Code, took that starter template that I already showed you building on top of, put in the code for my existing agent, hooked in the vibe coding MCP server, and just had it go ham. So yeah,
using the MCP for documentation and the existing template to build on top of, I just told it exactly what I wanted: I want you to take the CopilotKit application and make it work with this RAG agent so I can actually see the chunks and all the metadata. And it knocked it out of the park; it was so easy to build this out. So there you have it. That is everything you need to know to get started building agentic experiences with AG-UI. Now, here's the thing: Pydantic AI, AG-UI, CopilotKit, these are just tools to get the job done. The higher-level principles are what I really want you to focus on: things that I covered here like human in the loop, front-end tools, and state syncing. That's what you really need to make these agentic experiences. AG-UI is just the protocol to standardize things and make it that much easier to build out this kind of application. I literally built this out in like a half hour after I brought in my existing agent; it was so easy to do with the help of an AI coding assistant. So there you go. If you appreciated this video and you're looking forward to more things AI agents and AI coding, I would really appreciate a like and a subscribe.
Support AG-UI by giving them a star on GitHub! https://github.com/ag-ui-protocol/ag-ui

There’s a good chance you’re stuck on the idea of building apps or SaaS platforms around agents - but that’s not where the future is heading. The real shift is embedding agents *into* your applications so they become a natural part of the product experience. When the AI hype bubble bursts, what’s going to be left over are the products that are not competing as agents - instead the agent(s) are deeply woven inside a product that delivers its own unique value.

In this video, I’ll show you how to build these kinds of applications and the tech stack that makes it simple. It all starts with AG-UI, the protocol that standardizes how AI agents connect to applications. You can use any frontend client that supports AG-UI - I’ll use CopilotKit. And you can pair it with any AI agent framework - of course I’ll use Pydantic AI here since it’s my favorite!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- If you want to completely explode your productivity with AI coding assistants in a single session, check out this exclusive workshop I'm hosting soon (limited spots available!): https://dynamous.ai/ai-coding-workshop
- Pydantic AI + AG-UI + CopilotKit RAG Agent: https://github.com/coleam00/ottomator-agents/tree/main/ag-ui-rag-agent
- AG-UI Demos I showed in the video (Special thanks to them for working with me on this video!): https://dojo.ag-ui.com/pydantic-ai
- Pydantic AI docs for AG-UI: https://ai.pydantic.dev/ag-ui/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
00:00 - The Future of SaaS is Agentic Experiences
01:23 - Introducing AG-UI - Connecting AI Agents to Apps
02:49 - Pydantic AI + AG-UI Integration
03:26 - Agentic Experiences Demo
05:51 - AG-UI Quickstart w/ CopilotKit + Pydantic AI
09:16 - CopilotKit Vibe Coding MCP
10:59 - Improving Our AG-UI App
14:34 - The Principles of Agentic Experiences
16:49 - Building a RAG Agent App with AG-UI
19:56 - Final Thoughts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Join me as I push the limits of what is possible with AI. I'll be uploading videos weekly - Wednesdays at 7:00 PM CDT!