OpenAI just released the agent builder.
This makes it easy to build complex
agentic workflows by dragging nodes onto
a canvas. So in this crash course,
you'll learn everything there is to know
to get up and running with the agent
builder. You'll learn how to build a
customer-facing assistant which you can embed into any application using ChatKit. So, for example, we can click on
this little chat button and we can start
interacting with our agent. Then we'll
have a deeper look at the agent builder
by building this deep research agent.
And this uses a lot of very cool
features like agents, setting state,
human in the loop approval, running
agents in a loop, and we'll cover some
other cool features along the way. But
first, let's have a look at what the
agent builder is, and more importantly,
what it is not. I've seen a lot of
content saying this is the Zapier killer
or the n8n killer, and that couldn't be further from the truth. Whereas n8n and
Zapier are process automation tools with
some AI functionality, the agent builder
puts more of an emphasis on building
agent workflows using the Agents SDK from
OpenAI. It forms part of what is called
the agent kit. And the agent kit
includes a whole bunch of features that
developers can use to build agentic
solutions in their applications. Agent
kit comes with the agent builder which
is a visual tool that we can use to
prototype our agent workflows. It also
comes with a connector registry which is
a central place for admins to manage how
data and tools connect to OpenAI
products. And finally, it comes with
chat kit, which is a tool for embedding
these agents into your products. And
that is what we use to embed the agent
into this website. I think a lot of
creators are creating a misperception of
what this tool is intended for by
calling it the Zapier or n8n killer, or by suggesting that this tool offers built-in integration capabilities. It's really up to the
developer to export these workflows and
then add those into their applications.
As a really simple example, if we assign
a tool like the get weather tool, we can
of course provide the JSON schema that
the agent will use to call this tool,
but it will still be up to the developer
to actually implement the execution
logic for this tool call. This will not be handled by the Agent Builder.
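To make that concrete, here's a minimal sketch of what that execution side could look like with the TypeScript Agents SDK. The tool name, schema, and the weather lookup itself are placeholders made up for illustration; the point is that the execute function is code you write yourself, outside the Agent Builder.

```typescript
import { Agent, run, tool } from '@openai/agents';
import { z } from 'zod';

// Hypothetical get_weather tool: the JSON schema defined in the Agent Builder
// only tells the model how to call it; the execute() body is up to you.
const getWeather = tool({
  name: 'get_weather',
  description: 'Get the current weather for a city',
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    // Your own implementation goes here (call a weather API, read a database, etc.).
    return `It is currently sunny in ${city}.`;
  },
});

const assistant = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  tools: [getWeather],
});

const result = await run(assistant, 'What is the weather in Cape Town?');
console.log(result.finalOutput);
```

This mirrors what the Code tab exports later on: the workflow becomes Agents SDK code, and any custom tools still need their execution logic filled in by you.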
So, if you want a very simple way to prototype Agents SDK-based applications, then this tool is perfect for you, and in this video you'll learn all the ins and outs of using it. To start, go to platform.openai.com/agent-builder.
Initially, you won't see any workflows.
So, let's create one. Let's click on
create. And this will give you this very
simple canvas with a start node and an
agent node. On the left hand side, we
can see all of the available nodes.
Then, in the top right, we have a few
options like we can duplicate or rename
these workflows. I'm actually going to
rename this one right now. So let's call
this agent basics and let's save this.
Then we can also run evaluations against
this agent. If we click on code, we can
see the different ways of integrating
this workflow into our project. We can
use ChatKit to embed the agent into
something like a website. And if you're
familiar with using the Agents SDK, this will generate all the behind-the-scenes
code for you. So all you really have to
do is copy this code. Of course, you can
switch between Typescript and Python,
but you can simply copy this code and
add it to your project. And of course,
if you're using agentic coding, you can simply take this code and pass it to something like Claude Code or Bolt or
Lovable and ask it to add this agent to
your project. The preview button will
allow us to test out this workflow. And
once we're done, we can go ahead and
publish our workflow, which will allow
us to use it in our applications. By
default, we start off with a start node
and an agent node. We can delete a node
either by clicking on it and then
clicking on the trash icon behind my
face. Alternatively, we can just press
backspace and that will remove the node
as well. Now, let's have a look at the
start node. In the start node, we
receive one input variable called input_as_text. We will be able to reference
this variable anywhere in our workflow.
So whenever we want to see the original message that was passed by the user, we can reference it. We can also add global state variables in this state variable section. So let's
click on add and for instance let's add
a new variable called name and let's add
a value called Leon. Then let's save
this. So with this single node, let's
actually click on preview and let's just
say hey. Of course this is not very
helpful. We can see the start node was
called but no response is being written
out in the chat window. If we wanted to
receive a response in a structured
output, what we can do is add an end
node. And by the way, there are a few
ways in which we can add nodes. We can
drag and drop it like I just did or we
can actually just grab the output from
this node and just drop this on the
canvas. And then let's select our end
node. Cool. Now this end node is
optional. If the final node in your
workflow is an agent, then you really
don't need an end node; you can simply stream back the response from the agent. But if
you would like to return structured
output or see additional values that
were generated in your workflow, then
the end node is perfect for that. Let me
show you. So we can add an output schema
to this. Then let's give this a name
like response. And now we can define the
structure in two different ways.
Either we can add properties using this
UI or we can go to advanced mode and
simply drop in a JSON schema. Because this is so simple, I'm just going to add
a new property and let's call this name.
We also need to assign a type and this
is of type string. And then for the
value we can of course just hardcode
something or we can change this to
select and this will allow us to select
from any of the variables in this
workflow. So remember we had that input_as_text variable, which is basically the input message from the chat window, or we
can also grab our state variables like
the name property. Of course we can add
more properties if we want or we can
click on update and if we run preview
now let's say hey we can see the start
node executed followed by the end node
and then we get the response in the
structured format. So we get the name
back as Leon which is coming from our
state variables. We will be using state
quite a lot during these workflows.
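Before moving on, here's roughly what you'd paste into that advanced schema field for the end node, written here as a TypeScript object literal (the editor itself takes plain JSON). The exact wrapper the Agent Builder generates may differ slightly; this is just the standard JSON Schema shape for an object with a single string property.

```typescript
// A minimal JSON Schema for the end node's structured output.
// The name field is populated from state.name ("Leon") when the workflow runs.
const responseSchema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
  },
  required: ['name'],
  additionalProperties: false,
} as const;
```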
Right? Let's move on to building our
first agent. So let's drag in a new
node. Let's select agent. And now we can
rename the agent to something like
assistant. And now we can enter the
system prompt. So I'm actually going to
expand this. And in it, I'll say: you are a
helpful assistant called John. And then
I'm just going to add some context. So
we can say you are chatting to. And now
I want to refer to that state variable
containing my name. So there's a couple
of ways to do that. The more technical
approach is to enter double curly
braces. And this will bring up all the
variables in the workflow. Or an easier
way is to click on add context and then
selecting that variable. Cool. Let's
also say always refer to the user by
their name. Cool. Let's save this. And
if we wanted to, we can actually add
more messages. By default, this will add
a user message, but we can switch the
role to assistant as well. And this way,
we can maybe do something like few-shot
prompting where we can simulate the back
and forth between the user and the
assistant. I'm going to remove this.
Then we have the option to include the
entire chat history in the conversation.
I'll just leave this enabled. And of
course, we can select our model. So,
I'll just select GPT-5 mini. And the model options are limited to OpenAI's models. Again, this is using the Agents SDK from
OpenAI. And this leans heavily into
using the OpenAI ecosystem. And that's
why it's so painful to see other
creators call this a Zapier or n8n or Flowise killer, because this is not
aimed at that audience at all. This is
really a wrapper around the Agents SDK.
Anyway, let's move on to reasoning
effort. I'll leave this at low. And then
we can also assign tools. So under
tools, we have a few options. We can
integrate with ChatKit client tools, MCP
servers, file search, web search, and we
even have access to a code interpreter.
We can also define local functions or
custom tools. For this, let's actually
add web search. In here, we can specify
specific websites if you wanted to
narrow the search. And of course, we can
add localization details like the
country, time zones, etc. I'm just going to leave everything blank and click on add.
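For reference, the same assistant with hosted web search attached looks roughly like this in the Agents SDK. This is a sketch assuming the JavaScript SDK's hosted web-search helper; the instructions are shortened to the prompt we just wrote.

```typescript
import { Agent, run, webSearchTool } from '@openai/agents';

// Assistant agent with OpenAI's hosted web search attached, mirroring the
// node we just configured in the Agent Builder.
const assistant = new Agent({
  name: 'Assistant',
  model: 'gpt-5-mini',
  instructions:
    'You are a helpful assistant called John. Always refer to the user by their name.',
  tools: [webSearchTool()],
});

const result = await run(assistant, 'What is the latest news from OpenAI?');
console.log(result.finalOutput);
```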
All right, cool. Let's go ahead and preview this agent. And let's say hey, and look at that: the agent is greeting me using my name. And let's ask it to do a web search, like what is the latest news from OpenAI.
All right, we can see the assistant is
running and it is indeed performing a
web search and finally we get our search
results. Awesome. Right, before we move
on to implementing the research agent,
let me show you how you can now
integrate this simple assistant into any
website. Well, the first thing we need
to do is click on publish. Then let's
give our agent a name and click on
publish again. This will give you this
workflow ID. Now, I also want to mention
that the version changed from draft to
version one. And that's a really cool
feature about agent builder. It already
includes version control. So, of course,
we can deploy multiple versions after
this and roll back to a previous version
if needed. Either way, back to code. We
can choose to embed this using ChatKit or the Agents SDK. Now, because I want to embed this chatbot into my website, I'll simply go to ChatKit. Optionally,
we can provide the domain from which
this workflow will be accessed. This
will prevent other people from
integrating our agents into their
projects. Since we live in the era of
agentic coding, adding this to your
project is really simple. The first
thing you'll need is your OpenAI API
key. You can get that from
platform.openai.com/api-keys. Then, let's create a new key. I'll
give it a name like agent builder
tutorial. Let's click on create secret
key and let's copy this. And now simply
make a note of it, as you will have to pass it to your coding agent if you are using something like Lovable or Bolt. Since I'm building a local Next.js project, I'm simply going to create a new .env file. Then in this file, I'll create a new variable called OPENAI_API_KEY and then pass in that key. Now we
also want to note down this workflow ID.
You can pass that to your coding agent
or, if you're following along locally, in the .env file call it something like OPENAI_CHATKIT_WORKFLOW_ID and set it equal to this workflow ID.
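If you're following along in code, reading those two entries from a Next.js server route ends up looking something like this. The variable names assume you called them OPENAI_API_KEY and OPENAI_CHATKIT_WORKFLOW_ID, as above.

```typescript
// Server-side only: read the secrets from .env and fail fast if either is missing.
const apiKey = process.env.OPENAI_API_KEY;
const workflowId = process.env.OPENAI_CHATKIT_WORKFLOW_ID;

if (!apiKey || !workflowId) {
  throw new Error('Missing OPENAI_API_KEY or OPENAI_CHATKIT_WORKFLOW_ID in .env');
}
```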
Cool. So now we can close this file, of course. Now we can spin up our coding agent, which could be Bolt or Lovable, or locally it could be Cursor, Claude Code, or Codex; it really doesn't matter. I'll simply spin up Claude Code. And by the
way, if you want to learn all the ins
and outs of Claude Code, then check out my Claude Code masterclass video over here; it's also linked in the description. Ideally, we want to give
our coding agent a little bit more
context on what chatkit actually is and
how to use it. So, what I recommend is
creating a new folder somewhere. I'll
just call it docs. And within this, I'll
create a subfolder called OpenAI. And
within this a file called chatkit.md.
Then I'll also link to this page in the
description. These are the instructions
for embedding ChatKit. It gives us all of this example code: the backend logic as well as the logic for the front end.
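To give you a feel for the backend piece, here's a heavily simplified sketch of a Next.js route handler that creates a ChatKit session for the published workflow. Treat the endpoint path, beta header, and response fields as assumptions to verify against the chatkit.md docs you just saved; the example code from OpenAI is what your coding agent should actually follow.

```typescript
// app/api/chatkit/session/route.ts (sketch; verify details against the ChatKit docs)
import { NextResponse } from 'next/server';

export async function POST() {
  // Ask OpenAI for a short-lived ChatKit session bound to our published workflow.
  const res = await fetch('https://api.openai.com/v1/chatkit/sessions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
      // Beta header taken from the ChatKit docs; confirm the current value there.
      'OpenAI-Beta': 'chatkit_beta=v1',
    },
    body: JSON.stringify({
      workflow: { id: process.env.OPENAI_CHATKIT_WORKFLOW_ID },
      user: 'demo-user', // any stable identifier for the end user
    }),
  });

  const session = await res.json();
  // The frontend ChatKit widget consumes this client secret to open the chat bubble.
  return NextResponse.json({ client_secret: session.client_secret });
}
```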
Now all we have to do is copy this page and paste it into this file. Now let's tell our agent: I would
like to embed my agent workflow into
this app. Below you will find the
documentation on implementing ChatKit. The OpenAI API key and the workflow ID are available in the .env file, and I'll just grab those variable names. Then let's also
say this should be accessed using a
bubble on the bottom right of the screen
that will open and close this chat
window. And that's really it. I can just leave thinking enabled and fire off this instruction. And I just noticed I
never actually included the
documentation. So what I'll do is simply
grab the docs that I stored and add it
to the chat window or as I mentioned you
can simply paste all the content from
the website directly into the chat
window. So now that we have this context
we can send this and the agent should be
able to implement this solution and
afterwards we'll have something like
this where you have this button where we
can open and close the chat window and
of course we can interact with our
agent. So let's say hey and we get our
response back from our agent workflow.
Cool. Now before we move off from this
simple workflow, there is one more
feature I want to show you and that is
this guardrails node. So when users are
interacting with your agent, you might
want to first look at the content coming
in to perform some moderation on it. For
example, we might want to identify
personal information or block harmful
content using moderation. In fact, let's
enable this toggle. Then we can break
this connection between these two nodes
by clicking on this line, also called an
edge, and then simply deleting it. Then
let's connect the start node to
guardrails. And then guardrails will
look at the message from the chat
window. And by the way, we can change
the variable that we want to look at.
But in this case, we just want to look
at the message from the chat window and
see if there's any issue with it. If
moderation fails, we'll go down this
fail path. So I'll just connect an end
node. So basically, we'll just
immediately terminate the workflow. But
of course, you can trigger another agent
that could provide a message back to the
user saying something like, "I'm sorry,
but I can't help you with that." Or if
we pass moderation, we'll move on to the agent.
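If you ever need something similar outside the Agent Builder, a rough code-level equivalent is to run the incoming message through OpenAI's moderation endpoint before the agent sees it. This is not what the guardrails node does internally, just a comparable check, sketched here with the official openai npm package.

```typescript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Run the user's message through OpenAI's moderation endpoint before the agent sees it.
async function passesModeration(message: string): Promise<boolean> {
  const result = await client.moderations.create({
    model: 'omni-moderation-latest',
    input: message,
  });
  return !result.results[0].flagged;
}
```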
Next, we'll build something a lot more complex. We'll build a deep research
agent that will really teach you the
fundamentals of using the agent builder.
Right? So, let's create a new workflow.
And how this will work is a user will
provide a topic in the chat window. And
we then want our agent to come up with
several keywords to research. And then
our deep research agent will actually
perform research on each of those
keywords. And finally, we'll spit out a
detailed report covering all of the
research data. On the start node, let's
add a new state variable and we'll call
this topic. And by default, the topic
will be empty. Let's save this. Then I'm
going to delete this agent node. And
actually, let's also rename this workflow to
deep research demo. Cool. So what will
happen is the user will provide some
topic like what are the differences
between GPT-5, GPT-5 nano, and GPT-5 mini.
The first thing I want to do is when the
user sends this, we want to store this
topic in our state variable. So what
we'll do is after start we'll add a new
node and let's add the set state node.
And you'll see me use the set state node
quite a few times during this tutorial.
In the agent builder, everything really
relies on you setting and retrieving
state. So let's click on this node and
what we'll do is assign a value. And
this is really easy. We'll simply grab
the input from the chat window and we'll
assign it to our topic state variable.
Now that we have our topic, we want to
use an agent to come up with different
keywords that we can use to research
this topic even further. So, of course,
let's add an agent node. And then let's
rename this agent to keyword agent. And
under the instructions, let's say your
role is to generate three keyword
phrases based on the provided topic.
This will be used to perform further
research into the topic. Only return the
list of keywords. And of course, we'll
provide our topic. And to get that topic
variable, all we have to do is click on
add context and select topic from our
state. I actually also want this value
to be dynamic. So I'm going to go back
to start. Let's add a new variable. This
time we'll add a number. And let's call
this number of keywords. And then for
the default value, let's make it three.
Let's save this. So then back in our
agent, let's replace this three with a
variable. So let's click on add context.
Let's grab number of keywords. I'm just
going to cut it from the bottom and add
it over here. So now we have a dynamic
value that we can adjust using the start
node. All right, let's save this. For
the model, I'll change it to GPT-5
mini with low reasoning. That should be
fine. I do want to change the output
though. Instead of the agent returning
text, we actually want an array of
values so that we can iterate through
that list when we're doing our research.
So under output format, let's change it
from text to JSON. Then let's add a
schema. And again, we can try to
manually build the schema using this UI.
But to keep things interesting, let's
change this from simple to advanced. And
now we can ask the agent builder to
create a schema for us. So under
generate, let's say the agent should
respond with a list of keyword phrases
as text. Let's click on create. And this
returns this object which contains an
array of keywords. Let's update this.
And of course, we can have a look at it
at any time. So we can see we get this
array back containing keywords. Cool.
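The generated schema ends up looking roughly like this, written as a TypeScript object literal (the advanced editor holds the equivalent JSON). The exact wording the generator produces may differ, but the shape is an object wrapping an array of keyword strings.

```typescript
// Approximate shape of the keyword agent's structured output schema.
const keywordSchema = {
  type: 'object',
  properties: {
    keywords: {
      type: 'array',
      items: { type: 'string' },
    },
  },
  required: ['keywords'],
  additionalProperties: false,
} as const;
```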
Let's run this in preview. And let's
send our topic. We can see our agent is
running and it is indeed responding with
this object containing an array of
keywords. Cool. Another useful feature
is we do have observability with these
agents. You will notice that below each
response we get this link and if we
click on it, these logs will show us
exactly what happened behind the scenes
and indeed we can see we received this
JSON object. Right? Let's continue. So
now that we have these keywords, we want
to save these results in our state. So
again, we'll go back to our start node.
Let's add another property and let's
call this one keywords. This will be of
type object. And of course, now we have
to add our schema. And you might be
wondering what should the schema look
like? Well, thankfully that's really
simple. We can simply match the schema
to whatever this response schema from
the agent is. So, we could try to
manually map it like this or we can go
to advanced. Let's just copy all of
this. Then, let's go back to start.
Let's add our property. Let's select
object. I'm just going to call this
keywords. Let's add our schema. Let's
click on advanced and let's paste in
that schema. Cool. Let's click on
update. And now it should be really easy to
map the response from the agent to the
state variable. So after the agent,
we'll again add our set state node. So
let's first select our variable from our
global state. We want to grab this
keywords object. And for the value,
let's grab the keywords array within
the agent's parsed output. I'm just going to run
preview to make sure everything is still
working. So let's run this. And we were
able to execute set state without any
error messages. Of course, if we wanted
to see exactly what happened in set
state, we can actually simply add an end
node. Then under schema under
properties, I'm just going to add
keywords which is of type array. And
then for the value, let's select select
and let's grab our keywords array from
state. Let's update this. And now in the
preview window, we should be able to see
the list of keywords from our workflow
state, which we do. So now that we have
our list of keywords, we want to iterate
over each phrase to perform a web
search. So how do we iterate over these
values? Well, what we can do is add this
while node. Then anything we add into
this node will be executed in a loop
until a certain condition is met. So
let's actually attach this. And now what
we want to do is add an expression that
will say that this needs to run for
however many items we have in our
keywords array. Now this is also a good
point to mention that the agent builder
relies heavily on the Common Expression Language (CEL). So we can click on this link
to learn more. So we can see a few
examples of what these expressions
actually look like. But to be quite
honest, ChatGPT is really well trained on
this already. So if you ever wanted to
learn how to write any of these
expressions, you can just go to ChatGPT and say, "Hey, I've got an array of values. Use Common Expression Language to loop through those items." So what we'll do is keep track of the current iteration, and we'll keep looping while that value is less than the number of items in our array. It's similar to a for loop if
you're familiar with programming. So
what we can do is go to our start node.
Let's add another value to state. This
will be a number and let's call this
current
iteration. By default, this will be
zero. Then let's save this. Then let's
click on our while node. And our
expression will be state.current_iteration. And now we can see a bunch of comparisons, so we'll just say: as long as it's less than the number of keywords. At this stage we know that the number of keywords is 3 and our starting position is zero. And the
reason we're starting at zero is the
zero position in the array will contain
the first value. Either way, just know
this will run three times. The first
thing we want to do in this loop is to
grab the keyword that we're currently
on. So to do that, we can add the set
state node. And now we have to assign
some value to some variable. Again we'll
just add another state value. So click
on start. Click on add. And this will be
a string. And what we can call it is
current keyword. And by default this
value will be blank. Let's save this.
Then let's go to this set state node and
let's select it from the list of
variables. Now for the value we'll
simply use the common expression
language again. And again, I simply used ChatGPT to generate these expressions for me. But what we can do is say state.keywords. And now we can specify
which record we want to grab from this
array by adding these square brackets.
And we can provide a number like zero
which will grab the first record in the
list. Or if we provide one, this will
grab the second record and for a value
of two, we'll grab the record in the
third position, etc. But of course, we
want this to be dynamic. We want to grab
the value based on the current iteration
that we're on. So again, we're also
keeping track of that in state. So let's
write state dot, and it was called current_iteration. Great. So now this is dynamically grabbing the value and storing the current keyword in state. So the next step is to now pass
this keyword to an agent to go and
perform research based on that keyword.
So I'm just going to break this edge.
Then let's add a new agent. And then for
the system prompt, let's say your role
is to perform deep research into a topic
using a specific keyword. Use the web
search tool to retrieve the three most
relevant articles related to the
keyword. Return a detailed search report
in markdown format. Then I'm just
passing in the topic as well. And of
course the keyword that we're currently
on. Let's save this. Then under tools,
let's add a web search tool. I'm not
going to specify anything here. Let's
just click on add. And that should be
good enough actually. Now at the moment
this is actually going to run in an infinite loop, because this while will only exit once the current iteration is no longer less than the number of keywords. So
what we need to do is increase this
value every time we run this loop. So
after the agent node let's run set state
and then for the value we'll say let's
grab the current iteration and we'll
simply add one to this value. And cool,
we should be able to give this a spin.
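If it helps to see the whole loop as code, here's the same pattern in plain TypeScript. The three steps inside the while body correspond to the set state node that picks the current keyword, the research agent, and the set state node that bumps the counter; researchKeyword is just a stand-in for the agent node. In the Agent Builder itself, these steps are CEL expressions along the lines of state.current_iteration < state.number_of_keywords for the while condition, state.keywords[state.current_iteration] for the current keyword, and state.current_iteration + 1 for the increment (the exact names depend on what you called the variables in the start node).

```typescript
// The research loop expressed as ordinary TypeScript for comparison.
async function runResearchLoop(keywords: string[]): Promise<string[]> {
  const reports: string[] = [];
  let currentIteration = 0; // state.current_iteration, starts at 0

  while (currentIteration < keywords.length) {
    // Set state: grab the keyword for this pass (state.keywords[state.current_iteration]).
    const currentKeyword = keywords[currentIteration];

    // Agent node: perform web research for this keyword.
    reports.push(await researchKeyword(currentKeyword));

    // Set state: increment the counter so the while eventually exits.
    currentIteration = currentIteration + 1;
  }

  return reports;
}

// Hypothetical stand-in for the research agent node.
async function researchKeyword(keyword: string): Promise<string> {
  return `Research report for "${keyword}"`;
}
```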
So, let's go to preview. Let's send our
prompt again. So, we have our list of
keywords. We can see the while is
executing and we're running our first
agent. The agent is currently searching
the web as well. And now the first agent
is generating its results. Now that that
agent's done, we've actually moved on to
the second agent. And of course, that
agent is now performing its research.
And we should see the results in a
second. You know what would be really cool? If we could approve those keywords before actually moving on to the research phase. The research phase can
take some time to execute because it's
going online. It's retrieving articles.
So a really cool optimization would be
to add some human in the loop
functionality here. So before we call
this loop, I'm actually going to break
this connection. Then let's add user
approval. So I'm going to connect this
node to user approval, and only if the
user approves the step will we move on
to this research phase. If they reject
this we'll simply end this workflow. So
under user approval let's say something
like would you like to proceed with
these keywords? All right, then let's
open preview. Let's run our query. And
this time the workflow actually stopped
at the user approval node. So we can see
our keywords and now it's saying would
you like to proceed with these keywords
or not. So if we reject this, the workflow will simply terminate. And I do want to mention you can actually inject dynamic values into this prompt as well. So
if you wanted to refer to those keywords
you can definitely do that as well. So
we can simply grab this value and inject
it into this text. All right. Our
workflow is looking really cool. So the
next step is to add one more agent. And
we'll just say your role is to
consolidate all the research that the
agents created before you to build out a
detailed and structured report for the
user based on their topic. So it could
look something like this. Then of course
let's add our topic into this. And
because this agent has a view of the
chat history, it's able to see all the
results generated by the agents before
it and that will allow it to generate
this final report. So, I really hope you
enjoyed this video. This is simply a
crash course and there's a lot more we
can do with OpenAI's agent builder. So,
please let me know in the comments if
you would like me to create more videos
on the agent builder and what specific
topics you would like me to cover. Also
remember to like this video and to
subscribe to my channel to stay up to date with more Agent Builder videos.
Also check out this other video and I'll
see you in the next one. Bye-bye.
Access ALL video resources & get personalized help in my community: https://www.skool.com/agentic-labs/about?ref=3fd61190e13d426dbf4f3b38adc7de69

Learn how to use OpenAI's Agent Builder in this comprehensive crash course. The Agent Builder makes it easy to create complex agentic workflows using a visual canvas interface. You'll discover how to build customer-facing assistants, implement human-in-the-loop approvals, and create a deep research agent that performs web searches and generates detailed reports. This tutorial covers the Agent Kit ecosystem including ChatKit for embedding agents into applications, state management, while loops, and integration with Next.js projects using the Agent SDK.

ACCESS AGENT BUILDER: https://platform.openai.com/agent-builder
CHATKIT DOCUMENTATION: https://platform.openai.com/docs/guides/chatkit
CLAUDE CODE MASTERCLASS: https://youtu.be/50tzzaOvcO0

SUPPORT THE CHANNEL:
Buy me a coffee: https://www.buymeacoffee.com/leonvanzyl
PayPal: https://www.paypal.com/ncp/payment/EKRQ8QSGV6CWW

CONNECT:
Subscribe for weekly AI automation tutorials
Follow on Twitter: https://x.com/leonvz

TIMESTAMPS:
00:00 - OpenAI Agent Builder introduction
00:43 - What Agent Builder is (and isn't) - Not a Zapier killer
01:41 - Agent Kit components: Builder, Connector Registry, ChatKit
02:38 - Creating your first workflow and canvas overview
03:35 - Renaming and managing workflows
04:11 - Understanding the Start node and Agent node
05:00 - End node and structured output setup
06:50 - Building your first agent with system prompts
08:00 - Adding web search tools to agents
09:20 - Publishing workflows and ChatKit integration
10:42 - Setting up OpenAI API keys and environment variables
11:47 - Using agentic coding tools (Claude Code, Bolt, Lovable)
13:29 - Embedding agents into Next.js applications
14:00 - Guardrails node for content moderation
15:03 - Building a deep research agent (advanced project)
16:00 - State management and variables
17:16 - Keyword agent with JSON structured output
19:03 - While loops for iteration
22:00 - Common Expression Language (CEL) basics
24:00 - Dynamic keyword processing in loops
26:00 - Human-in-the-loop approval workflow
27:20 - Final report consolidation agent

#agentbuilder #openai #aiautomation