Hi there, this is Christian from LangChain. Everyone is using coding agents these days. Apps like Cursor, Windsurf, or Copilot have changed the way we build applications. All of these apps have one thing in common, though: sooner or later, a human will step in. There's always a moment where we as developers want to step in and review and revise the steps the agent is about to take, especially when it comes to critical actions like removing a file. So in this video, we're going to build a LangChain agent right into a Next.js application that helps us send emails to our customers. We're going to add a human-in-the-loop middleware that lets us revise the email drafted by our agent before it's sent out to the customers. So, let's dive right in.
Before we dive into the code, let's recap how agents actually work. An agent runs in a continuous reasoning loop. It starts with an input, something like a user message or prompt, and then cycles through four main steps. First, the model reasons about what to do. Second, it decides whether to call a tool, like fetching data or composing an email. Third, the tool gets executed and returns a result. Lastly, the model observes that result, reasons again, and either takes another action or finishes with a final output. This reason-act-observe pattern is what allows agents to handle complex multi-step workflows, from retrieving information to performing real-world actions.
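In rough TypeScript pseudocode, that loop looks something like this (an illustrative sketch with placeholder types, not LangChain's actual implementation, which createAgent handles for you):

```typescript
// Illustrative sketch of the reason-act-observe loop, not LangChain's internals.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type ModelResponse = { content: string; toolCalls?: ToolCall[] };

declare const model: { invoke: (messages: unknown[]) => Promise<ModelResponse> };
declare const tools: Record<string, (args: Record<string, unknown>) => Promise<string>>;

async function agentLoop(input: string): Promise<string> {
  const messages: unknown[] = [{ role: "user", content: input }];

  while (true) {
    // Steps 1 + 2: the model reasons about what to do and decides whether to call a tool.
    const response = await model.invoke(messages);
    messages.push(response);

    // No tool calls means the model is done: return the final output.
    if (!response.toolCalls?.length) return response.content;

    // Step 3: execute each requested tool and append the result...
    for (const call of response.toolCalls) {
      const result = await tools[call.name](call.args);
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
    // Step 4: ...so the model observes it and reasons again on the next iteration.
  }
}
```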
In some of these cases, we don't want the agent to act completely on its own, though. For example, if it's about to send an email, post to an API, or make a system change, we might want a human to review the action before it runs. That's where human-in-the-loop, or HITL, comes in. It's a middleware layer that intercepts tool calls and adds a review step inside the agent's reasoning loop. Here's what happens when the agent proposes a tool call that matches a rule you define, say a tool called send email: the middleware raises an interrupt, the agent pauses its execution, saves its current state, and waits for human input.
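Under the hood, that pause relies on LangGraph's interrupt primitive. You don't write this yourself when you use the middleware, which raises the interrupt before a matching tool runs, but just to illustrate the mechanism, wiring it up by hand inside a tool would look roughly like this:

```typescript
import { tool } from "@langchain/core/tools";
import { interrupt } from "@langchain/langgraph";
import { z } from "zod";

// Illustrative only: the human-in-the-loop middleware manages this for you.
const sendEmail = tool(
  async ({ to, subject, body }) => {
    // Pause the run here: execution stops, state is checkpointed, and this value
    // is surfaced to the client. The run resumes when a decision is sent back.
    const decision = interrupt({ action: "send_email", args: { to, subject, body } });
    if (decision !== "approve") return "Email was not sent.";
    return `Email sent to ${to}`;
  },
  {
    name: "send_email",
    description: "Send an email to a customer.",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);
```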
At this point, the human reviewer can respond in three ways. We call these decision types. We can approve, which means we allow the tool call to run exactly as proposed. We can edit, which means we modify the tool call before it executes, for example to fix a recipient email address or tweak the email body. Or we can reject the proposal, deny the action, and send feedback to the agent explaining why it shouldn't proceed with its current draft. Once the decision is made, the agent resumes exactly where it left off, continuing its reasoning loop, but now with our approval or feedback guiding the next step.
Let's check out the React application and see what that looks like in real life. In this basic application, I have a chat interface that lets me trigger different agents using a simple text area field. On the left side, you'll find different agent scenarios that help me visualize agent behavior. For the human-in-the-loop agent, I can send specific customers an email asking them about their recent order.
Whenever I hit the send button, we're going to hit a Next.js endpoint, which allows us to parse the body of the request. Then we verify that we have a user prompt as well as an API key, so we can actually access a large language model.
Then all the magic happens in my human-in-the-loop agent function. That agent function contains a user database in the form of an object, and then defines a model that we want to use for our agent, a tool that helps us get the user's email, and a tool that helps us send that user an email. All we need to do to create the agent is call createAgent and pass it the model and these tools. As you can see, we don't define any middleware or human in the loop just yet; we just want to test out how it works.
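Based on that description, a first version of the agent function, without any middleware, might look roughly like this (the user "database", tool names, and model string are illustrative, and the createAgent options follow my understanding of the LangChain 1.0 API; the linked repo has the exact code):

```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A tiny in-memory "user database" as a plain object (illustrative).
const users: Record<string, { email: string; lastOrder: string }> = {
  Sarah: { email: "sarah@example.com", lastOrder: "Order #1042" },
};

// Tool that looks up a user's email address and recent order.
const getUserEmail = tool(
  async ({ name }) => JSON.stringify(users[name] ?? { error: "user not found" }),
  {
    name: "get_user_email",
    description: "Look up a customer's email address and recent order by name.",
    schema: z.object({ name: z.string() }),
  }
);

// Tool that sends the email (stubbed out here).
const sendEmail = tool(
  async ({ to, subject }) => `Email sent to ${to}: ${subject}`,
  {
    name: "send_email",
    description: "Send an email to a customer.",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);

// No middleware yet: the agent will call send_email without asking us.
const agent = createAgent({
  model: "openai:gpt-4o", // any supported chat model string or instance
  tools: [getUserEmail, sendEmail],
});
```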
So if I click on the example prompt, you can see that the agent decides to call the get user email tool to get the user's details and email address, and then sends that user an email right away. Now we want to interrupt this agent workflow to make sure we can revise the email before it goes out. To do that, we have to import the human-in-the-loop middleware from the langchain package.
Then, in our createAgent call, we add that middleware to the middleware property. There we say: every time the agent is about to call the send email tool, interrupt and allow the user to approve, edit, or reject the tool call.
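As a rough sketch, and with the caveat that these option names are my best guess at the JavaScript middleware API rather than something the video spells out (the example repository has the exact code), the call could look like this:

```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getUserEmail, sendEmail],
  middleware: [
    // Interrupt whenever the agent is about to call send_email;
    // get_user_email stays auto-approved.
    humanInTheLoopMiddleware({
      interruptOn: {
        send_email: {
          allowedDecisions: ["approve", "edit", "reject"],
        },
      },
    }),
  ],
});
```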
Another important point to make this workflow work is that we have to introduce a checkpointer to our agent.
A checkpointer helps us store the state of the agent at any given point in time. The checkpointer for this application comes from a utils module that essentially defines a checkpointer backed by a Redis database. So every time the agent calls a tool, it stores its state in that Redis database, and whenever we approve the call and make a second request, the agent continues where it left off.
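As a sketch, reusing the tools and middleware from above: the in-memory MemorySaver from @langchain/langgraph is enough for local experiments, while the example app swaps in a Redis-backed checkpointer from its utils module so state survives across API requests. Passing it through a checkpointer option is my assumption of how createAgent wires it up:

```typescript
import { MemorySaver } from "@langchain/langgraph";

// In-memory checkpointer for local experiments; the example app uses a
// Redis-backed saver instead so state persists between requests.
const checkpointer = new MemorySaver();

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [getUserEmail, sendEmail],   // tools from the earlier sketch
  middleware: [hitlMiddleware],       // the humanInTheLoopMiddleware from above
  checkpointer,
});

// Every invocation needs a thread_id so the checkpointer knows which
// conversation's state to save and restore.
const config = { configurable: { thread_id: "customer-email-1" } };
```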
Now, after adding the middleware, you will see that our input depends on whether or not we sent an interrupt response. Without an interrupt response, we just send a basic human message to the agent, which triggers the agent workflow. With an interrupt response, we pass a Command instance that resumes the agent workflow at the point where it left off. And all we do at the end is stream the results to the front end.
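Sketched out, with prompt, interruptResponse, and threadId coming from the parsed request body and agent being the one created above, that branching might look like this (Command and the streaming call follow the LangGraph JS API as I understand it):

```typescript
import { Command } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";

async function invokeAgent(prompt: string, interruptResponse: unknown, threadId: string) {
  // The thread_id ties this request to the checkpointed state in Redis.
  const config = { configurable: { thread_id: threadId } };

  // Without an interrupt response: send a plain human message, which triggers the workflow.
  // With one: resume the paused workflow exactly where it left off.
  const input = interruptResponse
    ? new Command({ resume: interruptResponse })
    : { messages: [new HumanMessage(prompt)] };

  // Stream the results to the front end.
  return agent.stream(input, { ...config, streamMode: "values" as const });
}
```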
So let's test this out with the new workflow. You will see that the agent still calls the get user email tool without asking for our permission. But now, for the send email tool, an interrupt appears and the agent asks us for approval. We can now reject, edit, or approve that tool call. Let's reject it and say: send the email, but speak like a pirate. We confirm the rejection, so the tool call failed, and we gave the agent some feedback on how it should change that tool call. In the second attempt, we now see that Sarah is greeted properly like a pirate, and I like this interaction with the customer. So let's approve it. This time the tool call passes and the email is sent. So that's how you integrate human-in-the-loop directly into a LangChain agent running inside a Next.js application. Human-in-the-loop is a powerful concept because it gives you the best of both worlds: agents that can think and act autonomously, but with human oversight where it matters most. It is especially useful for actions that carry risk or require judgment, like sending an email, updating records, or writing to external systems, where you want the model to propose an action but let a person make the final call.
What makes the middleware so dynamic is that you can decide exactly when and how you interrupt the agent. You can base it on the tool name, the arguments it's using, or even runtime context like the user's role or data sensitivity. That flexibility means you can scale from a simple approval flow to a fully customized review system.
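As a plain TypeScript illustration of that idea (not a specific middleware API, just the kind of decision function you would wire into whatever hook your setup exposes):

```typescript
// Illustrative decision function: interrupt based on the tool name, its
// arguments, or runtime context such as the user's role.
type RuntimeContext = { userRole: "admin" | "support" | "viewer" };

function shouldInterrupt(
  toolName: string,
  args: Record<string, unknown>,
  ctx: RuntimeContext
): boolean {
  // Always review outbound emails to external domains.
  if (toolName === "send_email" && !String(args.to ?? "").endsWith("@ourcompany.com")) {
    return true;
  }
  // Viewers never trigger write actions without review.
  if (ctx.userRole === "viewer" && toolName !== "get_user_email") {
    return true;
  }
  return false;
}
```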
If you want to try it out yourself, I've published the full example in a repository that you can clone today. Head over to github.com/christian-bromann/langchat, spin it up locally, and experiment with adding your own tools and approval logic. Thanks for watching, and have fun building safe, human-aware agents with LangChain.
Bring humans back into the loop 👩‍💻 — this tutorial shows how to integrate Human-in-the-Loop (HITL) middleware into your LangChainJS agents using createAgent.

You'll learn how to:
- Pause agent execution for human approval or correction
- Review and edit tool calls or model outputs before continuing
- Build safer, more controllable AI workflows in your apps

Perfect for customer-facing chatbots, internal tools, or any system where humans should stay in control.

📘 Example code: https://github.com/christian-bromann/langchat
💬 Learn more: https://docs.langchain.com

#LangChain #AI #Agents #HumanInTheLoop #JavaScript #Nextjs #AIengineering