Hey folks, my name is Sydney. I'm an open source engineer here at LangChain, and I'm super excited to introduce our new human-in-the-loop middleware. So first, let's cover the core agent loop. An agent is basically a model calling tools in a loop until it decides to provide a final response.
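As a mental model, that loop looks roughly like the sketch below. This is illustrative only, not LangChain's actual implementation; the tools dict and message shapes are assumptions made for the sketch.

```python
# Minimal sketch of the core agent loop (illustrative, not LangChain internals).
def agent_loop(model, tools, messages):
    while True:
        response = model.invoke(messages)  # model either answers or requests tool calls
        messages.append(response)
        if not response.tool_calls:
            return response  # no tool calls left: this is the final response
        for call in response.tool_calls:
            result = tools[call["name"]].invoke(call["args"])  # run the requested tool
            messages.append(
                {"role": "tool", "content": str(result), "tool_call_id": call["id"]}
            )
```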
We modify this loop slightly in order to add human-in-the-loop support. The human-in-the-loop middleware is super useful whenever you want to get human feedback before executing your tools. This is especially useful for tools that are either expensive or risky to run. So you can see in the diagram here: a model calls tools, and before we actually run those, we conditionally route them through our human-in-the-loop middleware node.
The middleware provides a couple of different ways a user can respond to a human-in-the-loop interrupt. The three most common are approvals, edits, and rejections. For example, in an email use case where an agent requests user input before actually sending an email, an approval would mean we send the email draft exactly as written, an edit would mean we change the recipient or the body of the email, and a rejection would mean rejecting the draft fully and perhaps providing a message back to the model explaining how to rewrite the draft. Let's jump into some code.
Okay, so here we see our send email example. We've written a basic send email tool with a recipient, subject, and body, and then we're creating an agent with that tool. We're using GPT-4o here, and a simple system prompt indicating that it's a helpful email assistant for me, Sydney, that can send emails.
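For reference, that setup looks roughly like this. The exact code is in the linked gist; the prompt wording and the tool's return value here are paraphrased for the sketch.

```python
from langchain.agents import create_agent
from langchain.tools import tool

@tool
def send_email(recipient: str, subject: str, body: str) -> str:
    """Send an email to the given recipient."""
    # Stubbed out for the demo; a real tool would call an email API here.
    return f"Email successfully sent to {recipient}"

agent = create_agent(
    model="openai:gpt-4o",
    tools=[send_email],
    system_prompt="You are a helpful email assistant for Sydney. You can send emails.",
)
```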
Let's simulate responding to this incoming email from Alice. We're going to jump over to LangSmith Studio here, which is a great way to interactively test out your agents. So, we're simulating sending this human message. We'll see that it goes to the model, which then calls the send email tool, and then we see that the email is successfully sent to alice@example.com, saying I'd love to grab coffee with you next week.
Now let's go back and add our human-in-the-loop middleware in just two lines of code. The first change we want to make is importing the middleware: from langchain.agents.middleware import HumanInTheLoopMiddleware. And then here we'll add middleware equals HumanInTheLoopMiddleware, interrupting on the send email tool with a simple true boolean. I'll also note we do have more configuration available for the human-in-the-loop middleware, so you can specify a list of allowed decisions, i.e., the approve, edit, and reject types. But for this case we can just use that simple boolean.
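Those two changes look roughly like this. This is a sketch based on the middleware docs linked below; the richer allowed-decisions form mentioned above is shown in the comment.

```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware

agent = create_agent(
    model="openai:gpt-4o",
    tools=[send_email],
    system_prompt="You are a helpful email assistant for Sydney. You can send emails.",
    # Interrupt before every send_email call. For finer control you can pass a
    # config instead of True, e.g. {"allowed_decisions": ["approve", "edit", "reject"]}.
    middleware=[HumanInTheLoopMiddleware(interrupt_on={"send_email": True})],
)
```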
Now, let's go back to Studio and
experiment with responding to incoming
emails.
So, we're back in Studio. We see our graph has been updated here. I'll start a new thread, and let's simulate responding to that same kind of low-stakes email from Alice.
So, again, we see the streaming of the tool call, but this time we actually get an interrupt raised in the human-in-the-loop middleware. In this case, again, it's pretty low stakes, and I think this draft looks great. So, we can go ahead and respond with a simple approval decision and resume.
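Outside of Studio, resuming with an approval would look something like this. This is a sketch based on the middleware docs, assuming the agent runs with a checkpointer so the thread can be resumed; Studio builds the same resume payload for you in the UI, and the thread_id here is just a placeholder.

```python
from langgraph.types import Command

# Resume the interrupted run with an approve decision.
agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config={"configurable": {"thread_id": "1"}},  # placeholder thread id
)
```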
And then we see the email is successfully sent. Okay. Now,
let's simulate responding to a slightly
more consequential email. In this case,
we're responding to partnerstartup.com
asking me to sign off on the $1 million
engineering budget for Q1. So, we'll
simulate the agent.
Again, we see that action request. In this case, the draft indicates that we've reviewed and approved the proposal. I think we probably want to exhibit a little more restraint here. So, let's simulate an edit response to the human-in-the-loop request.
So, I've pasted another decision body here. This time, it has type edit, and then an edited action that follows the same structure as that email tool call: the name is the name of the tool, send email, and the args are the arguments we saw in that tool call signature. So I'm sending it to the same recipient with a brief body, and when we resume with this edit decision, we should see the tool still executed, just directly with these edited args. So the next step here should be the tool node.
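That edit decision looks roughly like this. This is a sketch; the recipient, subject, and body values are illustrative placeholders, not the exact text from the demo.

```python
from langgraph.types import Command

# Resume with an edit decision: same tool name, replacement args.
agent.invoke(
    Command(resume={"decisions": [{
        "type": "edit",
        "edited_action": {
            "name": "send_email",  # must match the interrupted tool call
            "args": {
                "recipient": "partner@partnerstartup.com",  # hypothetical address
                "subject": "Re: Q1 engineering budget",
                "body": "Thanks for sending this over. I'll review and follow up soon.",
            },
        },
    }]}),
    config={"configurable": {"thread_id": "2"}},  # placeholder thread id
)
```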
So we resume, we see the tool node execute, and the email is successfully sent with the edited body.
For our last case, the reject case, let's again respond to this more consequential email. So, we see a draft saying that we've approved it. Instead, we actually want to feed a message back to the model asking it to revise the draft and say that we'd like more detail on the proposal before approval. Here's what our new response body looks like, with a decision type of reject and then that message.
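As a sketch, the reject decision carries that feedback message back to the model; the wording here is paraphrased from the demo.

```python
from langgraph.types import Command

# Resume with a reject decision: the tool call is not executed, and the
# message is fed back to the model so it can revise its draft.
agent.invoke(
    Command(resume={"decisions": [{
        "type": "reject",
        "message": "Don't approve yet. Ask for more detail on the proposal before we sign off.",
    }]}),
    config={"configurable": {"thread_id": "3"}},  # placeholder thread id
)
```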
So, it looks like the model has now called the send email tool again with a revised email body asking for a bit more detail on the proposal, as we asked. Again, remember we still have that human-in-the-loop node there, so we'll need to approve the final email before it's sent.
All right, our email's been successfully
sent.
Thanks for joining me for a brief overview of LangChain's new human-in-the-loop middleware, with the use case of human feedback on an email assistant.
Learn how to use LangChain's human-in-the-loop middleware to approve, edit, and reject tool calls before they're executed. Our example uses an email assistant agent that requires human feedback before sending sensitive emails. Middleware docs: https://docs.langchain.com/oss/python/langchain/middleware#human-in-the-loop Code: https://gist.github.com/sydney-runkle/628246dc4f851dda45f57b492c645ec0