Hey folks, it's Sydney from LangChain,
and I'm super excited to share with you
our next middleware demo: the to-do
list middleware. Did you know you're 42%
more likely to achieve a goal if you
write it down? It turns out agents
benefit from the same practice: agents equipped
with a to-do list often perform better
when given complex tasks. In fact, you
might have already seen this in action
with coding agents like Claude Code, which
draft a to-do list and continuously
update it throughout a conversation.
First, let's talk a little bit about the
anatomy of a to-do. A to-do item has
content, which is the task to be done, and
a status, which can be one of
pending, in progress, or completed.
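The structure described above can be sketched as a plain Python type. This is a minimal illustration whose field names simply mirror the description in the video, not necessarily LangChain's exact internal schema:

```python
from typing import Literal, TypedDict

# A to-do's status moves from "pending" to "in_progress" to "completed".
TodoStatus = Literal["pending", "in_progress", "completed"]

class Todo(TypedDict):
    content: str       # the task to be done
    status: TodoStatus

# Example: a freshly drafted to-do list for a multi-step task.
todos: list[Todo] = [
    {"content": "Create hello script", "status": "in_progress"},
    {"content": "Create goodbye script", "status": "pending"},
    {"content": "Make both scripts executable", "status": "pending"},
]
```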
Why are to-do lists helpful for agents?
Well, first of all, to-do lists help
agents break down complex tasks into
actionable steps. Those actionable steps
often correspond to tool calls.
To-do lists also increase
engagement with the end user. The user
gets insight into agent behavior and
next steps, and this provides
opportunities for potential interrupts
or human guidance and rerouting in the
agent life cycle.
It's also nice for an end user to see
progress visibility; it gives the
impression of lower latency. And
agents can adapt and change their
to-do lists as new real-time information
comes in.
Let's take a quick look at the code
we're going to use for this demo. We
are building a software development
assistant agent that has access to
two tools: a create-file tool and a run-command
tool. Additionally, we pass it
a middleware argument with the to-do
list middleware, which behind the scenes
gives it access to a third tool, the
write-todos tool. We'll
see that in action in Studio. When we
add this to-do list middleware, we also,
under the hood, influence the system
prompt that's passed to the model. So
not only is the model informed that it's a
software development assistant, it's
also told a little more about the
write-todos tool and how it can use it.
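Conceptually, the middleware does two things to the agent's configuration: it registers an extra write-todos tool alongside the agent's own tools, and it appends usage instructions to the system prompt. Here is a toy plain-Python sketch of that mechanism. This is only an illustration of the idea, not LangChain's actual implementation or API; for the real middleware, see the docs and gist linked below:

```python
# Toy illustration: what a to-do list middleware does to an agent's
# configuration (add a tool, extend the system prompt). All names here
# are hypothetical, for explanation only.

BASE_PROMPT = "You are a software development assistant."

TODO_INSTRUCTIONS = (
    "You have access to a write_todos tool. Use it to plan complex tasks "
    "as a list of to-dos and update each item's status as you work."
)

def write_todos(todos):
    """Toy 'write_todos' tool: record the agent's current plan."""
    allowed = {"pending", "in_progress", "completed"}
    for todo in todos:
        assert todo["status"] in allowed
    return todos

def apply_todo_middleware(tools, system_prompt):
    """Return the agent's tools and prompt after the middleware has run."""
    return tools + [write_todos], system_prompt + "\n\n" + TODO_INSTRUCTIONS

# The agent starts with its two domain tools (stubs here)...
def create_file(path, contents): ...
def run_command(cmd): ...

# ...and ends up with three tools and an extended system prompt.
tools, prompt = apply_todo_middleware([create_file, run_command], BASE_PROMPT)
```

The key design point the demo highlights is that the middleware changes both the tool set and the prompt together, so the model is never handed a tool it hasn't been told how to use.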
All right, so now we're in studio. We
can see our classic model and tool
calling loop that builds our fundamental
agent. And let's kickstart our software
development agent with a simple prompt
asking it to create hello and goodbye
scripts, make them executable, and
verify that they exist. So I'll kick it
off here.
Great. We can see that the agent has
completed, and the final message
indicates that the scripts have been
created. We can see the final call to
the write-todos tool resulted in a list
of all of these to-dos with status of
completed. Let's take a look at the trace
view for some more insight.
All right, so this view shows us all of
the iterations between model and tool
calls. We can see the model has
alternated between calling tools: the write-todos
tool, the create-file tool, and
also the run-command tool.
If we look at the final response from
the model, we can get additional
information. So we see those three tools
are available.
We can also see the system prompt, which
first has the software development
assistant information and then also has
the added information about how to use
the write-todos tool.
Finally, if we look at that final
response, we see the successful report
that the scripts were created and are
now executable.
This has been a brief overview of the
LangChain to-do list middleware. Thanks.
Learn about how to use LangChain's to-do list middleware to equip agents with task planning and tracking capabilities for complex multi-step tasks. Our example uses a software development agent. Middleware docs: https://docs.langchain.com/oss/python/langchain/middleware/built-in#to-do-list Code: https://gist.github.com/sydney-runkle/09e591651bd254e0384ebf114f31626a