Open source has been a huge part of LangChain ever since we got started. It obviously began as an open source package, and it's evolved a lot over the years. We now have TypeScript packages, and we now have both LangChain and LangGraph. So as we release 1.0 versions of these packages, it's a huge moment for us as a company, and I'm really excited to be here today talking with a few core team members about what these releases are and why they're important. My name is Harrison, co-founder and CEO of LangChain. I started off doing a lot of the open source work, but now a lot of that's done by the folks at this table. Do you want to give quick intros of yourselves?
>> Yeah, sounds great. My name is Sydney. I'm an open source engineer here at LangChain, and I work on LangChain and LangGraph Python.
>> My name is Hunter. I'm also an open source engineer here. I work on the TypeScript versions of LangChain and LangGraph.
>> And I'm Will, a founding engineer here. I work a lot on LangGraph Python and its internals.
>> Awesome. I'll maybe start with some brief context on how we thought about LangChain at the start, and then we can get into LangGraph and all the stuff coming down the pipe with 1.0. When LangChain launched nearly three years ago, there were really two core parts to it. One was a set of integrations. At the time, it was actually a little bit limited: OpenAI, Cohere, Hugging Face. I think those were the models. But there were components and integrations for all the core building blocks. Models, which we talked about, were one of them. There were also vector stores, document loaders, all these components. The other part was high-level interfaces that made it really easy to get started. This would be RAG in five lines of code, SQL in five lines of code. And that's where the industry was. Again, this was three years ago, a month before ChatGPT. There were a few people building this stuff, but not nearly the number there are today. So a lot of what we built, and a lot of what we focused on at the time, was making it as easy as possible to get started. That focus and priority has shifted over the years as we've seen the industry mature and people want to go from prototype to production. I think there's probably no better example of that than when we launched LangGraph, about a year and a half after we launched LangChain. It built off a lot of the learnings we had, and Will, you were a core part of that. Do you want to talk about why we launched LangGraph, what it is, and how we thought about it?
>> Yeah, sure thing. When we were building LangGraph initially, we kept two aspects in mind. One was that aspect of controllability you were touching on. With LangChain we made it really easy to get started, and there were lots of ways to use it, but as people transitioned into production they wanted a lot more ability to customize things, and we wanted LangGraph to make that as easy as writing regular software.
>> And maybe expanding on that: I mentioned RAG in five lines of code. That's obviously making a ton of assumptions about what's under the hood. There were hidden prompts. We now have this term "context engineering," but there was this hidden context engineering getting done. That has pros and cons: it made it really easy to get started, but as you wanted to customize and modify things, you needed lower-level controllability. Maybe it's worth touching on why this controllability is so important.
>> Yeah, I think there's really a gap between model capabilities and reliability whenever you scale to production, along a number of axes. One is just the number of users trying to interact with it in so many ways. You run into cases you hadn't anticipated, and so you have this iterative process of creating the right prompt and shaping the right guardrails and other code to get it to be really useful in those situations. The other axis I'd say is long-running agents. As agents get more useful, people try to do a lot more with them, but then you open up a lot of doors for them to go off the rails a little bit. So you want more ability to control them, and you want interfaces and interaction patterns that enable people to get a lot of useful work done.
>> Yeah. And the interfaces and interaction patterns, production use, and long-running work get at the second thing we focused on a lot in LangGraph, which is the runtime. Do you want to talk about that?
>> Yeah, totally. So we allowed the control, but within that there are all these utilities we think are generally useful. When thinking about the runtime, there are some core central principles to it. One is a durable execution environment, so that when agents are working for long periods of time and there's an error, we don't have to cancel the entire run. If there is an error, the run can recover from checkpoints, because we're caching the application state as it evolves over time. And as these agents work for long periods, users might be impatient and want to see what's happening and interact with it, so adding streaming as a native, first-class citizen within the framework was really important and useful for all these different types of apps. The runtime was really something we were able to anticipate through our interactions with the community, and it powers all these interesting new features and experiences we see people building.
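The checkpoint-and-resume idea Will describes can be illustrated with a small sketch. To be clear, this is not the actual LangGraph runtime: every name here (`run_with_checkpoints`, the in-memory `checkpoints` store, the step functions) is invented for illustration. The point is just that state saved after every step lets a failed run resume from the last good checkpoint instead of restarting from scratch.

```python
# Toy sketch of durable execution: checkpoint state after each step so a
# failed run can resume where it left off. All names here are illustrative,
# not LangGraph's actual API.
checkpoints = {}  # thread_id -> (next_step_index, state)

def run_with_checkpoints(thread_id, steps, state):
    """Run `steps` in order, persisting state after each one."""
    start, state = checkpoints.get(thread_id, (0, state))
    for i in range(start, len(steps)):
        state = steps[i](state)                   # may raise mid-run
        checkpoints[thread_id] = (i + 1, state)   # durable progress marker
    return state

flaky = {"calls": 0}
def unreliable_step(state):
    flaky["calls"] += 1
    if flaky["calls"] == 1:
        raise RuntimeError("transient failure")
    return state + ["expensive work done"]

steps = [lambda s: s + ["fetched data"], unreliable_step]

try:
    run_with_checkpoints("thread-1", steps, [])
except RuntimeError:
    pass  # first attempt dies mid-run

# Resuming skips the already-checkpointed first step and retries only step 2.
result = run_with_checkpoints("thread-1", steps, [])
print(result)  # ['fetched data', 'expensive work done']
```

Note that on resume the first step is not re-executed; only the failed step is retried, which is the property that matters when earlier steps are expensive or have side effects.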
>> Yeah, I remember that with streaming and human-in-the-loop, we didn't know about those needs when we started LangChain, because LangChain was very basic, and we had to retrofit them in. When we were doing LangGraph, we paused a little bit and asked: what are all these things? Human-in-the-loop is a great example. You can do it super easily when prototyping, because you just put an input() call in Python, but that obviously doesn't scale to production. So as we were building LangGraph, we were thinking: human-in-the-loop is important, it's easy in prototyping, and it's hard in production. How can we architect a runtime that enables it? I think that was a big part of the purpose of LangGraph, and a big part of why we also chose to rewrite LangChain on top of LangGraph. So maybe pivoting to that: one of the key parts of LangChain 1.0 is that it's built on top of LangGraph, but it also marries in the ease of getting started that LangChain had from the beginning, because one of the things we did hear about LangGraph was that you get all this controllability, but it's hard to get started. How do we think about combining those things in LangChain 1.0?
>> Yeah. With LangChain 1.0 we wanted it to be the easiest place to get started with generative AI, and specifically with building agents. We've seen a lot of successful use cases from users and customers built on that LangGraph runtime, with human-in-the-loop, persistence, and durable execution, and we wanted to bring those runtime features to LangChain. But also, as you mentioned, it's a little bit hard to get started with LangGraph if you aren't used to nodes, edges, and workflow primitives. With our create_agent abstraction in LangChain, it's as simple as a few lines of code to get started with that classic model tool-calling loop.
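The "model tool-calling loop" Sydney mentions is worth making concrete. Here is a minimal conceptual sketch of the pattern that create_agent packages up, with a stubbed-out model and a toy tool standing in for real components. Nothing here is LangChain's actual API; the message shapes and function names are invented for illustration.

```python
# A conceptual sketch of the classic agent loop: call the model; if it asks
# for a tool, run the tool and feed the result back; stop when the model
# answers directly. The stub model and tool are invented for illustration.
def calculator(expression):
    return str(eval(expression))  # toy tool; never eval untrusted input

def stub_model(messages):
    """Pretend LLM: asks for the calculator once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "tool_call": ("calculator", "2 + 3")}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"role": "assistant", "content": f"The answer is {result}."}

def run_agent(model, tools, user_input, max_turns=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):  # the loop at the heart of an agent
        reply = model(messages)
        messages.append(reply)
        if "tool_call" not in reply:
            return reply["content"]  # model answered directly; loop ends
        name, args = reply["tool_call"]
        messages.append({"role": "tool", "content": tools[name](args)})
    raise RuntimeError("agent did not finish within max_turns")

print(run_agent(stub_model, {"calculator": calculator}, "What is 2 + 3?"))
# The answer is 5.
```

The `max_turns` cap is the kind of guardrail a production runtime has to impose: without it, a confused model can loop forever.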
>> Okay, so you mentioned the create_agent abstraction. Expand on that. What is it?
>> Yeah. The central pillar of our new LangChain 1.0 release is this create_agent abstraction. It's new in a sense, but it's also actually quite battle-tested. Users of LangGraph are familiar with create_react_agent, a ReAct agent abstraction built on LangGraph, and before that came the chat agent executor from our OG LangChain.
>> Yeah, I think LangChain version 0.0.8 had that.
>> Yeah. So it's new in the sense that it's significantly improved, but it's also the same core pattern that we've seen used, tested, and valued.
>> Yeah. When we were doing LangChain early on, there were all these different chains and all these different agents, because there were all these different patterns. It was early. Over time we've seen that a lot of the long-tail stuff is just easier to do in LangGraph from the start, but there is this one core pattern, and I think most people have converged on the idea of an agent as an LLM running in a loop calling tools. So there's this core pattern we want to centralize around. At the same time, the reason not everything fits this core pattern is that there are modifications you want to make to the loop, and that has constantly been one of the criticisms of LangChain: that it gets in the way of this controllability. That's why we built LangGraph. So one of the things I'm really excited about, and would love to hear you talk about, is middleware. What is middleware, and how does it solve this problem?
>> Yeah, we're really excited about middleware. Middleware lets you, as a developer, add additional logic at any point in that core agent loop. That's what makes the core loop extensible to a variety of different applications. I'll give a couple of examples to help illustrate the value. Before your model call, you might want to summarize past conversation history. We talk a lot about long-running agents, which means tool-calling loops that get super long. Message history can get quite long, and to have an effective conversation with the LLM, you need to summarize it. That's one example of a pre-built middleware we offer. Another example is our human-in-the-loop middleware. If you have risky or expensive tool calls, you might want human approval of, or edits to, those tool calls before they're executed. That's a hook that goes after the model call. But we also really believe in this pattern of hooks around the agent loop, so we make it really easy to build your own middleware as well. If you're looking to customize a dynamic prompt or dynamic tools, you can do that through middleware too. We think this is what differentiates LangChain as an agent framework: the amount of customizability baked into middleware.
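The hooks-around-the-loop idea can be sketched in plain Python. Assume, purely for illustration (this is not LangChain's real middleware API; the class shape, hook names, and agent internals are all invented here), that a middleware is an object with optional before-model and after-model hooks that can rewrite the message list or veto a tool call:

```python
# Illustrative sketch of "hooks around the agent loop". The Middleware
# shape and hook names mirror the *idea* described above, not a real API.
class Middleware:
    def before_model(self, messages):
        return messages          # e.g. trim or summarize history
    def after_model(self, reply):
        return reply             # e.g. approve or block a tool call

class TrimHistory(Middleware):
    """Keep only the last few messages: a crude stand-in for summarization."""
    def __init__(self, keep=3):
        self.keep = keep
    def before_model(self, messages):
        return messages[-self.keep:]

class BlockRiskyTools(Middleware):
    """Stand-in for human-in-the-loop approval of dangerous tool calls."""
    def after_model(self, reply):
        if reply.get("tool_call") == "delete_database":
            return {"content": "tool call blocked pending human approval"}
        return reply

def call_model_with_middleware(model, messages, middlewares):
    for mw in middlewares:                 # pre-model hooks, in order
        messages = mw.before_model(messages)
    reply = model(messages)
    for mw in reversed(middlewares):       # post-model hooks, unwinding
        reply = mw.after_model(reply)
    return reply

# A fake model that asks for a risky tool whenever history is short.
model = lambda msgs: ({"tool_call": "delete_database"} if len(msgs) <= 3
                      else {"content": "ok"})
out = call_model_with_middleware(
    model, ["m1", "m2", "m3", "m4", "m5"],
    [TrimHistory(keep=3), BlockRiskyTools()],
)
print(out)  # {'content': 'tool call blocked pending human approval'}
```

Because each hook only sees and returns plain messages or replies, middlewares compose: you can stack summarization, approval, and dynamic-prompt logic on the same loop without them knowing about each other.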
>> Yeah. Hunter, Will, do you guys have any favorite middleware?
>> Yeah, one of the ones I'm super excited about is the dynamic model middleware. The basic one-liner for it is: based on context, I can dynamically pick which model I'm using when I'm about to make the next call in the loop. I think that's super interesting, because I don't think it's too controversial to say there isn't a champion model anymore. If you roll back the tape a year, it seemed like every other month there was a new model that everybody was using and parading around.
>> Yeah. It constantly changes. They're good at different things.
>> Yeah. I don't think that's so much the case anymore. I don't think it's too crazy to say you'd go to Anthropic for coding tasks, OpenAI for reasoning, or Google for multimodal. In a weird way, we've seen these specialties come out for each of these models. And as we talk to builders, that optionality between models was one of the original charters for LangChain. Being able to enable that is how you stay on the bleeding edge of agent building.
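The routing decision Hunter describes is simple to sketch. The heuristics and model names below are placeholders invented for this example, not a real provider mapping or any LangChain API:

```python
# Toy sketch of dynamic model selection: route each turn to a different
# model based on the task at hand. Rules and names are placeholders.
def pick_model(task):
    """Choose a model name from simple task heuristics."""
    if "code" in task:
        return "coding-model"
    if "image" in task or "audio" in task:
        return "multimodal-model"
    return "reasoning-model"

print(pick_model("write code to parse a CSV"))   # coding-model
print(pick_model("describe this image"))         # multimodal-model
print(pick_model("plan a multi-step proof"))     # reasoning-model
```

In practice the selector would run inside the loop on richer context (conversation state, cost budget, latency targets) rather than on keywords, but the shape is the same: a function from context to model, invoked fresh on every turn.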
>> And now the dynamic model middleware lets you switch that based on user context, which is pretty cool. Let's stay on models for a little bit. As you mentioned, those were one of the core parts of LangChain from the start, and we're also making some big enhancements to them in 1.0. Do you want to talk about content blocks, what they are, and why people should care?
>> What content blocks are, effectively, is our standard representation for the content of a message.
>> And maybe just to interject here a little bit: messages didn't exist when LangChain started. LangChain used to be string in, string out, and then it evolved into a list of messages in, message out. Those messages had this content field, which used to be a string, and now you're saying it's something more complicated, and we're standardizing it.
>> Yeah. At the beginning of the year, reasoning models started becoming the big new thing, and multimodal made a lot of advancements. The way the model providers expressed that was by adding what they call content blocks inside of a message. But as those capabilities have expanded, every provider has its own opinion on what those content blocks should look like.
>> Yeah. And it's not just different formats: they also have different server-side tools, so they can't be the same because they have different things, which makes it really annoying.
>> Yeah, exactly. When we're talking with builders, a core complaint we hear is: I'm building an application around this and I don't know what the shape of the messages is going to be, because I might swap a model out and everything changes. That's a really bad state to be in.
>> I remember running into this right as you guys were working on it. I was switching an agent I had from OpenAI to Anthropic, and the format of the messages that came back as they streamed with tool calls was just different, and it broke all my streaming code. That was not fun at all.
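The pain described here, each provider shipping its own message shape, is the kind of thing a normalization layer smooths over. As a purely illustrative sketch (this is not LangChain's actual content-block schema; both input shapes and the output schema are hypothetical), imagine converting two pretend provider formats into one standard list of typed blocks:

```python
# Illustrative sketch of normalizing provider-specific message content into
# one standard list of typed blocks. Input shapes and the output schema are
# hypothetical, invented to show the idea, not real provider formats.
def normalize(provider, raw):
    blocks = []
    if provider == "provider_a":           # hypothetical: content is a string
        blocks.append({"type": "text", "text": raw})
    elif provider == "provider_b":         # hypothetical: list of tagged parts
        for part in raw:
            if part["kind"] == "thinking":
                blocks.append({"type": "reasoning", "text": part["body"]})
            else:
                blocks.append({"type": "text", "text": part["body"]})
    return blocks

# Downstream code sees one shape regardless of which model produced the reply.
a = normalize("provider_a", "hello")
b = normalize("provider_b", [{"kind": "thinking", "body": "hmm"},
                             {"kind": "answer", "body": "hello"}])
print(a)  # [{'type': 'text', 'text': 'hello'}]
print(b)  # [{'type': 'reasoning', 'text': 'hmm'}, {'type': 'text', 'text': 'hello'}]
```

The payoff is exactly the streaming anecdote above: code written against the standard block types keeps working when you swap the model underneath.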
>> Maybe talking about agents a little more: when we launched LangGraph, it pretty quickly became the way we recommended folks build agents, because of the controllability and the runtime that's built in. As we mentioned, there are some downsides: it's harder to get started with. And now we've got LangChain. It's built on top of LangGraph, so it's got that production-ready runtime, it's easier to get started with, and it's got this agent class. When should people use LangChain versus LangGraph?
>> I would recommend people get started with LangChain after this new rewrite. We've made it a lot easier to get started, and we've raised the ceiling on what you can do with the create_agent primitive, so the floor has been lowered and the ceiling has been raised. You start to hit the ceiling when you want extremely custom workflows. What I mean by that is: if you want to design workflows that have deterministic components and agentic components, then LangGraph might be the right place for you. Additionally, one nice thing about LangGraph is that things are super composable. If you want deterministic steps and agentic steps, you can use create_agent from LangChain to build really functional, useful agents and then plug those into your existing workflows.
>> Yeah, you said a word there that I like: workflows. The LangChain agent is very much an agent. It's autonomous. But there are places where you want determinism, steps, and workflows. You can get a little bit of that with middleware now, but if you explicitly want it, LangGraph is great for that, and LangGraph is great for anything on the spectrum. As you mentioned, you can take these agents and use them as a step in a workflow. And if you've got a workflow made up of agents, is that an agent? Is that a workflow? It's somewhere in the middle, and I think that's actually where LangGraph really shines. Any other things you'd recommend folks look into when deciding where to get started?
>> Yeah. For the slightly more experienced builders: a ton of frameworks have popped up centered around this idea of agent building, and the industry at large has settled on the idea that an agent is a pretty basic tool-calling loop. So when you use LangChain and want to get more familiar with the more advanced context engineering or agent-building topics, the thing that's most conceptually transferable is create_agent. But as we mentioned, there are cases where maybe you want more determinism, or you want your agent to do something more complex. That's when you step into LangGraph and all it has to offer.
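The composability point above, agents as steps inside a deterministic workflow, can be sketched as a tiny pipeline. The "agent" here is a stub function standing in for an agent built with create_agent, and every name is invented for illustration; a real version would plug the agent into a LangGraph workflow rather than a plain Python loop.

```python
# Toy sketch of a workflow mixing deterministic steps with an "agent" step.
# The stub agent and step names are invented for this example.
def load_ticket(state):
    state["ticket"] = "printer on fire"      # deterministic: fetch input
    return state

def stub_agent(state):
    # Stand-in for an LLM agent deciding how to handle the ticket.
    state["plan"] = f"escalate: {state['ticket']}"
    return state

def notify(state):
    state["notified"] = True                 # deterministic: side effect
    return state

def run_workflow(steps, state=None):
    state = state or {}
    for step in steps:                       # fixed, deterministic ordering
        state = step(state)
    return state

result = run_workflow([load_ticket, stub_agent, notify])
print(result["plan"])      # escalate: printer on fire
print(result["notified"])  # True
```

The outer ordering is fully deterministic; only the middle step is agentic. That mix, a workflow that contains agents, is the "somewhere in the middle" case discussed above.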
Cool. So we're launching LangGraph 1.0 and LangChain 1.0. As we think about these releases, what we can do with them, and what comes next, what are you guys most excited about?
>> I'm really excited to see what the community comes up with in terms of custom middleware. We've worked pretty hard on a couple of pre-built ones. I mentioned summarization; we have support for dynamic models, tools, and prompts, all the classics, plus human-in-the-loop. But middleware is incredibly extensible, and it's composable too: you can have a simple agent loop with lots of different middleware. So I'm very excited to see what community members come up with and what kinds of steps they think are really important for agentic flows. I'm sure that will have a ripple effect on the already really large LangChain ecosystem.
>> Yeah, building off of that, I'm excited for both ends of the spectrum. I'm excited for the serious, bare-metal-focused developers trying to push the envelope on the longest-running agents doing as much useful work as possible; all the stuff we're providing, I think, makes it easier to go further with a more reliable runtime. But on the other end of the spectrum, something I've always loved about LangChain is that we're a lot of people's first introduction to AI, and even to programming in Python in general. The agent abstraction is accessible to someone like me and to other people while also being very powerful, and I want to see all the ways people can compose these high-level building blocks into really useful work without having to think too hard about all the abstractions within it.
>> Yeah, building on that: the term context engineering is something that's become familiar only in the last few months. With create_agent and the middleware API, I think this is the first real abstraction I've seen where the rubber meets the road, so to speak: you can actually apply context engineering. I think it's the first real practical way to do context engineering reliably.
>> Yeah. I think the idea of this core loop running an agent is clearly common, but it forces context engineering, and I think you need outlets and escape hatches, so I agree that middleware is a great way to do that. One thing we were talking about earlier, Hunter, that I think is super interesting to look into is all the UI/UXs around these agents. We're doing a little bit on front-end hooks for what that looks like. Streaming is obviously a key component of LangGraph, and it now powers LangChain as well, so I think there's just a ton to be discovered there. We're also investing a lot more on the JavaScript side of things, which lets people create full-stack agents all in JavaScript, front end and back end, figure out the interaction patterns, and really explore that. So yeah, lots of really exciting things in the 1.0 releases and lots of good things to try out. Thank you all for walking folks through them. I highly encourage everyone to pip install or npm install the latest versions and give us any feedback as it comes up. This is a big step for LangChain, and we're really excited about it.
LangChain CEO Harrison Chase sits down with open source engineers Sydney, Hunter, and Will for an in-depth technical discussion on the major 1.0 releases of LangChain and LangGraph. The team explores the evolution to production-ready agent frameworks, including the new create_agent abstraction, middleware system, and why controllability matters for building reliable AI applications.

00:00 - Introductions and team overview
01:15 - The origins of LangChain
03:00 - Why we built LangGraph
05:30 - What's in an agent runtime
08:45 - Rewriting LangChain on top of LangGraph for 1.0
10:20 - The create_agent abstraction in LangChain 1.0
12:00 - Middleware in LangChain 1.0
14:30 - Pre-built middleware for summarization, human-in-the-loop
16:15 - The end of the "champion model" era
18:00 - Content blocks for standardizing model outputs
20:30 - When to use LangChain vs LangGraph (agents vs workflows)
23:00 - Context engineering and composability with middleware
25:15 - What's next for LangChain
27:30 - Closing thoughts and getting started with 1.0

Check out the docs: https://bit.ly/42MylJi
Learn more about LangChain & LangGraph 1.0 in the blog post: https://bit.ly/4qkNhs7