Hi everyone, my name is Marlene, and
welcome back to another episode of
Python on Azure with Marlene and Gwen.
In today's episode, we have Sydney
joining us. Sydney is from LangChain
and one of the lead maintainers for
LangGraph. Super excited to have you
here with us today. Sydney, would you
tell us a little bit about yourself?
We would love to hear.
>> Yeah, definitely. I'm super excited to
be joining you. My name is Sydney. I
work on open source at LangChain,
specifically on our Python open source
tools, so, as Marlene mentioned, both
LangChain and LangGraph. LangChain
itself is a company built on the
foundation of those open source tools,
and we also offer observability and
deployment options for AI applications.
But I'm very excited to be chatting
about those open source foundations and
what you can do with them.
>> Awesome. Yeah, we do a bunch of work
with LangChain. Microsoft has a bunch
of integrations in LangChain, and we
love everything that you're working on,
and I know we have some exciting things
to talk about. But before we get
started, we would love to just hear
from you: how did you get into open
source? When I talked to you in the
past, you mentioned you've worked on
different open source projects. Could
you tell us about your journey with
open source?
>> Yeah. So my journey with open source
actually started really early. I think
basically every Python developer uses
open source tooling, right? If you've
ever used pandas or requests or
anything like that, all of these tools
are public and maintained by community
members and dedicated developers. In
high school I got started writing
Python code and started using
Streamlit, which is a cool app-building
and visualization tool, and a great way
to get started really quickly with
building UIs in Python. Then through my
first couple of internships I got
started with other open source tools.
Notably, Pydantic was one that I found
a lot of interest in. Pydantic does
data validation and type enforcement
for Python. Towards the end of my time
in college, I reached out to Samuel
Colvin, the author of Pydantic, and
asked if Pydantic the company was
interested in having an intern. And
then I got to start working for them
while I was in college. So, that was
really cool.
>> I just want to pause there and say
that's amazing. I didn't know that you
reached out to them.
>> Yeah. Yeah.
>> Oh my gosh, that's so cool.
>> Can you talk just a little more about
how that went?
>> Yeah. So, I have a blog post that I
wrote about the value of cold emails,
about being ambitious with emails. My
blog post actually mentions that I
found my favorite tomato soup recipe by
sending a bunch of cold emails. But
anyway, in a career sense: yeah, I
reached out to Samuel, and he was very
kind and eager to give me a shot at
interning for Pydantic. So during my
last year of school, I basically did
school stuff for half of my time, and
the other half I spent maintaining
Pydantic.
>> That's impressive. I'm sure a lot of
people listening to this... well, we
have a lot of people who ask questions
like, "It's a tough market, what can I
do to land a role?" And you kind of
just made it happen for yourself.
>> Exactly.
>> You just went, "All right, I'm going to
go get this. Let's do it." I know we
had this question planned for the end,
but I feel like it's relevant to ask
here: if someone wanted to take that
approach, what do you think they would
need to have? Because it's not like I
can just start programming Python
tomorrow and be like, "All right, I'm
going to go ask someone for a job."
What do you feel people need in order
to be able to make a move like you did?
Yeah, I think one of the things that I
cited in that initial email was some
open source contributions that I had
made. We hear a lot about building up
your profile of personal projects,
right? That's a great way to get
started and work on things you're
passionate about. But getting started
super early with "how can I help this
open source project?" is a great way to
immerse yourself in more professional
code, CI systems, and the kinds of
things you're more likely to actually
see later in a job as a developer. So I
think having those really helped. And
luckily, a lot of open source
repositories are really good about
welcoming new contributors and giving
them an on-ramp: okay, so you want to
get started, and maybe you're even new
to coding, but here's how you can help
us improve our docs, or here's how you
can help us fix this small bug.
>> That's awesome. I know we'll have a
bunch more related questions that we
can ask throughout, but I know you
brought some demos you want to show us.
I'm very excited. Marlene has way more
experience with LangChain and LangGraph
than I do, so this is going to be a
very good learning experience for me.
Do you want to share your screen? I can
bring it up here, and then we can see
what you've got.
>> Yeah, definitely.
>> Marlene, how long ago did you start
doing LangChain stuff?
>> Really early, actually. Before I was at
Microsoft, I want to say about three
years ago, I started experimenting with
it. I was working at a startup called
Voltron Data, and I thought LangChain
was really interesting even back then.
It was one of the first frameworks in
Python, I think the first, that
actually made it easy to use LLMs with
data processing libraries like pandas.
So I want to say this was about three
years ago, but LangGraph wasn't around
when I first started, so I'm interested
to see what's new and what's coming up.
So yeah, what do we have on the screen?
>> Yeah, very excited to talk more about
this. I'll give a little bit of
context, since we're throwing around
the terms LangChain and LangGraph. For
folks that are a little less familiar:
LangChain started almost three years
ago now. It was released about a month
before ChatGPT came out, and it was a
way to make it easy for Python
developers to interact with LLMs. What
that looks like has evolved
significantly over the past three
years. Even on three-month increments,
we're seeing the ways that we interact
with LLMs change. But one of the core
pieces that we've seen stay pretty
consistent is this agentic pattern,
what we call the agentic loop, where
you give a model some tools, it calls
tools in a loop, and then at some point
it decides it's done. LangChain
initially set the foundations by adding
tons of integrations: Azure
integrations, OpenAI integrations,
Anthropic integrations, and even things
like embeddings and vector databases.
Everything under the sun that had to do
with interacting with LLMs, LangChain
tried to make easier for developers.
But we started to see that folks wanted
more control over interactions with
LLMs, and that's what LangGraph was
born out of: if you want to build an
application that's going to go into
production, you probably need a little
bit more control over each step. So
what we're looking at now is that
classic tool-calling loop. You have a
graph structure here, which is going to
be really familiar for developers; you
could even call it a workflow. We see a
start node, then this agent node (we'll
soon rename it the model node) that can
call tools, and that goes in a loop,
and when it's done calling tools, we
jump to the end node. Specifically,
this agent demo that I have up is an
email assistant agent, so I can show a
little demo of what it might look like
to send an email with this helper.
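The agentic loop Sydney describes (a model picks a tool, the tool runs, the result feeds back in, and the loop continues until the model decides it's done) can be sketched in plain Python. This is a toy illustration of the pattern, not the LangChain or LangGraph API; `fake_model` and `send_email` are invented stand-ins.

```python
# Toy sketch of the agentic loop: a "model" requests tools in a loop
# until it decides it is done. Not the LangChain API; all names here
# are stand-ins for illustration.

def fake_model(messages, tools):
    """Pretend LLM: requests the send_email tool once, then finishes."""
    already_sent = any(m["role"] == "tool" for m in messages)
    if not already_sent:
        return {"tool": "send_email",
                "args": {"to": "sydney@langchain.dev",
                         "content": "Coffee on Friday?"}}
    return {"final": "Great, the email has been sent."}

def send_email(to, content):
    return f"Email to {to} sent: {content!r}"

TOOLS = {"send_email": send_email}

def run_agent(user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_model(messages, TOOLS)
        if "final" in decision:          # model decided it's done -> end node
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])  # tool node
        messages.append({"role": "tool", "content": result})  # loop back

print(run_agent("Send an email to Sydney about coffee on Friday."))
```

The while loop is the "agent" box in the graph Sydney shows: model request, conditional edge to tools, and a jump to the end node once no more tool calls are requested.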
>> Let's do it.
>> Quick question.
>> Yeah.
>> You're inside of LangSmith, and for
those who don't know what LangSmith is,
do you want to give a quick one-liner?
>> Yeah. So LangSmith is our
observability platform for agentic
applications, or anything built with
LLMs. It helps with tracing your LLM
calls and workflows, and also with
doing evaluations to help your models
perform better; you can score how
models are doing. You can also do
prompt engineering. You can see all of
that in the side panel here. But the
tab that we're in is called Studio,
which is really great for demos and for
stepping through and debugging things.
It helps you visualize your graph and
step through and see what's going on.
>> Awesome.
>> So this is Sydney's basic email agent.
I'm going to add a message here that
says, "Send an email to Sydney and ask
if we can catch up over a coffee on
Friday." I'm quite the coffee addict.
So I'll hit submit.
>> We can see that we send this human
message, and the agent makes a tool
call to the send email tool. We can see
that it intuited my request pretty
well, right? The "to" email is
sydney@langchain.dev, and the content
of the email is asking if I want to get
some coffee. Then the tool successfully
runs, and the agent comes back with a
response that says, "Great, we sent an
email to Sydney."
>> Amazing. This is super cool to be able
to see it in this interface as well. So
is this Python code running in the
background to build this out? How do
you get from the background Python code
to what you see on the screen here, or
do you define it here?
>> Yeah, great question. I will switch
over to my IDE.
>> So, it's really cool that all I'm
powering this application with is these
nine lines of code. I will say it's a
dummy email tool; we're not actually
calling the Gmail API right now. But
basically in two or three lines of code
we're calling the create agent
function, which is in LangChain but
uses LangGraph under the hood, and
giving our agent some tools and a
model. I could also give it a prompt
that says something like, "Hey, you're
Sydney's personal email assistant. Make
sure to be kind and not too verbose,
but genuine," and give it more personal
direction.
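The "two or three lines of code" shape Sydney describes can be mimicked in a self-contained sketch: a factory that bundles a model, tools, and a system prompt into a runnable agent. This is a stand-in for the idea behind LangChain's create agent function, with invented names and a pretend model; it is not the real implementation or its exact signature.

```python
# Stand-in sketch of the create-agent shape: bundle a model, tools,
# and a system prompt into a runnable agent. Hypothetical names; not
# the real LangChain implementation.

def create_agent(model, tools, system_prompt=""):
    registry = {t.__name__: t for t in tools}
    def invoke(user_message):
        messages = [{"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_message}]
        while True:
            decision = model(messages, registry)
            if "final" in decision:
                return decision["final"]
            out = registry[decision["tool"]](**decision["args"])
            messages.append({"role": "tool", "content": out})
    return invoke

def send_email(to: str, content: str) -> str:
    """Dummy email tool; a real agent would call an email API here."""
    return f"sent to {to}"

def model(messages, tools):   # pretend LLM that finishes immediately
    return {"final": "Email drafted and sent."}

agent = create_agent(
    model,
    tools=[send_email],
    system_prompt="You are Sydney's kind, concise email assistant.",
)
print(agent("Ask Sydney to get coffee on Friday."))
```

The point of the pattern is that the caller only supplies the three ingredients (model, tools, prompt); the loop itself is hidden behind the factory.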
>> Oh, this is awesome. This is one thing
that I really love about LangChain: it
makes the whole process of building
agents really easy, and you can see
that even adding tools is super
straightforward. I didn't know that we
had the tool decorator that you could
add on. Well, maybe I've seen it, but I
haven't been using it. Is this
something people can use now, or has
this always been there and I just
haven't been using it?
>> Yeah, the tool decorator has been there
for a while, but it's definitely
something I would recommend folks use.
It's nice because you can add custom
configuration to your tools. We can see
here that you can add a description,
and you can choose whether you want
that tool to return directly, that is,
whether you want to jump to the end of
the agent loop when it's called. And
then, do you want to parse the
docstring to get information about the
tool arguments, and things like that?
One of the very common failure modes we
see is that folks write tools that do
great things, but there's not good
enough documentation attached for the
model to infer how to call them
appropriately. So we're trying to make
that easier with the decorator here.
>> That's a good tip. So everyone, keep
that in mind: make sure it's not only
good code and functionality, but that
it's also documented.
>> Yes, 100%. And I think we'll touch on
this a little bit more later, but LLMs
are pretty good at helping improve
documentation, which is cool. So it's
sort of a self-fulfilling cycle.
>> This is also something I wanted to ask
you. You help maintain LangGraph, and
LangGraph is open source; LangChain in
general is open source, which is
amazing. But how often have you been
using LLMs in your process for
maintaining the code? Do you use LLMs a
lot? What does that look like for you?
>> Yeah, great question. I think this is
another one of those things that is
changing really rapidly for developers.
I personally use LLMs a moderate amount
compared to other developers around me.
I think they can be really helpful for
discussing API ideas, almost like a
companion to bounce ideas off of and
talk about the pros and cons of
different design approaches. I think
they can also be great for autocomplete
and test assistance. So, for example,
I've written a couple of robust test
cases on my own, and then I'd like to
extend those, or I'm using it for more
strategic find-and-replace: I want to
make sure I'm also updating docstrings
and things like that. Where I'm not
using them quite as much as other
developers around me yet is for full
refactors or fixes or features. Not to
say that they can't be useful, but if
you're adding something that's really
significant to a codebase, it's good to
still have pretty thorough review
there. So it's definitely a balancing
act, and the coding agents are getting
a lot better very quickly, but I'm
still trying to toe the line of when we
really need human review and when that
is valuable. What about you two? What
do you find yourselves doing?
>> I do a mix of things. I still like
using a lot of the code completion as
you're typing, hitting tab; I find a
good mix to leverage there. And then
I'm definitely going between modes,
because in GitHub Copilot, shameless
plug, we have Ask mode and we have
Agent mode. I like starting a lot in
Ask, really understanding what I'm
trying to build and clarifying all the
things I'm trying to accomplish, and
then once I have a solid idea, I'll
type up a prompt and send it off to
Agent mode. And then I'll try to do it
in stages: let's say I want to build
four different functionalities. I'll
get the first one done, then I'll
pause, look at it, make sure it's
headed in the right direction, and then
keep going. The majority of my work is
that way. I like to think of them as
different types of tools: the code
completion is for fine details, and for
bigger things, I'm sure there's an
analogy I could come up with if I were
a creative person. I just think about
what specifically I'm trying to
accomplish and figure out which tool is
best for the use case. But I'm a big
fan of all of them, obviously; I don't
think I'd be at the shop if I wasn't a
big fan of these tools. But yeah,
Marlene, how about you?
>> Ask mode is really great, especially
when you're onboarding or learning a
new codebase.
>> Yeah, with the context window lengths
now, they do really well at helping
explain things.
>> Someone told me the other day that as
part of their onboarding docs they have
a prompt workbook, I think that's what
they're calling it. It's like: on this
file, ask these questions; on that
file, ask those questions. It allows
the new person on the team to, instead
of having to read all of it, go and
experience the docs and inform
themselves in ways that work for them.
I think that's pretty neat, and there's
a lot of stuff coming out like that.
>> Yeah, I agree. I think Ask mode is
super helpful for onboarding for sure.
For me, it's definitely a mix of both
of what you two shared. For creating
samples, I'm 100% using Agent mode, and
like Gwen was saying, being a bit more
modular with it: building out different
parts, so maybe starting with a certain
part of the code, asking Copilot to
generate some code there, and then
building it up as I understand or edit
it myself. But that would be mainly for
samples. I do feel hesitant to use it
to fully refactor big codebases. Right
now I'm helping maintain the LangChain
Azure integrations, and I work with
other people who do it too, but that's
something I'm a bit more hesitant to do
a full refactor on, because if there's
a hallucination or something, it would
be a pretty big deal. But I do use it
for the docs and things like creating
examples or notebooks, and I do want to
use it more for documentation, maybe
autogenerating docs. So I'm kind of in
the middle there, I would say.
>> Yeah.
>> Yeah. I think one really interesting
thing that we're all getting at here,
especially with the modular development
in Agent mode, is that human-in-the-loop
feedback is really crucial.
>> Yeah, it can make or break it. You have
to know what you're doing. All right,
we might be going on a tangent here,
but that's my issue with vibe coding: I
don't see how you could end up with
something that is actually good if
you're not heavily involved in the
process. We chatted with Armin Ronacher
about a month ago, and he said, yeah,
you can pretty much just go through
everything, tab, tab, tab, and you can
choose to sort of turn your brain off
and go into autopilot mode. But if you
go and see the logs, look at which
tools are getting executed, and
understand them, you can learn a lot
from that process. There might be
things you don't know that you pick up
from there, and I have found that
pretty useful. Maybe I'm typing less,
but I'm definitely looking at the
screen saying, "Okay, wait, no,
actually that's not the direction I
wanted. Stop. Let's change it and keep
going."
>> Yeah, definitely. I think one of the
most requested features we've seen from
developers is: how can I build human in
the loop into my applications very
seamlessly, and make it a first-class
citizen? I can actually show the second
part of our demo here, which is...
>> Let's do it. A perfect segue.
>> If I have an email agent, I don't think
I would want it to be sending emails
without my check-mark approval.
>> Yeah, have it draft things, right?
>> So, we saw that our basic agent has
just a couple of lines of code, and
this middleware agent is the same here.
This is using our new human-in-the-loop
middleware, which is very exciting.
I'll talk a little bit about what
middleware is in a moment, but for this
use case, we see the exact same code
signature, except we've added this
middleware argument. It has the
human-in-the-loop middleware, and we're
saying we want to interrupt on the send
email tool. What that's basically
saying is: before we send an email,
let's get an approval first. I'll pivot
back to Studio in a second, but I'll
talk a little more about middleware
generally. We've seen throughout the
three years of LangChain's success that
the core model-tool-prompt loop is
really helpful. But we're also starting
to see folks wanting to build more
complex applications. You might want a
really long-running agent, and you
might be at risk of context overflow
because the conversation is so long, so
you need summarization. Or you want an
agent performing tasks that are a
little bit expensive or risky, so you
want to build in human-in-the-loop
patterns. The first approach LangChain
took to these things was: let's build
out a bunch of pre-built options for
all of those, like here's a RAG agent
and here's a SQL agent. But again, the
main push for LangGraph was that we
were seeing a drive for customization.
So middleware is our way of approaching
that: here's your model and tool loop,
we know that that is going to be the
foundation, and then middleware allows
you to inject logic anywhere in that
loop that you'd like.
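The interrupt-before-tool pattern can be sketched as a hook that pauses before named tools and asks a human for approval, with the option to edit the tool call before it runs. The class and wiring below are invented for illustration; the real LangChain middleware API will differ.

```python
# Toy human-in-the-loop middleware: pause before selected tools and
# ask an approver before executing. Invented names for illustration;
# not the real LangChain middleware API.

class HumanInTheLoop:
    def __init__(self, interrupt_on, approver):
        self.interrupt_on = set(interrupt_on)  # e.g. {"send_email"}
        # approver returns ("accept" | "edit" | "reject", possibly-edited args)
        self.approver = approver

    def before_tool(self, name, args):
        if name not in self.interrupt_on:
            return args                        # no approval needed
        verdict, new_args = self.approver(name, args)
        if verdict == "reject":
            raise PermissionError(f"{name} rejected by human")
        return new_args if verdict == "edit" else args

def send_email(to, content):
    return f"sent: {content}"

def approver(name, args):
    # Human edits the draft to suggest 8 a.m. before approving.
    edited = dict(args, content=args["content"] + " How about 8 a.m.?")
    return "edit", edited

hitl = HumanInTheLoop(interrupt_on=["send_email"], approver=approver)
args = hitl.before_tool("send_email", {"to": "sydney@langchain.dev",
                                       "content": "Coffee Friday?"})
print(send_email(**args))  # sent: Coffee Friday? How about 8 a.m.?
```

This mirrors the demo later in the conversation: tools not listed in `interrupt_on` (say, a weather check) run straight through, while risky ones wait for accept, edit, or reject.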
>> That's super cool. You showed the
human-in-the-loop middleware with the
interrupt option. What other middleware
options do we have?
>> Yeah, definitely. So, we're going to
release v1 of LangChain in a couple of
weeks, and we've built it on top of
LangGraph, because we've seen that
durable execution model under the hood
really have successful use cases for
folks. We're going to release it with a
couple of pre-built middlewares. One of
them is summarization. A lot of the
others are catered to what we've seen
as this emerging trend of context
engineering: you want to be able to
control exactly what's going into your
LLM so that it can perform really well.
What that looks like is, for example, a
dynamic model middleware: maybe based
on the conversation length or your
user's subscription tier, you could
change the model or the provider. Or a
dynamic tools middleware: maybe you
want to give your model a couple of
tools at first, let it decide what path
it wants to go down, and then give it
more relevant tools. So I'm very
excited about the pre-built middlewares
that we're releasing, but maybe even
more exciting is that I can't wait to
see what community middlewares people
build, because it's composable and
pluggable. Very exciting in the vein of
open source things.
But yeah, I can do a little demo here.
So I'm adding a human message again,
and I'll note we have the model request
node, then you can see we check before
we actually execute those tool calls,
and then eventually jump to the end.
So I'm asking again if this person
wants to get coffee. We see a similar
request: the model makes a tool call to
the send email tool and asks if Sydney
wants to get coffee. But what I've
realized here is that I'm actually only
free at 8 a.m. on Friday, so I want to
edit the message before we send it. We
see an interrupt here that's been
raised by the middleware, and it says
tool execution requires approval, with
the content and the "to" email. So I'm
going to paste in a response saying,
please suggest that we meet at 8 a.m.,
so that the email is a little more
accurate. I'll hit resume. We can see
it goes back to a model request, and
then the new tool call is edited:
"Would you be available to catch up
over coffee on Friday at 8?" Awesome.
>> Very cool.
>> And then I think that email looks
great, so I'm going to accept it this
time. We type accept, hit resume, and
then we see the tool is called, and the
model responds that the email inviting
Sydney to catch up has been
successfully sent. So it's a very cool
use case, and I think we can
extrapolate this pattern to those
coding agents that we're all so
obsessed with: "Here's my initial plan
for refactoring your codebase, what do
you think?" And you say, "Actually, let
me guide you in a slightly different
direction before you go refactor
everything." And very nicely, you can
plug and chug: maybe I want to
interrupt on some tools but not others.
You can check the weather without my
approval, but don't send an email to my
boss without my approval. Right?
>> 100%. This is so cool. I've not used
this Studio functionality before, but
it really helps. I find people get
really confused about what's actually a
tool, and when things get passed from
one thing to another, and this makes it
really easy to understand. I'm going to
point more people to leverage something
like this; it's very easy to see what's
actually happening.
>> Yeah. Especially for long-running
conversations where you're like, "Wait,
when did it call that tool? Why did it
call that tool? How did it react to my
feedback?" It's super great to be able
to go back and see the traces. And I
guess I'll hype up LangGraph a little
bit here.
>> Please do.
>> Our agent builder is built on LangGraph
under the hood.
>> Oh, very nice.
>> LangGraph-ception.
>> Yeah, exactly. So again, the pattern
was: LangChain came with all these
pre-built things, and then we realized
folks really need more control, so we
released LangGraph, which is a tool
that allows you to build these graphs
from scratch: I'm going to add these
nodes and these edges and these
conditional edges. That's great, but it
is a little more complex; it's a higher
threshold to get started, a higher
activation energy, if you will. So now
our third generation is: the
under-the-hood LangGraph stuff is
great, but let's make it really easy
for people to get started with those
two lines of create agent code, while
still reaping the benefits of the
LangGraph runtime. I'll list off a
couple of those benefits. One is the
human-in-the-loop interrupt
functionality. Another is what we like
to call durable execution. Let's say a
model request failed because my model
provider is having API issues. These
steps are checkpointed, so I could
actually go and retry or replay from
any given step, which is really nice.
It's also nice if I didn't like the
direction the model was going in; I can
>> step back. Right.
>> Yeah, 100%. Finally, and this is a huge
thing for agent frameworks, streaming
is very much a first-class citizen
here, because models can be really
slow, and it's not very engaging for
users if they don't get real-time
feedback about the decisions the model
is making. So LangGraph is great at
streaming events, like "okay, we jumped
from the model request to the human
feedback node to the tool node," and
LLM responses can be streamed as well.
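Durable execution, as Sydney describes it, means each step's state is checkpointed so a failed step can be retried (or an unwanted one replayed) without restarting the whole run. A minimal sketch of the idea, with invented names; this is not LangGraph's actual checkpointer:

```python
# Minimal sketch of checkpointed ("durable") execution: record state
# after each successful step so a failed step can be retried from the
# last checkpoint instead of restarting the run. Invented names; not
# LangGraph's actual checkpointer implementation.

def run_with_checkpoints(steps, state, checkpoints=None, max_retries=2):
    checkpoints = checkpoints if checkpoints is not None else []
    for step in steps:
        for attempt in range(max_retries + 1):
            try:
                state = step(state)
                checkpoints.append((step.__name__, dict(state)))  # checkpoint
                break
            except RuntimeError:
                if attempt == max_retries:
                    raise  # give up; caller can replay from checkpoints[-1]
    return state, checkpoints

calls = {"n": 0}

def model_request(state):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("provider API issue")  # transient failure
    return dict(state, draft="Coffee Friday?")

def send_tool(state):
    return dict(state, sent=True)

final, log = run_with_checkpoints([model_request, send_tool], {"user": "hi"})
print(final["sent"], [name for name, _ in log])  # True ['model_request', 'send_tool']
```

The first model request fails with a pretend provider error, is retried from the same state, and the run completes; the checkpoint log is also what makes "step back and replay from here" possible.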
>> You mentioned that there will be a
couple of middlewares provided that
y'all have worked on. I'm curious if
you can speak to how you decided what
that list of middleware would be.
>> Yeah, great question. So we had a
couple of ideas initially. Human in the
loop and summarization, I think, were
the first two where we said we see so
much need for these. But the way we
decided on the other valuable ones is
really cool. We have what we call an
applied AI team internally at
LangChain, and they are building really
cool things. A couple of projects
they've worked on: one was Open SWE,
which is basically a coding agent built
on top of LangGraph, and most recently
they developed an open source package
that we will continue to promote called
deep agents. Deep agents again uses all
these patterns under the hood, but it
uses a kind of planning middleware.
When you're using your coding agents,
you'll often see that they make to-do
lists for themselves,
>> right?
>> and cross items off, just like we as
humans are better at getting things
done if we make lists and keep track.
So after those foundational middlewares
were decided on, we ported our deep
agents logic and other applied AI
projects over to the middleware system
and tried to figure out what
middlewares they need, because these
are agents with real use cases. So we
added a planning middleware, for
example, and a file system middleware.
Mostly it was porting over our internal
applied AI use cases to see what would
be helpful.
>> Was there one that isn't on the list,
but that you're excited to see the
community contribute?
>> That's a great question. So we have a
couple of guardrail middlewares in the
works. I think "guardrail" is a pretty
buzzy term right now, because of course
LLMs go off the rails, and you need to
control what goes in and what comes
out. But it's hard to cover all the
cases, right? So I'm really interested
to see what specific guardrails folks
are invested in having on either side
of their models. We can do the
classics, like let's not send PII to
the model, or let's make sure the model
isn't saying anything inappropriate, or
bad about competitors, things like
that. But I'm excited to see more
applied cases.
>> Yeah, very cool. Something that I think
the middleware would be interesting
for, and I've seen it as well, is the
dynamic tool filtering part,
particularly for MCP. Probably one of
the biggest complaints people have
about MCP is that when developers
create their MCP servers, they'll build
in all these tools, like converting
straight from an API, and then you have
this overload of tools. So I think
having that middleware for the MCP case
would be really great, being able to
actually filter. I think you mentioned
dynamic tool filtering, or something
dynamic with the tools, right?
>> Yeah, Marlene, you're reading my mind.
So, we have
>> we have sort of two um tool u middleares
in the works.
>> One is I guess we have we have these
pre-built middlewares to make it easy to
kind of plug and chug with uh common
patterns. And then we also make it
pretty easy to um customize middleware.
It's really just a series of hooks. Um,
so that's kind of where we're excited to
see the like community development. Um,
but,
>> Uh, I think we see a common use case for
just wanting to control the set of tools
available at each step. So you can do that
by hooking into the
prepare-model-request step. But
the other thing we've seen, with agents that have access
to a huge number of tools, basically a
number of tools that would result in
context overflow, is being able to
narrow that down. We have this pattern we used to
call the big tool pattern.
>> Yeah. Uh,
>> but it's basically like let's use an LLM
before our main LLM to select the tools
that are going to be appropriate for
some task and then feed that context
into our main LLM. Uh, so right now
we're calling that the LLM tool selector
middleware. If anyone has better names
for that one, we'd love to hear
them.
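The two tool middlewares described here, trimming the tool set per model request and running a cheap selection pass before the main LLM, might look roughly like this. This is a sketch under stated assumptions: the function names and tool shapes are invented, and the keyword-overlap scorer merely stands in for what would really be a selector-LLM call.

```python
# Rough sketches of the two tool-middleware patterns. Names and data
# shapes are illustrative, not LangChain's actual API.

def filter_tools(tools: list[dict], state: dict) -> list[dict]:
    """Per-request filtering: only expose write tools once confirmed."""
    if not state.get("confirmed_writes"):
        return [t for t in tools if t.get("mode") == "read"]
    return tools

def select_tools(task: str, tools: list[dict], k: int = 3) -> list[dict]:
    """'Big tool' / LLM-tool-selector pattern: pick the k tools most
    relevant to the task before handing them to the main model."""
    task_words = set(task.lower().split())

    def score(tool: dict) -> int:
        # Stand-in for the selector LLM: crude keyword overlap.
        return len(task_words & set(tool["description"].lower().split()))

    return sorted(tools, key=score, reverse=True)[:k]
```

The point of the second pattern is context-window economy: instead of flooding the main model with dozens of tool schemas (the MCP overload problem from earlier), only the short list the selector picks ever reaches it.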
>> But yeah, it's very cool.
>> Naming things is hard. No one's gonna
figure that one out. Yeah, honestly it's
one of the toughest things in software.
I think something else that I wanted
to ask as well is: you
talked about deep agents, which I also
think is very interesting, but have
you seen
interesting use cases for agents, or
what would be your favorite use cases
for agents that you've seen? I think the
email use case you shared just now is
really cool. Um, but what other
interesting stuff do you think is is out
there?
>> Yeah, man. There are so many. It's hard.
It's like there are some that I
personally would love. So, like email
assistant, fantastic. Or just kind of
any sort of like personal assistant
along those lines. Maybe like uh content
generation assistants are interesting. I
haven't toyed around with that myself. I
would like to soon. Um, another one
that I'm personally interested in, that's
kind of a small-scale thing, is... um, I love
like athletics and running and that sort
of thing. And I think um I don't know if
either of you are Strava users, but uh I
would love to give an agent access to my
Strava account and have it
make plans for me. Um, very cool.
>> So those are kind of cool like personal
assistant ones that I think are cool.
But in the broader scale um I don't
know. I think like agents applied to
health tech are very interesting.
>> It seems like a place where like if you
can give doctors and nurses etc like
more time to focus on patients then that
is exciting. But would love to hear what
you guys think about kind of promising
or exciting use cases.
>> Okay.
>> Go ahead Marlene.
>> I was just going to say the Strava one
is cool. And I would love an
agent that just posts and does stuff on
my Strava, because I don't want to look at
Strava; I feel bad. I
feel like I'm competing with everyone
on Strava. So I just want the agent to
post updates and like people's stuff, but I don't
want to actually be there. So I think
that would be kind of cool to have. Um,
but yeah, I don't know.
Have you seen anything interesting like
recently like agent use cases?
>> Um, I saw one yesterday. I was literally
watching a YouTube video, and I was like, I
need to go do this for myself today. So
after this I'm going to go play around
with that. Uh, do either of you use
Obsidian, or have you heard of Obsidian,
the note-taking app?
>> I've heard of Obsidian. I don't use it.
No. Yeah.
>> So it's essentially an app which saves your
notes in Markdown, right? And we know
that LLMs love Markdown. So you kind of
centralize your notes in one place and
then depending on the task you're trying
to accomplish, you'll have a specific
agent tailored to accomplishing that
task. Like, for my
particular use case, I like using
Obsidian to write out talk tracks
for presentations as I'm putting
slides together. So I would like
something that can help me figure out:
does the text on
the slide, plus the talk track that I'm
saying, make sense? Are they
cohesive? And then I could have one be
like a researcher: "Okay, go look at all the
information that I've brain-dumped
into it; what do you think is
most relevant to put in here?"
Stuff like that would be so awesome. And
they were using, I think, Claude Code, so
the CLI, and it
was all in the terminal. It was sick:
quick commands, all the Markdown is there,
creating and editing files, stuff like
that. So, shameless plug: we have the GitHub
Copilot CLI, and that's out now. So I'm
definitely going to try to see how I can
do stuff with that. Yeah. And
then the other agent thing that I think
could be interesting: I do
a lot of stuff with developers who
primarily speak Spanish, and
their codebase, because the
tech world runs on English, the code
is in English, but a lot of the
comments and the documentation are
in Spanish, and a lot of their
interactions with these tools are in
Spanish. And it's not a one-to-one
comparison with English; it's like
these tools are going to understand you
better if you can interact with them in English. But
the language barrier
is a thing, right? So perhaps there could
be an agent that
>> and I know our lead, Anthony, was
talking about this as well: is
there something we
could put in the middle, between the person's
native language and the tool,
so that whatever AI is on the other side has
a better chance at
really understanding what they're saying?
Like, how do we improve that
language barrier between devs speaking in
their native language and these tools
actually generating what we
expect them to do? Um, so I think stuff
like that could be
really, really cool. That's a really cool use
case. I feel like that also sounds like
something where having evals
and scoring for how good that
translation middleware is would
help, you know. But, um, that's a really cool use
case.
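That "something in the middle" idea, a translation layer sitting between a developer's native language and an English-centric tool, could be sketched like this. Everything here is hypothetical: the dictionary-based translator is a mock standing in for a real translation model, and the names are invented for the example.

```python
# Hypothetical sketch of a translation layer between a dev's native
# language and an English-centric coding tool. The dict lookup is a
# mock; a real version would call a translation model instead.

MOCK_ES_EN = {"crea": "create", "una": "a", "función": "function",
              "que": "that", "sume": "adds", "dos": "two",
              "números": "numbers"}

def translate_to_english(prompt: str) -> str:
    """Mock word-by-word translation; unknown words pass through."""
    return " ".join(MOCK_ES_EN.get(w, w) for w in prompt.lower().split())

def translating_layer(prompt: str, run_tool) -> str:
    """Translate the prompt, then hand the English text to the tool."""
    return run_tool(translate_to_english(prompt))
```

As noted in the conversation, the interesting part would be evaluating this layer: scoring whether the translated prompt actually makes the downstream tool produce what the developer expected.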
>> Yeah, definitely. We had a couple of
sort of open-source maintainer
questions we wanted to ask before we
end this
>> great
>> this chat here. Um,
I've seen this a lot on social
media: people talking about it,
or complaining, you know, there's
all sorts of opinions on this, right? I
don't want to criticize anyone,
because I know open source
maintenance is hard, right? Um,
but there's this thing about people
submitting PRs where the majority of the
work is done, like it's so obvious that
it is done, with some kind of
AI tool. Curious, as a maintainer,
what are your thoughts on
that?
>> Yeah, so my thoughts have really
evolved here. Like, a year to a year and
a half ago, I was pretty skeptical,
maybe even borderline annoyed, by
very AI-generated PRs. Um, but my
general open source philosophy is like
this kill them with kindness approach
where like I like to assume that people
have good intent and like want to learn
and that sort of thing. And I think with
um coding agents getting much better, I
appreciate anyone who's taking the time
to like look at an open source
repository and try to make
contributions. I think the thing that I
appreciate the most is like if you want
to submit a PR that has like clearly
used AI, that's okay. But I hope that
you're like dedicated and engaged in
like making it um you know getting it
all the way across the line, right? like
maybe the AI agent can do a good job for
the first 80% and then hopefully you're
engaged for that last 20%. I think if
you're not engaged for that 20%
of iterations, then it shows. But it's getting
better. So
>> Yeah, I wonder if you've thought about
whether there are any guidelines, like
updating your contributor guidelines,
as to things like that. And again,
I'm just asking these kind of on
the spot, out of my own curiosity, so
maybe you haven't thought about it. But
yeah, I just think of these as
tools. I mean, okay, here's my real
opinion.
Everyone's using AI to code. Like,
there's not going to be a tell: you
can update the docs, or the comments, to
make it not look like AI,
or whatever. Fewer emojis. Don't use the
line-by-line
>> like, don't... no, like, don't use the, what is it,
what are they called, the double dash
>> oh yeah, the em dashes. Em dashes.
>> Yeah. Don't use that. And then, okay,
there's no real way to tell, because
the reality is we're all using it to some
extent. So what even is an AI-generated
PR, you know? So that's
my real opinion. And then I think a
lot of it is on us, you know,
people who are core
maintainers, or even leads
on a team: your goal is
now to make sure that you have in place
tools, or guidelines, or
CI stuff, whatever it is, to
test everything, to make sure...
this code quality
issue has been a thing forever, right?
It's not like, oh, suddenly we have to worry
about it; we've always had to worry
about it. So I'm curious whether you're thinking
about how you can update those, or
add things like that, for
these contributions.
>> Yeah. In some ways it feels like
analogous to the challenges that like
the education industry is seeing right
where it's like okay students are going
to use AI to like help them with things
and you might as well like acknowledge
that and then like set guidelines in
place. Um, so we're releasing a new
rollout of our docs, with
consolidated docs for all of our open
source libraries, which we're excited
about, and which will come with a new
contribution guide.
>> Oh, awesome.
>> I actually don't think that we've
updated it to reflect kind of
guidance on this, but we definitely
should. So I will do that. And, um,
yeah, I mean, I think like leaning into
it. Uh this even ties back to our like
initial conversation about like what's a
great way to like get started with open
source as a like new and young and
developing engineer. And I think it's
important to realize that the
differentiator for up-and-coming
engineers is not going to be
the very, very technical stuff. Uh, I
think it's going to be like soft skills
and strategy and like ideas um and
things that are not just like things
that AI can do for us.
>> No, not just lots of code.
>> And so I think like you know being a
good open source contributor now
probably means like using coding agents
and then also like writing up really
good docs and engaging with issues in ways
that are like helpful for maintainers um
and things like that.
Yeah, I think the one thing that I will
say, I'll add on to that, is that I think
we see some of these patterns already. So, for
example, right now it's
October, which means Hacktoberfest is going
on.
>> I always see maintainers complaining
about Hacktoberfest because the incentive
is not correct. Like, the incentive is:
I just want to get a PR in, even if
it means I'm just making some trivial edit,
moving a letter to the side or
something like that, you know. And
those sorts of PRs, you can see the
person isn't putting much effort into it;
they're just doing it to get a PR
in. So I think the same is there with
AI. Even if you're using AI,
the human element needs
to be there. We should be able to see
that you've put some thought into what
you're putting into the
codebase, especially for larger code bases. It's
really important. So I would say yeah I
agree with the idea of updating the
contribution guide. Um and and I think
Sydney as well like treating people with
kindness assuming good intent unless we
can really see minimal effort was put in
here and there's a rocket emoji and like
I think yeah
>> the rocket emojis
>> it's
>> I love the rocket emoji
>> I think there's a good way to still
be kind to people. Like, I think some open
source maintainers are like, "this is AI
generated, how can you do this?"
You know, and I feel like it
can be a little better, like
>> Yeah.
>> Haha. Like, you know, "a little too many
emojis on this one. We're going to
close this now." You know, like there's a
better,
kinder way to do it.
>> Yeah.
>> Yeah.
>> Okay. So, we're going to expect the new
LangChain contribution guide. No rocket emojis.
No.
>> I didn't say no rocket emojis. I think
>> Awesome. This has been
great. I don't know, Marlene, if you
have any final questions.
>> I don't think we do. We're super excited
about V1. When can people expect to use
V1? Can we say a date,
or a time range, for when people can
expect it?
>> Yes. So, we are just a little under two
weeks away from when we launch. So, um
>> Amazing.
>> Yeah. The week of the 20th is when
we're expecting to launch, but uh alphas
are available now, so we'd love if
people would go and try them out. Um,
>> yeah,
>> sweet. We'll have links in the
description for everyone to go check
that out. Sydney, thank you so much for
hanging out with us and uh we would love
to have you back on sometime in the
future so we can talk about more stuff.
>> That would be lovely. It was so
great to chat with you guys.
>> All right, thanks everyone.
>> Thanks everyone. Bye.
What is LangChain V1, and why are so many developers talking about it? In this episode, Gwen and Marlene from Microsoft's Python Developer Advocacy team hang out with Sydney Runkle, open source dev @LangChainAI, to talk about what's new in LangChain V1, how LangChain actually works, and what's next for open source AI tools. It's a fun, hour-long chat packed with insights, demos, and good vibes, perfect for Python devs building with LLMs.

🔎 Chapters:
00:00 Introduction
00:27 Welcome Sydney Runkle
01:18 Sydney's open source journey
06:32 LangChain and LangGraph overview
08:59 What is LangSmith
09:50 Demo
29:34 Q&A
46:59 Wrap

🔗 Links:
https://www.langchain.com/
https://aka.ms/LangChain/Blog

🎙️ Featuring:
Gwyneth Peña-Siguenza: https://x.com/madebygps
Marlene Mhangami: https://x.com/marlene_zw
Sydney Runkle: https://x.com/sydneyrunkle

📲 Follow us on social:
Blog - https://aka.ms/azuredevelopers/blog
Twitter - https://aka.ms/azuredevelopers/twitter
LinkedIn - https://aka.ms/azuredevelopers/linkedin
Twitch - https://aka.ms/azuredevelopers/twitch

#azuredeveloper #azure