LangGraph is an extension of LangChain
that builds on top of LangChain's
foundation to provide additional features.
And you might be wondering: wait, why
not just use LangChain? Why are you
making this more complicated for me?
Well, that's good, because complicated is
the perfect word to capture the essence
of understanding the true difference
between LangChain and LangGraph. So you
might be asking the question: what's the
true difference between LangChain and
LangGraph? If you're building a simple
chatbot that answers customers' questions
based on a company's policies, LangChain
will most likely suffice in getting the
job done, since it's built for simple,
deterministic tasks. However, some
business requirements go way beyond a
simple company chatbot. Let's say you're
asked to build deep research assistant
software for your company that helps you
go through a large swath of information
gathered from various sources. In this
case, the use case is a lot more
complicated than a chatbot, and using
LangGraph will start to make more and
more sense. One way to look at it is that the
threshold for changing from LangChain
to LangGraph really comes down to a
component called StateGraph.
Essentially, when you use StateGraph, you
have the ability to add what are called
nodes and edges. A node is an individual
unit of computation (think of a
function that you can call), and an edge
is a transition between these nodes that
can either pass through or be
conditional. So let's go back to the
deep research assistant as an example to
understand the difference with a little
bit more granularity. Let's say the
business requirements for the deep
research assistant were to first browse
the web, then find relevant details
about a given topic, and let's say the
research requirement was about Tesla's
earnings call. Then the assistant
needs to read and comprehend all the news
sources, from blogs, forums, and research
papers to social media. And finally,
it must decide whether the information contained is
trustworthy and useful; each source
of information needs to score above 75% in
order for it to be deemed
trustworthy. The final step for the
assistant is to then gather all the
credible data and build a report. In
traditional software development, you
have to write code that: first,
fetches a set of links using a search
engine API; second, loops through these
links manually; third, scrapes the
content and feeds it to a large language
model; fourth, evaluates the score for each
source; fifth, checks the score and
only uses sources that surpass 75%; and
sixth, analyzes and stores these facts in
a report. You not only have to write all
of this code individually, but also
orchestrate the sequence in which it
runs in order to maintain it.
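The six steps above can be sketched as plain, hand-orchestrated Python. Everything here is a hypothetical stub: the search, scrape, and scoring functions are stand-ins for a real search API, a scraper, and an LLM call, not any actual library.

```python
# A hand-rolled version of the six-step pipeline. All helpers are
# hypothetical stubs standing in for a search API, a scraper, and an LLM.

def fetch_links(topic):
    # 1. Fetch a set of links using a search engine API (stubbed).
    return ["https://example.com/a", "https://example.com/b"]

def scrape(url):
    # 3. Scrape the content of a page (stubbed).
    return f"content from {url}"

def score_source(content):
    # 4. Ask an LLM to score trustworthiness 0-100 (stubbed).
    return 80 if content.endswith("a") else 60

def run_pipeline(topic):
    facts = []
    for url in fetch_links(topic):          # 2. Loop through links manually.
        content = scrape(url)
        score = score_source(content)
        if score > 75:                      # 5. Keep only sources above 75%.
            facts.append(content)
    return "Report:\n" + "\n".join(facts)   # 6. Store the facts in a report.

report = run_pipeline("Tesla's earnings call")
```

Notice that the sequencing logic (the loop, the threshold check, the ordering of steps) is all your responsibility to write and maintain.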
Now, with LangGraph, the steps look a
little more streamlined. The entire
process can be run using a graph, where
each node is responsible for a very
specific task and each edge determines
the flow of execution. So in our
case we need to create nodes for the
following tasks: first, a node to search
and gather sources; second, a node to
scrape and clean content; third, a node
to evaluate trustworthiness using an
LLM; fourth, a node to extract factual
statements from the sources; and fifth, a
node that generates a report. Once
all these nodes and edges are configured
and compiled, LangGraph will
orchestrate them by executing them based
on how they're configured. So for a deep
research assistant, the graph will look
something like this: a
start node that serves as the entry
point, all the nodes and edges that
do the individual tasks, and finally an end
node that terminates the workflow. Now
what makes LangGraph special is what's
called StateGraph, meaning the nodes all have
a shared state. A state essentially
serves as persistent memory for the
workflow, storing pertinent information
at all the different parts of the workflow.
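The shared state the narration describes next is conventionally declared as a typed dictionary. A minimal sketch, with field names taken from the transcript (the exact class name and types here are an assumption):

```python
from typing import Optional, TypedDict

class ResearchState(TypedDict):
    # Shared state that every node in the workflow can read and update.
    topic: str
    remaining_urls: list[str]
    current_url: Optional[str]
    content: Optional[str]
    current_score: Optional[int]
    facts: list[str]

# Initial state before any node has run.
state: ResearchState = {
    "topic": "Tesla's earnings call",
    "remaining_urls": [],
    "current_url": None,
    "content": None,
    "current_score": None,
    "facts": [],
}
```

Each node receives this dictionary and returns updates to it, which is how information persists across the whole workflow.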
So in our case of the deep research
assistant, the state might look something
like this: a class ResearchState with a topic,
which is a string; remaining URLs, which is
a list of strings; current URL, which is
an optional string; content, which is an
optional string; current score, which is
an optional integer; and facts, which is a
list of strings. So now that we have the
state and the graph, let's actually see
how LangGraph would execute them step
by step. The topic in this case will be
Tesla's earnings call. So the first node,
which gathers news sources and sites, looks
at the topic state and gathers
information about Tesla's earnings call.
It'll then populate a state variable
called remaining URLs with all the
results it got. The next node will scrape and
clean the content from each URL and
populate the state variables called
current URL and content
so that they can be further
processed by a later node. The next node
will evaluate the trustworthiness of the
information that was gathered, make
sure it scores it properly, and append
the score to the state variable called current
score. Once all the URLs are scraped
and scored properly, it will then go to
the next node, which extracts factual
statements from all these sources. And
finally, the last node will generate a
report based on the facts that are held
within the state graph. So as you can
see, StateGraph plays a critical role in
persisting information within the
workflow, and it's an important piece in
orchestrating the workflow. It's
through this graph nature of
LangGraph that you get additional
features like loops, conditional
branching, and state management, which
help you build a more complicated
application than what LangChain might
offer out of the box. As enterprise
adoption of agentic software grows,
tools like LangChain are a natural
progression toward workflow automation,
and understanding when and how to use
LangGraph can help you solve very
interesting problems without having to
write unnecessary code. LangGraph really
just helps you focus on architecture and
problem solving rather than on how to
implement the orchestration and how each
component should run. Now that we've
covered the conceptual elements of
LangGraph, let's look at what it looks
like on a practical level. To better
understand this, we can look at
this lab, specifically geared towards how
to use LangGraph. All right, now let's do
some hands-on work and actually build
this research assistant together. Go
ahead and copy the installation command
from the first section. We're setting up
our complete LangGraph stack here. This
includes LangGraph itself, LangChain core
packages, state management tools, and,
most importantly, DuckDuckGo for free web
searching. No API keys needed for that
one, which is perfect for our lab. Run
the installation. And while that's
executing, notice how we're also
installing Beautiful Soup for web
scraping and the OpenAI integration
through our proxy server. Once your
installation completes, we'll explore the
fundamental difference between
sequential chains and stateful graphs.
On the right side of your screen, you
have VS Code, where we'll be reviewing
and running our code. Navigate to the
/root/code directory and you'll see
folders for each task we'll be working
through. Let's start with understanding
sequential versus stateful processing.
Open the task 2 folder. You'll see
three Python files there. Let's look at
sequential_chain.py first. This
demonstrates a traditional LangChain
approach. Notice how it creates an LLM
instance using ChatOpenAI with our
proxy configuration. The chain processes
three steps independently. First, it
greets a person named Alice. Then it
says goodbye. But look closely at line
36: the farewell prompt doesn't even
receive the name. Finally, it tests
memory by asking what the person's name
was. And predictably, it has no idea,
because each step is completely
independent. Now open up stateful_graph
in the same folder. This is where things
get interesting. Now look at how it
defines a conversation state class using
TypedDict, starting at line 13.
This state structure persists throughout
the workflow. The graph has three nodes:
greet person, say farewell, and check
memory. Notice how the farewell function
on line 42 can actually use the name
from state. It says goodbye to Alice by
name because the state is preserved.
Let's run these to see the difference in
action. In your terminal, execute the
first script by running the Python
command and specifying the
sequential_chain.py file. Watch the output: it
processes each step independently. The
greeting mentions Alice, but the
farewell is generic, and when asked
about the name, it has no memory. Now
run the stateful graph script. See how
different this is: the farewell
specifically mentions Alice because the
state was preserved, and the memory test
confirms the name is still accessible.
Finally, run the comparison script to
see both approaches side by side.
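The contrast the two scripts demonstrate can be reduced to a few lines of plain Python. This is a hedged sketch, not the lab code itself: the greetings are hard-coded rather than LLM-generated, and a plain dict stands in for LangGraph's shared state.

```python
# Sequential approach: each step runs independently, seeing only its input.
def greet(name):
    return f"Hello, {name}!"

def farewell_sequential():
    return "Goodbye!"   # No access to the name from the earlier step.

# Stateful approach: every node reads and updates one shared state dict.
def greet_node(state):
    return {**state, "greeting": f"Hello, {state['name']}!"}

def farewell_node(state):
    # The name is still in state, so the farewell can use it.
    return {**state, "farewell": f"Goodbye, {state['name']}!"}

state = farewell_node(greet_node({"name": "Alice"}))
```

In the sequential version the farewell stays generic; in the stateful version it mentions Alice, because the state dict carried the name forward.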
Now, let's dive into StateGraph, which
is really the heart of LangGraph. Open
the task 3 folder and look at the state
graph demo. This shopping cart example
perfectly illustrates state persistence
and accumulation. The code defines a
cart state with items, total, and status
fields. Watch how each node doesn't
replace the state, but adds to it. The
add_apple function, starting at line 18, takes
the existing items list and adds an apple
to it. It doesn't replace the list; it
creates a new list with the existing
items plus the apple. The add_banana
function does the same thing, adding to the
accumulated state. Now run the demo.
Watch the output carefully. You'll see
the state grow at each step. First, the
cart is empty. Then it has an apple, with
$5 in total. Then both an apple and a banana,
with an $8 total. And finally, the
checkout adds a paid status. The state
persisted and accumulated throughout the
entire workflow.
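The accumulation pattern the demo narrates can be sketched with a plain dict standing in for the cart state. The prices follow the narrated totals ($5 apple, $3 banana); the function names mirror the demo but the bodies are an assumed reconstruction.

```python
def add_apple(state):
    # Build new values instead of mutating the old state in place.
    return {**state, "items": state["items"] + ["apple"],
            "total": state["total"] + 5}

def add_banana(state):
    return {**state, "items": state["items"] + ["banana"],
            "total": state["total"] + 3}

def checkout(state):
    return {**state, "status": "paid"}

cart = {"items": [], "total": 0, "status": "open"}
cart = checkout(add_banana(add_apple(cart)))
# cart -> {'items': ['apple', 'banana'], 'total': 8, 'status': 'paid'}
```

Each step returns a new, larger state rather than overwriting the previous one, which is exactly the accumulation behavior you see in the demo output.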
Let's explore nodes in detail. Open task
4 and examine the nodes demo. The code
demonstrates four different node types.
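The four node types can be sketched as functions over a shared state dict. In this sketch the LLM and tool calls are stubbed with canned strings; in the real demo they would hit a language model and a web search.

```python
def uppercase_node(state):
    # Simple function node: just transforms data.
    return {**state, "text": state["text"].upper()}

def llm_node(state):
    # LLM-powered node (the model call is stubbed here).
    return {**state, "summary": f"Summary of: {state['text']}"}

def tool_node(state):
    # Tool-using node (the web search is stubbed here).
    return {**state, "results": [f"result for {state['text']}"]}

def conditional_node(state):
    # Conditional node: inspects state and names the next step.
    return "summarize" if len(state["text"]) > 5 else "search"

state = tool_node(llm_node(uppercase_node({"text": "hello world"})))
```

Whatever their job, all four follow the same contract: state in, updates (or a routing decision) out.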
Simple function nodes just transform
data; they might uppercase text or
perform calculations. LLM-powered nodes
use language models to generate content
or make decisions. Tool-using nodes
reach out to external services like web
searches or databases. Conditional nodes
examine the state and decide where the
workflow should go next. Now run the
nodes demo. Each node serves a
specific purpose, but they all follow
the same pattern: receive state, process
it according to their function, and
return updates. They're like
specialized team members, each with a
unique skill. For edges and routing, open
task 5 and look at the edge routing demo.
This demonstrates how to control the
workflow with conditional routing. The
router function examines the state and
makes decisions about which path to
take. Look for the add_conditional_edges
call; this is where the intelligence
happens. Based on the router's decision,
the workflow follows different paths.
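A minimal sketch of the routing pattern, with a hand-rolled dispatch dict standing in for what LangGraph's add_conditional_edges does for you (the branch names and the 75% threshold are illustrative assumptions):

```python
def router(state):
    # Examine the state and return the name of the next node.
    return "high_quality" if state["score"] > 75 else "low_quality"

def high_quality(state):
    return {**state, "path": "accepted"}

def low_quality(state):
    return {**state, "path": "rejected"}

branches = {"high_quality": high_quality, "low_quality": low_quality}

def run(state):
    # Dispatch on the router's answer, like a conditional edge would.
    return branches[router(state)](state)

a = run({"score": 90})   # routed to high_quality
b = run({"score": 40})   # routed to low_quality
```

The router never does the work itself; it only names the next node, and the graph follows that decision.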
Now, let's run the demo. Watch how the
workflow chooses different paths based
on conditions. This conditional routing
transforms a simple pipeline into an
intelligent system that can adapt to
different situations. Now, let's explore
loops and iterations. Open task 6 and
examine the loops demo. The key here is the
should_continue function. It checks both
an iteration counter and the quality
score to decide whether to loop back or
proceed. This prevents infinite loops
while allowing iterative refinement.
It's like doing research yourself: you
search, evaluate what you found, and if it's
not good enough, you search again with
better keywords. Now, let's run the
loops demo. Notice how it
searches, evaluates quality, and loops
back if needed, up to a maximum of three
iterations. Each iteration can be
different: the search query can be
refined based on what was learned
previously.
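The loop guard just described can be sketched in plain Python. The quality numbers here are stubbed so that each refined search scores better than the last; in LangGraph the loop-back would be expressed as a conditional edge rather than a while loop.

```python
MAX_ITERATIONS = 3

def should_continue(state):
    # Loop back only while quality is low AND we haven't hit the cap.
    return state["quality"] < 75 and state["iteration"] < MAX_ITERATIONS

def search_and_evaluate(state):
    # Stub: pretend each refined search scores better than the last.
    iteration = state["iteration"] + 1
    return {"iteration": iteration, "quality": 40 + 20 * iteration}

state = {"iteration": 0, "quality": 0}
while should_continue(state):
    state = search_and_evaluate(state)
```

The two conditions together give you iterative refinement with a hard safety cap: the loop exits as soon as quality is good enough, and can never run more than three times regardless.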
Tool integration is what connects your
workflow to the real world. Open task 7
and look at the tools demo. The search
tool node function shows how to integrate
DuckDuckGo. It takes the query from state,
searches the web using DDGS, processes
the results to extract relevant
information, and adds them back to state.
The beauty is that, to the workflow, this
tool node looks just like any other
node. Watch how seamlessly external web
search is integrated into the workflow.
No special handling needed; it's just
another node doing its job.
For memory and state accumulation, open
task 8 and examine the memory demo. This
demonstrates how state builds knowledge
over time. The memory state class
defines lists that accumulate data:
questions, search results, key points.
Each node adds to these lists rather than
replacing them. Now look at the
accumulator pattern: it reads existing
data, generates new data, combines them,
and returns the accumulated result. Now
let's run the memory demo. See how the
knowledge builds up step by step.
Questions are generated and stored.
Search results are added without
removing the questions. Key points are
extracted and added alongside the
existing data. And finally, everything
is synthesized into a report while
preserving all the intermediate data.
Finally, let's look at the complete
research assistant. Open task 9 and
examine the research assistant file.
This brings everything together. The
research state includes all the fields
we need: topic, questions, search
results, findings, iteration count,
quality score, and the final report. The
workflow includes nodes for each step of
the research process, conditional
edges for quality-based routing, and a
loop that allows iterative refinement.
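Putting the pieces together, here is a compressed skeleton of such a workflow. All node bodies are stubs and the quality numbers are invented; a real version would wire the same functions into a LangGraph StateGraph instead of the hand-rolled loop at the bottom.

```python
MAX_ITERATIONS = 3

def generate_questions(state):
    return {**state, "questions": [f"What about {state['topic']}?"]}

def search(state):
    # Stub: each pass adds one finding and bumps the iteration counter.
    it = state["iteration"] + 1
    return {**state, "iteration": it,
            "findings": state["findings"] + [f"finding {it}"]}

def evaluate(state):
    # Stub: quality grows with each refined pass over the sources.
    return {**state, "quality": 50 + 15 * state["iteration"]}

def synthesize(state):
    return {**state, "report": f"Report on {state['topic']}: "
            + "; ".join(state["findings"])}

state = generate_questions({"topic": "Tesla's earnings call",
                            "questions": [], "findings": [],
                            "iteration": 0, "quality": 0, "report": ""})
while state["quality"] < 75 and state["iteration"] < MAX_ITERATIONS:
    state = evaluate(search(state))
state = synthesize(state)
```

Every concept from the earlier tasks appears here: a shared state, plain function nodes, a quality-based loop guard, and a final synthesis step that reads everything the state accumulated.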
There's also a Streamlit app file that
provides an interactive web interface.
If you want to see the workflow
visualized in real time, you can run
that command. But let's run the command-line
version first. Give it a
research topic and watch the entire
workflow execute. You'll see it generate
multiple research questions based on the
topic, search for information on each
question, evaluate the quality of what
it found, potentially loop back with
refined searches if needed, and finally
synthesize everything into a
comprehensive report. What you've built
here is fundamentally different from a
simple chatbot. This assistant can adapt
its approach based on what it discovers;
it also refines its searches based on
initial results and builds comprehensive
knowledge from multiple sources. Each
component we explored (StateGraph for
workflow management, nodes for
processing, edges for routing, loops for
refinement, tools for external data, and
memory for accumulating knowledge) plays
a crucial role in creating this
intelligent system. The validation
happening behind the scenes confirms
each step is working correctly. You've
successfully built a production-ready AI
research assistant using LangGraph's
powerful workflow capabilities.
🧪 Try LangGraph Hands-On Labs for Free - https://kode.wiki/41WTH62

Learn how LangGraph transforms simple LangChain chatbots into powerful AI agents with StateGraph, loops, and conditional workflows! In this comprehensive video, we'll show you exactly how to build a production-ready AI Research Assistant that searches the web, evaluates trustworthiness, extracts facts, and generates intelligent reports using nodes, edges, and shared state management. Ready to build your own stateful AI workflows? Access our FREE interactive labs where you can experiment with real LangGraph implementations, create your own StateGraph architectures, and see agentic AI workflows in action!

📚 What You'll Learn:
• LangChain vs LangGraph: When to use each framework
• How StateGraph works with nodes, edges, and persistent memory
• State management patterns for production AI agents
• LangGraph Workflow Demo

⏱️ Timestamps:
00:00 - Introduction to LangGraph
00:20 - LangChain vs LangGraph: What's the real difference?
01:19 - Deep Research Assistant Use Case Example
01:53 - Traditional approach pain points
02:21 - Orchestration in LangGraph
03:08 - What is StateGraph?
03:39 - LangGraph Workflow
04:42 - LangGraph Adoption in Business Requirements
05:16 - Demo - Installing LangGraph Ecosystem
05:50 - Demo - Sequential Workflow vs Stateful Workflow
06:29 - Demo - Chunking Strategy and Embedding
07:37 - Demo - StateGraph
08:29 - Demo - Nodes, Edges and Routing
09:38 - Demo - Loops and Iterations
10:15 - Demo - Tool Integration
10:45 - Demo - Memory and State
11:27 - Demo - Build Your Own Research Assistant
12:52 - Conclusion & Free Lab Access

🔔 Subscribe for more AI tutorials!

#LangGraph #LangChain #StateGraph #AIagents #AgenticAI #AIworkflows #AIautomation #LLM #OpenAI #Python #AutonomousAgents #workflowautomation #kodekloud