Microsoft recently released its Agent Framework and it's been gaining a lot of popularity. This is an absolute beginner's guide to building AI agents with the MAF framework. So if you haven't built an AI agent before, don't worry. We'll cover all the basics of LLMs and AI agents before we dive into MAF. We'll start with the basics of AI agents. We'll understand what agentic systems are and where frameworks like MAF fit in. We'll then set up an environment with MAF. Then we'll build the first AI agent. We'll then move on to function tools as AI functions. And
as always, this is not just a theory
course. Every concept is accompanied by
a hands-on lab that opens up right in
your browser. We give you partially
written or broken code and ask you to
fix it and we'll validate and give you
feedback instantaneously. This approach
of challenge-based learning helps you
learn more efficiently. I'll let you
know when you're ready to start your
very first lab challenge. Let's begin
with the first topic. So, what is an
agent really? Let's start with something that we already know. If you were to ask ChatGPT a question, it gets back to you with a response instantly. The "Chat" in ChatGPT is the application; the "GPT" is the LLM, or large language model, from OpenAI. There are so many other models out there, like Gemini from Google, Claude from Anthropic, etc. You ask a large language model a question and it responds with an answer. As simple as that. A chat application that simply interacts with the LLM and returns a response is not an AI agent.
That's just a chatbot. What if the
application could understand the user's
requirement, break down inputs and
identify the user's intent, create a
plan to solve the user's problem, and
interact with third party tools and take
action on behalf of that user? What if
this application could think, decide,
and act? That's when it becomes an AI
agent. The thinking part is the ability to reason through an LLM given some context. The deciding part is the ability to break the user's query into a multi-step execution plan. And the act part is where the AI agent interacts with the outside world through APIs, function calls, tools, or MCP, as we'll discover later, and takes actions on our behalf, like booking a meeting, reserving a hotel, or even booking a flight ticket. So as you start
adding more functionality to your agent, it would start getting bloated with multiple integrations: too much data exchange with APIs, and the requirement for your agent to remember certain details. You want to log all interactions so you can monitor things. You also want to be able to debug issues easily when things break. All of that creates a lot of code, which makes the code harder to maintain, and your code ends up looking like this.
This is where Microsoft Agent Framework comes in. It helps developers structure and orchestrate these agents, just like how React helps structure UI logic, if you're familiar with front-end frameworks. It provides structure for reasoning and planning. It provides reusable tools and skills to interact with third parties. It provides capabilities to manage memory and context. It provides support for observability, logging, and debugging. And it helps you define a clear life cycle
for your agents. So an agent built with a complex piece of code that looks like this would look as simple as this with an agent framework like MAF. So to
summarize, what really is the difference between a chatbot and an agent? A chatbot is basically a question-in, answer-out system. It responds. It doesn't think
ahead. It doesn't remember what happened
2 minutes ago. It doesn't plan. It
doesn't decide. And it definitely
doesn't take actions on your behalf.
It's reactive, not proactive. Agents, on
the other hand, are like giving your
chatbot a brain, a memory, and hands.
They can think using LLM reasoning. They
can remember past steps. They can plan a
multi-step workflow. They can use tools
to interact with the outside world. They
can take actions instead of just, you
know, giving plain answers. They can
work proactively to achieve the goal you
gave them. All right. So, this section is about getting started with MAF and getting the environment setup out of the way. But before you do so, it's important to answer one specific question: what kind of an agent do you need? Because MAF offers different subclasses of agents which you might be interested in using, it's important to know what each of these classes offers. So we'll briefly run through them, then dive straight into installing the right Python pip packages, and once we're done we'll set up the environment variables and build our very first agent. All right.
So agents can be of many types, right? You can have a general-purpose agent, you can have a chat-focused agent, or maybe you just need an agent which leverages some AI services within the cloud. In Microsoft's case, Azure offers a lot of different AI and machine learning services which you can leverage. In simple words, for general-purpose I'd say: maybe I just want to build an agent that answers not-so-specific questions. When I say chat-focused, maybe I just need an agent that automates a specific chat workflow. And for Azure AI services, maybe I just need an endpoint to my specific Azure AI service. So knowing your use case is crucial. This is the structure that MAF offers. AIAgent is the main base class and everything else follows from it. ChatAgent is specialized for conversation flows, and AzureAIAgent is optimized to run against Azure AI services. ChatAgent is something that we'll be using extensively in our upcoming sections. All right. So the very next
thing we need to do is installation and setup for Python. Firstly, we'll create a virtual environment. In Python, a virtual environment is like a sandbox: a bubble in which you'll install all your dependencies and packages. Once this virtual environment is activated, you'll go ahead and install the pip package for the agent framework. Just make sure that you have Python 3.10 or above installed. After that, we need to set up our environment variables. The environment variables are specific to what kind of model you're using. In my case, I'm using GPT-4o mini and I have an OpenAI API key associated with that. But feel free to use your own models, and in a bit I'll also introduce you to one of KodeKloud's key products, called Kode Key: you have one key, and against that one key you can use multiple models. So let's get into our terminal and let's get started. All
right, so I'm in my terminal. The very first thing I need to do is create a virtual environment. Next, I need to activate it. Just to make sure, I check my Python version, which is 3.13.9. We're good to go. And let's go ahead and install the agent framework package.
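By the way, the .env loading that's coming up is simple enough to sketch with the standard library alone. This hand-rolled loader is just for illustration; in the real project a dotenv-style helper does this, and the key names here are from my setup:

```python
import os
import tempfile

def load_env(path: str) -> None:
    """Minimal .env loader: copy KEY=VALUE lines into os.environ.

    Note: this overrides existing values, unlike python-dotenv's default.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# Demo with a throwaway .env file; in the real project the file sits
# next to the code and holds your actual keys.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("OPENAI_API_KEY=sk-demo\nMODEL_ID=gpt-4o-mini\n")
load_env(f.name)
print(os.environ["MODEL_ID"])
```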
And we are good to go. So let me introduce you to the codebase. Here we have a couple of files. The virtual environment folder is Python-generated; I've got nothing to do with it. These are a bunch of folders which we'll be exploring in the upcoming sections. Here's my .env file; it has all my keys. The Python version file is again Python-generated. And we should have a requirements.txt, because whenever you're building a project it's really good practice to put all your dependencies in this file with their versions, so that you don't have to install packages haphazardly every single time. So this is what we have, and let's get started with our very first hello world agent using MAF. All
right. So let's go ahead and build our very first agent using MAF. Here are the contents for this section. We'll start with a basic hello world agent, very simplistic stuff, and it'll be more like an explainer where I show how you can provide a specific input to a particular agent and how you can fetch information out of that agent. Lastly, we'll conclude with a very important debate: agents with memory versus agents without memory. But before that, let's recap the simplest explanation of an agent. I hope you remember this diagram from section one. An agent is an entity that can think, decide, and act. But formally this becomes context, memory, and tools. So an agent is
an entity that has a prior context, and when I say context I mean knowledge. It can be internal knowledge through instructions or external knowledge: for example, a database or a bunch of files. Memory is something that you choose to save, and most probably it's going to be your conversation threads. And tools, as we also talked about briefly in the very first section, are like connectors to the outside world. They help you connect to third-party APIs or third-party tools that you can leverage. So that's the goal for an agent, and here's a scenario for you.
So we're going to build a flight planner agent, and here's a mock-up of it. Let's say I shoot off a message saying, "Hi, my name is John Doe." The agent responds, "Hi John, what can I do for you?" And I'm like, "I live in Paris and I would like to book a flight to London." For all the conversations after this message, I would like my context to be stored as memory, so that my agent knows that my name is John, number one; that I live in Paris, number two; and that I'm temporarily flying to London. So there are three variables at play here.
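If it helps, here's what those three variables boil down to in plain Python. This isn't MAF code, just the bookkeeping idea that the framework will handle for us later:

```python
# The three facts the agent should retain across turns.
memory = {}

def remember(turn: str, facts: dict) -> None:
    """Merge facts extracted from one user turn into the running memory."""
    memory.update(facts)

remember("Hi, my name is John Doe.", {"name": "John Doe"})
remember("I live in Paris and I would like to book a flight to London.",
         {"home_city": "Paris", "destination": "London"})
print(memory)
```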
So how we can build a simple agent is by leveraging the OpenAI Responses client. What you see on screen is a bunch of import statements. We're importing os, which is the OS package for Python; you can leverage a lot of file system functions through it. asyncio helps us build async workflows and async methods. And dotenv loads all of our environment variables into the current execution, which we can leverage through the following statement that you see; the purpose is just to fetch our keys securely. All
right. So once you have your chat client in place, you need to hook it up to a chat agent. And if you remember from the previous section, ChatAgent is a subclass that specializes in conversational workflows. So you give it a set of instructions; you can make them verbose, but in my case I just said "you search for cheap flights" and that's enough for me. And I give it a name. Next, I run the agent and give my query to it. It's pretty simplistic. So, let's go into our code and see this in action. All right. So, I'm in my VS Code
and here's my code. It's pretty much the same code; all I did is change a few things here and there. I added a new set of instructions for my chat agent, which says "You're an assistant that explains frameworks clearly," and I've named it ExplainBot.
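To recap the shape of this hello-world agent in code: treat the class and method names here as paraphrased from my setup, and note the stub client stands in for the real OpenAI client so this sketch runs without an API key:

```python
import asyncio

class StubChatClient:
    """Stands in for the real OpenAI chat client (no network, no API key)."""
    async def respond(self, instructions: str, query: str) -> str:
        return f"[{instructions}] answering: {query}"

class ChatAgent:
    """Minimal shape of a chat agent: a client, a name, and instructions."""
    def __init__(self, chat_client, name: str, instructions: str):
        self.chat_client = chat_client
        self.name = name
        self.instructions = instructions

    async def run(self, query: str) -> str:
        return await self.chat_client.respond(self.instructions, query)

async def main() -> str:
    agent = ChatAgent(
        chat_client=StubChatClient(),
        name="ExplainBot",
        instructions="You're an assistant that explains frameworks clearly.",
    )
    return await agent.run("Explain briefly about Microsoft Agent Framework.")

print(asyncio.run(main()))
```

With the real framework you'd swap the stub for the actual client and keep the same agent-with-name-and-instructions pattern.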
The question that I pose is "Explain briefly about Microsoft Agent Framework." So let's go ahead and run it and see this in action. All I need to do is type in python and my file name, basic_chat_agent.py. We wait for a bit while it does its async magic, and boom, here we go. So the agent's reply
is: "Microsoft Agent Framework is a software development platform to create interactive and animated characters (agents)." So you see what's happening. This is a mess, and we need to fix it. According to our simple agent, Microsoft Agent Framework does character animation, speech recognition, and synthesis. So it's still a linear thing: it's not thinking through, it's being reactive. It's just hallucinating and giving me a response. That's a perfect example of what I wanted to show you. Currently I'd say this is just a glorified chatbot, and we need to fix it by adding tools, memory, and maybe a couple of more verbose instructions. So yeah, let's do that. But before we do, I just want you to appreciate the fact that there's a very thin line between a chatbot and an agent, and the only things that really differentiate them are tools and memory, and the way we use them, obviously.
But there's one more thing that I wanted to highlight, which I spoke about in the previous section: this part here. We need an API key and a model to build our chat client. KodeKloud has a product that you can use by the name of Kode Key. This here is a playground, and once you enter the playground you can sign up for 25 API credits which last for a month. Here you can select any model; currently they have Grok, OpenAI, Google Gemini Pro, and Anthropic's models on it. You need to hook up a base URL, and alongside it you get an API key that you can use. Obviously, you have to specify the model. So for example, if I were to do that here, I'd need to add in the base URL, and I'm going to copy this URL over here. And this is something that I will be doing: I already have my model stated within my environment file, and I already have my API key there as well. So the rest is pretty good. This is how you can structure a very basic agent, but there's a lot of work to be done.
Let's move to the next section to actually improve our agent. So now we know that our agent does not have complete knowledge about Microsoft Agent Framework, and we need to figure out how we can provide that knowledge to it. Maybe it can be a web search, or it can be a detailed document about the Microsoft agentic framework. In any case, we need to provide that information to our agent so that it can figure out the right kind of answer. But let's do a particular activity where we take that limitation to another level. So if you're building this agent with me, try these two prompts out, and maybe I can do that for you to give you better context. So let's jump into the code
again. All right. So I'm back in my code, and I changed the instructions and the name again. The instructions are "You're a helpful assistant that keeps track of my day," and it's just named Assistant. And here are the two prompts. The very first prompt is "My name is John," and the next one is "What's my name?" I type in python basic_agent.py. The very first reply is "Nice to meet you, John. How can I assist you today?" But the very next answer is "I don't know your name yet. How would you like to be addressed?"
So this is a problem, and it's the problem that I wanted to address. This is why memory actually matters: without memory, every agent is a dumb agent. Every interaction would be fresh. But on the flip side, we need our agents to remember our conversations, remember our preferences, even our personal information to a certain degree. And all of this as a whole is what we call context. All
right. So, let's talk about when memory is a bit too much. This question resonates with the very first question I asked at the start of this section: what kind of an agent do you want to build? Do you want to build an agent that is chat-focused, or a general-purpose agent? If you're building a general-purpose agent which doesn't require a lot of knowledge or a lot of interactivity with the outside world, then it's pretty linear. It's more like a linear chatbot which is stateless and answers specific questions, like the weather in a particular country or city, or "tell me about the culture of a particular country." We can pose such questions to a general-purpose agent, and that's where memory is a bit too much; you don't need it, because there's no state to be saved. Hence we really need to discriminate between when to use memory and when not to, what kind of agent we're building, and what kind of agent would actually use memory or tools. So yeah, choose wisely.
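To make the choice concrete, here's a tiny stdlib-only sketch of the difference; the reply functions are crude stand-ins for an LLM call, not real agent code:

```python
def stateless_reply(prompt: str) -> str:
    """Each call sees only the current prompt -- no history at all."""
    if "my name is" in prompt.lower():
        return "Nice to meet you!"
    return "I don't know your name yet."

def stateful_reply(prompt: str, history: list[str]) -> str:
    """The running history is injected alongside the prompt."""
    for past in history:
        if "my name is" in past.lower():
            name = past.rsplit(" ", 1)[-1]
            return f"Your name is {name}."
    return "I don't know your name yet."

history: list[str] = ["My name is John"]
print(stateless_reply("What's my name?"))          # forgets the earlier turn
print(stateful_reply("What's my name?", history))  # recalls it from history
```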
All right, it's now time to access the
first lab challenge. Use the link in the
description below to enroll in the labs.
And these labs come free of cost as a
part of this video. All you need to do
is create a free account and click
enroll. Once you're in, you'll see all
the labs listed on the left here. So,
you can follow along with me. Let's
start your very first lab. And your
mission, if you choose to accept it, is
to build your very first AI agent. What
you see on the left is more like an
instruction panel. You would see some
summaries about the content that you've
covered, some newer concepts, but mostly
instructions about your tasks. What you
see on the right is a whole VS Code setup where you get the whole experience, and this is where you'll actually code. So here we have a series of tasks listed right here, and I'll walk you through each one of them as we go. So yeah, let's get started with the very first one. Okay, most of the stuff you see here you've already covered, so it's a good refresher to walk through it, and once you're done, click OK.
Okay. So, environment setup. If you remember, I talked about sandboxing your project. In Python it's good practice to sandbox your project: create a virtual environment, operate within that virtual environment, install all your pip packages and requirements as you go, and execute within that environment. This section is all about that. And this particular line that you see says "run this first." We'll copy it, paste it right here, and hit enter. As you can see: "Success: environment setup completed." This command actually ran this file, which has a couple of checks, like checking the Python version and the virtual environment, and a series of methods were run. All of them were to ensure that you have everything in place to complete these four tasks. Once you have covered this particular section and you get this message, scroll down and hit Check.
Perfect. So we're good to go. Okay. So this again is a refresher. Whenever you're building an agent, the very first thing you would do is create a chat client, which has your model identifier, your API key, and your base URL, and then you hook that chat client into your agent. For an agent, all you need is a name and instructions; that's it. Then all you've got to do is hit agent.run and interact with your agent. So hit OK.
Okay. So let's get our hands dirty. This is your very first task. As you can see with every task, let me clear this up and push this here. Perfect. So within this code you have a couple of TODOs. Here we have TODO number one, and you have to replace some of the stuff that is listed in the TODOs: in this case, it was a name, and here we have instructions. So, let's copy this. Backspace. There we go. Those were the TODOs for lines 33 and 34, so we've covered those. We have another one on line 47. Let's copy this, and here we go. Perfect. So, we're good to go. All you've got to do is copy this, bring back our terminal, clear it up, paste, and run. Okay, task one is complete. Perfect. Let's check this and move ahead.
Okay, so up till now we have been using agent.run, but that is just one way of executing an agent. What if you need streaming? For example, if your agent sends a huge response with a lot of tokens, you've got to wait for it, and that increases the latency of your agent. To lower that, it's good practice to use run_stream instead. But run_stream actually returns chunks, so you have to gather those chunks and display them accordingly, and that is what this task is all about. So, I'm going to hit OK and move on to my next task, which is about streaming. Yet again, I have a couple of TODOs, like line 42. So, let's go to line 42 here. As you can see, instead of using agent.run, I'm going to use agent.run_stream. And on line 48, I'm going to use chunk.text right here.
So, as you can see, this initializes the stream itself, but every stream returns bits of chunks, and we're checking if chunk.text exists; you've got to print that here. And I think we're done; this was line 50. Perfect. I'm going to copy this, come over here, clear this whole thing, and paste. Yeah. So, this is streaming in action. Beautiful.
As you can see, this is a non-streaming response and it's taking a while. The whole thing here took a bit of time, right? So that's why, whenever you're dealing with latency, it's important to consider whether you need a streaming response or a non-streaming response. By the way, task two is complete. Let's move on: I'm going to check this and hit Next. All right. The next section is about customizing agent behavior through instructions. Within instructions, you can play around with the tone of an agent: it can be formal, casual, friendly, or technical. The same goes for style, expertise, and behavior. Here we have a couple of examples; once you go through them, you'll get the gist of the whole thing. So once you've read them, hit OK
and let's move next. So in our third task, we yet again have a few TODOs, on lines 44 and 45. Let's scroll all the way down. Here we are building a technical agent, and we have a set of instructions. The instructions are: you're a technical support specialist for TechCorp; use precise technical language, ask diagnostic questions, provide step-by-step solutions, and format responses with numbered steps. So that's one way of playing with the tone. Now let's move on to lines 59 to 60, where we have a medical advisor. Let's copy the whole thing. Perfect.
Okay. So: you're a medical information assistant for TechCorp Health; provide accurate health information, always include disclaimers to consult healthcare providers, and use medical terminology with lay explanations. So, that's another way of doing it, and you'll find a similar one below. In TODO three: compare the three agents' responses above; how did the instructions change the tone? So yeah, I think we can go ahead and run this as well. Let's bring back our terminal, clear the whole thing, paste, and run. Okay, so here we have a response, and you can judge the tone of each agent accordingly. With that, your task three is complete as well. So I'm going to check this, and we're going to move ahead. So in
this section we'll be talking about permanent and temporary instructions. Permanent instructions are something we have already done: they're what you provide along with the name, pretty straightforward. But additional instructions are more like per-request context: steering your agent's behavior on the go while it's executing. Some of the common use cases are audience adaptation and length control, and we also have an example for you. So let's go ahead, click OK, and get started with our last task. Task number four: let's get rid of this as well, and this as well. So, line 67: answer in exactly one sentence. Okay, let's go and find the line 67 TODO; we're going to fill this out here.
So, as you can see, we built an instruction, and this was our agent. We just ran our agent with a particular query, and this was our additional instruction. In experiment two, we run the agent again and change our additional instructions: they were different over here, and they're different over here. Okay, now let's move on to line 77. That's right here, and here yet again we have a different additional instruction. Perfect. Now we move on to lines 93 to 96: again, reflection questions. Compare experiment one (audience adaptation) versus experiment two (length control), and how do additional instructions differ from permanent instructions? With this, let's bring back our terminal, clear the response, and hit run. As the response is being generated, it would be great if you go through it, reflect for a bit, and try to find the difference. You'll notice the actual advantage of using additional instructions for specific queries, how they both operate together, and how you can find that sweet spot where you use instructions and additional instructions together.
Perfect. So that completes section number four. Hit Check, and we move next. And with that, congratulations: you have mastered the MAF fundamentals.
All right people. So in this section, we'll enhance our agent in terms of memory and context management. We'll start with a simple explanation of memory and context, specifically in terms of MAF agents. We'll run you through two scenarios: the very first one is session-level memory, also known as short-term memory, versus long-term memory. We'll see a couple of scenarios where both of them fit in. We have been making a travel agent, a very lean, very simple one; in this section we'll be adding a layer of memory to it, and we'll see how we can do that in terms of both short-term and long-term memory. We'll talk about threads and how they really help you preserve conversational context. And lastly, we'll see how tools can really give superpowers to your existing agents. So imagine you're talking to a friend. They know you, you know them, you share some common jokes, and both of you know the context and the backgrounds of those jokes. You remember some of the previous chats you've had. How can we introduce the same level of depth with an agent? That's where memory and context come in. We need something that can store the relevant past interactions. Most of these interactions are just textual, so we're talking about saving these texts somewhere and injecting them into our upcoming or future prompts so that the agent can answer us in a better way. Let's talk
about session-level memory. MAF offers something like a thread, which introduces state within an agent. The diagram here shows that one agent can have many threads, so it can preserve all of those different conversations separately, where each of their contexts is preserved, each of their texts is preserved, and the historical conversations are preserved. That's the target here, and eventually, when you store the previous conversation, it will obviously make sense when you're prompting a specific thread. For example, one thread can be a trip to Paris, and you drop all kinds of details into that thread. That's what we're trying to achieve. The simplest way to do that is the get_new_thread method that the agent offers. All you have to do differently is, whenever you're asking a question, you reference this thread at the very end, and the agent takes care of the rest. So let's jump into the code and see how we can achieve it with MAF. All right people. So I'm back
in my VS Code, and I made a few changes to it, so I'll quickly run them by you. This is my travel bot. The very first thing we did is create a chat client, which is just something that references our OpenAI model; I'm using GPT-4o mini. And we hook it up into a chat agent, which happens to be our travel assistant; I need to be a bit verbose here, so we call it TravelBot. Here we create a new thread, and every single time I ask a question, I pass this thread into the run method, and here I'm just fetching stuff out. So here I say I live in Paris, and I expect this to be saved within this thread, so that the next time I ask which city I live in, the thread has all the context and the agent replies in terms of that context. So let's run this real quick and see. I'll just type in python basic_chat_agent_memory.py.
So let's see. Okay, the first reply is: "That's wonderful! Paris is such a beautiful city with amazing culture, history and cuisine." And the second reply here: "You mentioned that you live in Paris." See the change? This is session-level memory. Now we don't have to start all over again; we have some element that remembers what we have been talking about. This is where threads are really handy and really help you in preserving conversations. You can even store these threads somewhere, like Redis, or even a database. Or if you just don't want to be that fancy, you can just save them in a file, retrieve them later on, and add the context within the chat itself.
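Here's the thread idea in a self-contained sketch. The Agent and Thread classes below are stand-ins for MAF's agent and get_new_thread(), with a hard-coded lookup in place of the LLM, so the sketch runs offline:

```python
import asyncio

class Thread:
    """Stand-in for what get_new_thread() returns: a running history."""
    def __init__(self):
        self.messages: list[str] = []

class Agent:
    async def run(self, prompt: str, thread: Thread) -> str:
        thread.messages.append(prompt)   # the history grows with each turn
        # Stand-in for the LLM: look back through this thread only.
        for past in thread.messages:
            if "live in" in past and "which city" in prompt.lower():
                city = past.split("live in")[-1].strip(" .")
                return f"You mentioned that you live in {city}."
        return "Noted!"

async def main() -> str:
    agent = Agent()
    thread = Thread()                    # one thread per conversation
    await agent.run("I live in Paris.", thread)
    return await agent.run("Which city do I live in?", thread)

print(asyncio.run(main()))
```

A second Thread instance would know nothing about Paris, which is exactly the one-agent-many-threads picture from the diagram.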
All right, let's talk about long-term memory. In MAF terms, it's called a context provider. It refers to information that your agent stores or persists across multiple threads. An agent can now have a thousand threads, and your information is scattered across these thousand threads, but we don't have a central body which keeps the most important aspects of those conversations somewhere. We need that central body, and in this case that central body is the context provider. So we might need to save something like your preferences: stable facts about you and your environment, just like the last example about traveling to Paris, knowing that you live in Paris and you're traveling to London. It's more like having summaries of past interactions stored somewhere so that your agent can refer to them and answer you in a better manner. All right. So within MAF
you implement a custom context provider. The subclasses are listed here: AIContextProvider and ContextProvider itself. They are responsible for long-term memory. Every context provider offers you two methods that you can override: one is before-invocation and one is after-invocation. So it's like: when the agent receives a prompt, the kind of stuff it does before processing the prompt, and the kind of stuff you can do after the prompt is processed. Before invocation, you inject context derived from persistent memory or the threads, as we discussed, like the user's preferred language or the user's current location. And after invocation, you inspect the messages from the user or the agent to fetch out meaningful stuff, and when you fetch that stuff out, you can store it somewhere.
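Here's the shape of such a provider in a runnable sketch. The hook names below (before_invocation / after_invocation) paraphrase the MAF methods, and the whole class is a stand-in, not the real base class:

```python
import asyncio

class FavoriteColorMemory:
    """Stand-in for a context provider: two overridable hooks, one run
    before the model is invoked and one run after."""
    def __init__(self):
        self.color: str | None = None   # the persisted long-term fact

    async def before_invocation(self, prompt: str) -> str:
        # Inject what we already know into the request context.
        if self.color:
            return f"(known: favorite color is {self.color}) {prompt}"
        return prompt

    async def after_invocation(self, prompt: str) -> None:
        # Inspect the user's message and persist anything meaningful.
        if "favorite color is" in prompt.lower():
            self.color = prompt.lower().split("favorite color is")[-1].strip(" .")

async def main() -> str:
    memory = FavoriteColorMemory()
    for prompt in ["Hello, how are you?",
                   "My favorite color is blue.",
                   "Do you remember my favorite color?"]:
        enriched = await memory.before_invocation(prompt)
        await memory.after_invocation(prompt)
    return enriched   # the last prompt, enriched with the stored fact

print(asyncio.run(main()))
```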
So that's how you can do it. All you need to do is define your memory provider. In this case, I'm keeping track of the user's favorite color; we'll see this example in detail, but I just want to show you how simply we can do that. While defining the agent, we need to state our context provider when we initialize our chat agent. So let's get into the code. So
I'm back in my code, and I changed a few things, so let me walk you through it. This is pretty much the same as it was in the previous section: we create a chat client, and we hook up our API key and model ID. The new thing I did here is I created a memory provider; I'll walk you through that class in a bit. We now use the async syntax to actually create the chat agent. Within the chat agent, we hook up our chat client, and we've changed the instructions to "You're a friendly assistant; ask the user about their favorite color if you don't know it yet." Pretty simple and straightforward. We hook up our memory provider, which is more like a black box right now, but we'll talk about it in a bit. When I say async with the chat agent, it's like: I want to run this code asynchronously, and I need to wrap this code alongside building the chat agent. We create a thread here; it's something we just discussed, and it really helps us get that session-level memory. Next, I have a few questions for my agent. We hook up our thread here, and this is just a print statement so that I can present it to you in a nice way.
So uh the very first prompt is hello how
are you? We get our response over here.
Um the second prompt is I intentionally
tell the agent that you know my favorite
color is blue. So I expect my agent to
store uh you know the favorite color
here you know into my context provider.
So uh I print out my response here and
just to cross check my third prompt to
the agent is do you remember my favorite
color? So I print it out here again. And
lastly, uh this is where my favorite
color is saved within the memory
provider. There is an object memory and
I save my favorite color over there and
I'm just printing it just for the
reference. Uh okay, now let's get into
our favorite color class. To store the
favorite color you obviously need a
model, so I created a UserMemory model
that extends Pydantic's BaseModel; its
field defaults to None (Python's null).
Then we create our custom context
provider: this is the favorite color
memory class you see here, and you can
see the chat client being passed in; we
receive it and hook it up inside our
object. The next thing we do in the
constructor is create a memory, or hook
up an existing one if it's provided. But
since we're not providing one, this
piece of code creates a new UserMemory
whose favorite color has a value of
None. As I explained, whenever you
extend ContextProvider you get these two
methods to override. Let's talk about
invoking first. Before the agent
responds, we provide the context about
what we know: we check whether the
stored value is None or not. If it holds
a value, we surface that value; if not,
we add an instruction (an output
message) saying we do not have the
user's favorite color yet. Once that is
done, we can also override the invoked
method. This is what happens afterwards:
we check whether the user said something
that might change the existing value of
the favorite color. It's about
inspecting the user's messages. Here's
the demonstration: we pass in the
request messages (the user's messages
for this invocation), check the role to
see if it's the user, and if the user
prompts something like "my favorite
color is ...", we want to detect what
that value is. This code breaks the
message down and fetches the value the
user stated. Once we've fetched it, we
capitalize it and update the memory.
That's pretty much it; these two steps
are the important ones.
Let's see it in action.
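Before running the demo, here is the pattern just described as a framework-free sketch. In MAF the real class would extend ContextProvider and override invoking/invoked with the framework's own signatures; here those hooks are plain methods so the logic stands on its own, and the class and method names simply mirror what the video describes, not an exact API.

```python
# Sketch of the favorite-color context provider, framework-free. In MAF
# the real class extends ContextProvider; here the two hooks are plain
# methods so the logic is testable on its own.
import re

class FavoriteColorMemory:
    def __init__(self):
        self.favorite_color = None  # long-term memory, starts empty

    def invoking(self):
        """Runs before the agent handles a prompt: inject what we know."""
        if self.favorite_color:
            return f"The user's favorite color is {self.favorite_color}."
        return "You do not know the user's favorite color yet; ask for it."

    def invoked(self, user_message):
        """Runs after: inspect the user's message and update the memory."""
        match = re.search(r"my favorite color is (\w+)", user_message,
                          re.IGNORECASE)
        if match:
            self.favorite_color = match.group(1).capitalize()

memory = FavoriteColorMemory()
memory.invoked("Hello, how are you?")          # nothing detected
memory.invoked("My favorite color is blue.")   # detected and stored
print(memory.invoking())
# The user's favorite color is Blue.
```

The split matters: invoking only reads memory, invoked only writes it, which is exactly the before/after sequence you'll see in the run below.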
So I'll just type python... ah, I forgot
the name of the file. First cd into the
context directory, and then run the
script. Yeah, that's right.
So: "Hello, how are you?" This shows the
favorite color memory at work. Okay, let
it generate first and then I can walk
you through the output. Perfect. The
very first thing: whenever the user
prompts, both the invoking and invoked
methods run, and this last line comes
from the invoked method, because we're
inspecting the user's message. Whenever
the user drops in a new message, both
methods run, but in a fixed sequence:
invoking runs first, invoked runs last.
The assistant replies, "Hello, I'm doing
well. Thank you for asking. How about
you? By the way, what's your favorite
color?" The user then states, "My
favorite color is blue." Again invoking
was called, then invoked, and this time
invoked found a value, because the user
used that pattern of words. That's why
you see "detected user sharing the
favorite color" and "updated favorite
color to blue", and the assistant
responds that blue is a lovely color.
Then the user asks, "Do you remember my
favorite color?" The methods run again,
and this time invoking has a value. If
you remember, there was a check: if the
memory's favorite color has a value, we
inject into the context that the user's
favorite color is the stored value.
Pretty simple, and now the agent
retrieves the favorite color: "Yes, your
favorite color is blue. Such a nice
choice." So that's how you can use a
combination of invoking and invoked to
store the context
permanently. Cool. Let's circle things
back to the travel agent, but this time
with memory. I want you to appreciate
that adding memory really changes things
dramatically and paves the way for tool
integration. Say you prompt the agent
with "I'm going to Paris next week." The
agent thinks, extracts the right
variables, and stores them in its
context or memory. The relevant
variables here are obviously the
destination (Paris) and the date (next
week). Now if you prompt the agent with
"Hey, find me a hotel", the agent can
use some kind of tool against the values
stored in memory; maybe a web search
tool. That's how tool integrations come
into play: the agent used web search to
find flights, or hotels in this case,
that align with the user's preferences.
We needed user preferences that were not
part of its training data or its
internal context, so they had to come
from outside; hence we stored them in
memory. The web search tool is what it
uses to act and communicate with the
outside world. But here's a problem: it
would bring back every hotel that exists
in Paris for next week. What if we just
need one hotel? What if we need to talk
about booking and finding the right
suite in one specific hotel? That's
where tool choice matters, because we
might need a tool that calls the API of
a specific hotel. Without tools the
agent gives generic answers; with tools
it acts, by calling a booking API, a
calendar API, any API you name. MAF
offers the combination of tools and
memory in a way where you get automation
and personalization at the same time.
That is the cool factor. All right
people. So welcome to your second lab,
and your mission this time is to master
memory and context management. Obviously
you want your agents to remember
conversations, but agents can do so much
more: you get to manage the contextual
part smartly and provide a personalized
experience through your agents. This
time we have four progressive tasks for
you. You know the drill; let's hit OK.
All right, so this is yet again a
refresher. We'll start with what
conversation memory is and how your
agents can actually remember previous
messages; that's done through threads,
which we'll learn about and implement
ourselves in this lab. Then, what's the
difference between stateless and
stateful agents? The word "state" is at
play here, and to give an agent state
you obviously need some element of
memory. And since we're using LLMs
inside agents, context windows and token
limits make a huge difference whenever
you're building or using agents; you
can't ignore either of them. Then we
have different memory-management
strategies, and we'll apply all three of
them in our labs. Let's go ahead and
click OK. All right, if you've gone
through lab one you're already well
accustomed to the environment setup, so
I'm just going to copy this and paste.
And yeah, our environment setup is
successful. Let's click check and OK.
So this is task one, and it's about how
we can implement multi-turn
conversations. This is basically done
through threads, and here we have a
sample of the thread pattern: how you
create a thread and how you attach it to
an existing execution of an agent.
Here's another example, and yet another.
There are a few summary points for you
as well; once you're done going through
them, click OK. All right, let's begin
our very first task. Let me walk you
through it. We create our chat client
and our chat agent. Then, to-do number
one: keep the conversation state. What
we're doing here is creating an agent
thread. This is session-long memory, and
this is our object; whenever we now
converse with the agent, we hook this
object into that method. The very first
turn is the customer introduction. We
have one message: "Hi, my name is Sarah
and I'm having trouble with my account."
As you can see, we've hooked the thread
up here. For to-do number two, we try a
new message and hook up the thread
again. Then to-do number three: execute
the agent with the current user message
and its associated conversation state.
We have yet another message: "What was
the issue again?" For this we copy
message three, it goes here, and we need
the whole thing: the thread keyword
argument set to our thread object, added
after a comma. Perfect. I think we're
done. Next we copy the run command,
bring up our terminal, make some space,
and hit run. Perfect, task one is
complete and we can move on. Let's clear
the terminal. Okay, click check and
next.
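To recap task one, here is the thread pattern as a framework-free sketch. The class names are invented for illustration (the real framework supplies its own thread and agent types), but the behavior, a session-scoped object accumulating messages so later turns can reference earlier ones, is the same idea.

```python
# Sketch of the "thread" idea from task one, in plain Python: a thread
# is just an object that accumulates the conversation so every new
# prompt is answered with the full history in view.

class Thread:
    """Session-scoped conversation state."""
    def __init__(self):
        self.messages = []  # list of (role, text) tuples

class MockAgent:
    """Stands in for a ChatAgent: shows it can 'remember' earlier turns."""
    def run(self, prompt, thread):
        thread.messages.append(("user", prompt))
        # A real agent would send thread.messages to the LLM. Here we just
        # scan the accumulated history to demonstrate statefulness.
        names = [m for role, m in thread.messages if "my name is" in m.lower()]
        low = prompt.lower()
        if "name" in low and "my name is" not in low and names:
            reply = "You said: " + names[0]
        else:
            reply = "Got it."
        thread.messages.append(("assistant", reply))
        return reply

thread = Thread()
agent = MockAgent()
agent.run("Hi, my name is Sarah and I'm having trouble with my account.",
          thread)
print(agent.run("Do you remember my name?", thread))
# The same question against a *fresh* thread would find no name in history.
```

Reusing the same thread object across calls is what makes the agent stateful; passing a fresh thread makes it stateless again.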
Okay, task number two. This one is going
to be a bit tricky and a bit longer,
because we're dealing with conversations
that got too long. When we talk about
LLMs, we talk about their context
window, which is a limiting factor: you
cannot dump all the data in the world
into it and expect the model to act
consistently. You have to manage that
data; you have to manage that context.
Here we'll talk about a few strategies.
The first is the sliding window, which
is message-based: you keep a certain
number of recent messages and let go of
the older ones. Then there's the
token-based approach, where you keep the
messages that fit within a particular
token budget. Here are some working
remedies you can apply readily: you
access your messages, and then trim them
like this, which simply means "keep the
last 10" (the most recent ones). This is
the sliding window at play. Let's get
into the code; hit OK. Okay, so we
have a couple of to-dos. We'll start
with line 37; let me explain as we go.
Pretty simple stuff: we have the chat
client here, the agent here. On line 37,
as we discussed in the previous section,
we create a new agent thread for our
session-level memory. Next we move to
line 82. Okay, so let's start with to-do
two: implement sliding-window
truncation. As we go down, we create a
new thread and an array of objects, or
probably just an array of strings, I
believe. I'm filling it with messages,
so yes, it's an array of strings. Then
the to-do: extract only the last five
messages, which is the slicing trick
from the first to-do. Then we run the
agent, you'll see a couple of outputs,
and we ask the same question again; here
are all our print statements. Perfect.
Next, create another conversation with
varying message limits: we have a short
message, then slightly longer ones, and
you can read through each of them. We
update our message count and total
exchanges here. Moving slightly further
down, we have an implementation of token
budgeting, which is part of to-do three,
where the token budget (token limit) is
1,000. Line 141 is pretty simple, I
believe, because all we have to do is
impose this check right here. With that
done, we go on to line 158, where
messages_in_budget (the count of
messages kept within the budget) is at
play. We track how many tokens each
message has, and if it fits within our
limit we keep the message and bump the
count; once we're done counting, we
place it here. Then we can run the whole
thing, because this shows both the
sliding-window implementation and the
token-based implementation working.
Since our tasks are complete, we can
bring back our terminal and hit OK.
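The two truncation strategies from this task boil down to a few lines of plain Python. This is a sketch of the idea, not the lab's exact code; in particular, the four-characters-per-token estimate is a common rough heuristic, not an exact tokenizer.

```python
# Sketch of the two truncation strategies from task two.

def sliding_window(messages, keep_last=5):
    """Message-based: keep only the most recent `keep_last` messages."""
    return messages[-keep_last:]

def estimate_tokens(text):
    """Rough heuristic: ~4 characters per token (an approximation)."""
    return max(1, len(text) // 4)

def token_budget(messages, budget=1000):
    """Token-based: walk backwards from the newest message, keeping
    messages while their estimated token total fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept)), used

history = [f"message {i}: " + "x" * 40 for i in range(15)]
print(len(sliding_window(history)))          # 5
kept, used = token_budget(history, budget=100)
print(len(kept), used)
```

Note the token-based walk goes newest-to-oldest, so the most recent messages always survive, just like the sliding window, but the cutoff adapts to how long each message actually is.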
Okay. So here are the details of the two
experiments we conducted. The first one
simulates a long conversation; with the
conversation complete, we had 15
exchanges. This is where we keep the
last five exchanges, hook those up into
our agent, and the agent responds
accordingly. Test one runs with the full
conversation, and then we run the
simulated truncated history. That's one
strategy. Then we have the token-agent
simulation: with the 1,000-token budget,
as we said, it can keep the last eight
messages, using approximately 88 tokens.
So that's everything; task two is
complete. Let's click check and move to
next. Okay. And let's clear
everything here. So the next couple of
tasks are a bit verbose and a bit longer
as well, so we're going to take our time
walking through them and talk as we go.
In task two you learned about truncation
strategies and how to manage your
context smartly, but both approaches
share the same limitation: truncation
loses information. You're keeping the
last five exchanges, but what if the
most important exchanges were at the
start of the thread? Maybe all your
preferences, your name, and your details
are hooked up right there. That's a
problem, and the solution is
summarization. We talked about context
engineering: instead of just choosing a
particular slice of your conversation,
it's better to summarize the older
conversations into two or three
sentences, or to identify the important
parts of those conversations and save
them in a lean, effective form. This
section is all about that. You can go
through it; there are a couple of steps
here for you. We'll be implementing a
summarizer agent specifically for this
task, so let's go ahead and do that.
Okay, we have a couple of to-dos, on
lines 32 and 33. Let's go to line 32.
Yet again we're building a summarizer
agent; this is something we did in lab
one as well, just hooking up the name
and instructions. Pretty simple stuff.
Perfect.
Then we have line 109: use the
summarizer agent to run the summary
prompt. Here we have the conversation
topics, and this is our conversation
log; I think we're just printing them
all depending on the conversation, with
some truncation at play. Once we're done
with that, let's move on to line 109.
There we go: we use our summarizer
object, which is our agent, and we pass
in our summary prompt. Our next to-do is
on line 135, and here we are. So this
goes here: this is our actual summary
message, the summary we created just
above (as you can see, it's the response
of our summarizer agent). We hook this
up and use this summary message as the
opening query on the new thread we
created right here. Perfect. We use that
thread below to keep our context, and
this particular thread now carries our
lean context. I'm going to copy the run
command, bring back my terminal, and
clear the whole thing.
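The summarization flow can be sketched like this. The "summarizer" here is a trivial rule-based stub standing in for the second agent, whose real job would be an LLM call that compresses the conversation; every name in this sketch is illustrative.

```python
# Sketch of the task-three summarization flow, framework-free.

def stub_summarize(messages, max_points=2):
    """Stand-in for the summarizer agent: keep the first clause of the
    oldest messages as a crude 'summary'. A real LLM call goes here."""
    points = [m.split(".")[0] for m in messages]
    return ("Summary of earlier conversation: "
            + "; ".join(points[:max_points]) + ".")

def compress_thread(messages, keep_recent=3):
    """Summarize everything except the most recent messages, then seed a
    new, lean thread with the summary followed by those recent messages."""
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if not old:
        return list(messages)
    return [stub_summarize(old)] + recent

history = [
    "My name is Sarah. I manage billing.",
    "I use plan B. It renews monthly.",
    "What plans exist? Tell me.",
    "Can I upgrade? I'd like more seats.",
    "What was my name again? I forgot what I told you.",
]
lean = compress_thread(history)
print(len(lean))   # 4: one summary message + the 3 most recent
```

Unlike pure truncation, the name "Sarah" from the very first message survives inside the summary, which is exactly the limitation this task is fixing.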
Okay. So our task three is complete.
Let's go ahead and check and move to the
next. Okay. In this task we'll be
talking about personalized context. In
the previous tasks we covered how to
maintain conversation history, how to
enforce message limits, and how to
compress context; in this section we'll
talk about personalizing the whole
thing. It's about building user
profiles. You can go through these
sections; they're pretty
self-explanatory.
And yeah, let's go ahead and hit OK.
Okay, let's clear this. We've got a
couple of to-dos; let's walk through the
lab. Our chat client, our chat agent:
"You're a helpful agent that provides
personalized recommendations." Before
the agent can provide recommendations,
it should know your preferences. That's
what personalized context means: we
fetch certain parts of the data and fold
them into a more personalized context.
Here we have a couple of messages. As
you can see: "My name is Jamie. I'm an
intermediate Python developer. I'm
really interested in automation and AI.
I want to learn how to build intelligent
agents." Everything about these messages
reflects the personalized preferences of
a particular user. We walk through these
messages, print them, and see
"conversation history built". Perfect.
Now line 96: our very first to-do is to
extract user preferences from the
conversation. We had these
conversations, and right here we have a
prompt that takes them all and asks the
model to extract and return only a JSON
object with these exact fields. On line
96 we hook up our agent, which is pretty
simple, along with the extraction
prompt; by the way, the extraction
prompt is this whole thing, the prompt
that extracts everything out of the
conversations above. Once that's done we
move to lines 148-149. This is about
creating a generic thread and running
the test query as a baseline without
personalization. Nothing fancy, just
simple stuff: agent.get_new_thread().
Let's copy the whole thing and pass in
the test query. This is our test query,
it goes here, and our generic thread
goes here. Then we move to line 177:
inject personalized messages into the
thread. Right here we now have the
personalized messages, so this prompt
goes here and our personalized thread
goes here. I think we're done with
everything. Let's copy, bring up our
terminal, paste, and wait.
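The idea behind this task can be sketched without the framework: extract a preference profile from past messages, then seed a fresh thread with that profile before asking the real query. The extraction step would normally be an LLM call returning JSON; a trivial regex stub stands in here, so all names and patterns are illustrative.

```python
# Sketch of task four: build a user profile from conversation history
# and inject it into a new thread as personalized context.
import re

def extract_preferences(messages):
    """Stub for the extraction prompt: pull a few fields from the text."""
    prefs = {}
    for m in messages:
        match = re.search(r"my name is (\w+)", m, re.IGNORECASE)
        if match:
            prefs["name"] = match.group(1)
        match = re.search(r"i'm an? (\w+) (\w+) developer", m, re.IGNORECASE)
        if match:
            prefs["skill_level"] = match.group(1).lower()
            prefs["language"] = match.group(2)
    return prefs

def personalized_thread(prefs):
    """Seed a new thread with the profile so every later prompt sees it."""
    profile = ", ".join(f"{k}: {v}" for k, v in prefs.items())
    return [f"Known user profile -> {profile}"]

history = [
    "My name is Jamie.",
    "I'm an intermediate Python developer interested in automation and AI.",
]
prefs = extract_preferences(history)
thread = personalized_thread(prefs)
print(prefs["name"], prefs["skill_level"])   # Jamie intermediate
```

Running the same test query against a thread seeded this way versus an empty "generic" thread is exactly the baseline comparison the task asks you to reflect on.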
Okay, so our task four is complete.
Maybe you can go through the full output
and reflect on a few questions: how do
the two runs differ, and how do the
responses differ from each other? With
this done, I'm going to check it and
move to the next section. And well,
congratulations: you have now mastered
memory and context. This next section is
about function tools (AI functions).
We'll be talking about what tools are,
how they can be expressed as functions,
and how you can call third-party APIs
within those functions. We'll build a
tool step by step, and we'll also talk
about when tools are too much, when not
to use them. By the end of this section
we'll briefly touch on multi-agent
workflows and task delegation, for when
you have multiple agents, an agentic
system.
So, I'm pretty sure you've used a voice
assistant at some point, but what would
really be cool is a voice agent that can
extract the details out of your input
query and, say, send an email on your
behalf, book a cab, or even schedule a
meeting, just like Jarvis in Iron Man. A
tool for an agent is a function, API, or
external service that the agent can call
to perform an action or fetch data, not
just to generate text. Generating text
alone is a limitation of LLMs; we know
they're limited to their training
knowledge. So just like memory, a tool
is another add-on that enhances the
capabilities of your agents beyond a
simple chatbot.
So, imagine you're building a trip
planner, or just expanding the horizons
of the travel bot we've been building
since section two. So if I prompt
the agent "Book me a flight to Tokyo
next Monday", I expect my agent to
extract the meaningful details out of my
message: set the destination to Tokyo
and the date to next Monday. With the
introduction of tools, my agent can now
call specific functions and pass these
extracted variables as parameters to
them. That is what's called function
calling, or tools for an agent. The
agent takes its time, the
make-flight-booking function returns a
value, and once it does: our flight is
booked, and here's the confirmation with
a dummy number, 12345. This is the kind
of experience we can build by
introducing tools to an agent; it now
has the superpower to go and interact
with the outside world and make
decisions on your behalf. There are two
steps to adding your AI function.
ai_function is available as a decorator
in MAF, and it takes just a name and a
description: you name your tool and add
a verbose description. Underneath the
decorator you need a function, just a
simple function with as many parameters
as you need. In this case there's a
dummy method that takes a location and
tells you the weather, but it's static
and hardcoded. We'll take this example,
expand it, and introduce this
get_weather tool to our trip planner, so
that if you want to visit a particular
city or country, you can ask the agent
what the weather is like over there.
Step two is taking that AI function and
hooking it up to the chat agent: the
agent has a tools attribute, and you
provide your tools as an array (a list)
of tool functions. So let's get into the
code and build this stuff. Okay. So I'm
in VS Code
with the very same travel-agent example;
again, I've tweaked a few things. The
first thing, as you can see, is the
ai_function decorator and the
get_weather method. I've added a few
details to it; I'll come back to it in a
while. The rest is pretty much the same:
I create my chat client and hook it up
to my chat agent. Here's my list of
tools, and this is how I prompt the
agent: "What is the weather in Tokyo
today?" Then I extract the reply from
the agent. All right, so let's see
what's in here. This is a functional
method: it takes in a location, and
these annotated fields just describe
that the location is the place I need
the weather for. I'm using a geocoding
API for my weather search, and there are
steps to how I do it. I prepare the
parameters, where I just add the
location. First I call an API to fetch
the latitude and longitude for this
location; this is how I do it, and I
filter the response here. If I get a
valid response, I assign it to latitude
and longitude variables. Next I use the
latitude and longitude to fetch the
actual weather for my location, and for
that I call another API; I prepare the
parameter object here, then make the
request and pull some data out. The rest
of what I'm doing here is about
composing the right kind of response: in
this case the city or country is the
location, the latitude and longitude are
what I fetched above, and from this
second call, the one I made right here,
I get the temperature, wind speed, and
time. I return the response, and when
the LLM is provided with it, it goes
ahead and formats the whole thing for
us. So let's run it: python
basic_chat_agent_tools.py.
There we go. The agent replies, "Today
in Tokyo, the weather is as follows:
temperature 13.8°C and wind speed 4.1 km
per hour." Okay, let's go ahead and
change this; let's ask about Paris
and rerun the whole thing.
All right. Paris is as follows: 10.9°C,
4.6 km per hour. So now you can see we
have something functional that goes
beyond the model's internal training
knowledge and interacts with the outside
world; a pretty cool feature. All right,
so now we have a very cool agent that
stays up to date with the outside world.
It fetches the latest wind conditions or
the temperature for a particular city,
or even a country. But now imagine that
instead of this one agent you had
multiple agents to handle the logistics
around your trip. Agent A could be your
travel planner agent, which drafts a
high-level tentative plan given your
dates and preferences. You could build
another dedicated agent that just goes
and books flights for you, and another
that finds the right kind of hotels.
Agent A drafts a plan and passes it to B
and C to do their respective jobs. This
is called multi-agent task delegation,
or orchestration, in some sense. Let's
finish up this section
by talking about when not to use tools,
or when tools are overkill. Not all
agents need full orchestration, right?
Sometimes you just want to build one
agent that's a simple Q&A bot. It might
just need some memory and some knowledge
about a particular topic, your
organization, or your company, and you
can build small utility functions
(convert units and the like) inside
those agents. Consider privacy first,
too: if no external call is needed,
don't make one. That happens a lot when
your agent is really just a RAG agent.
In those scenarios, skipping tools and
multi-agent setups keeps things simpler
and faster to build. So if you're
building an agent, the very important
question is: what kind of agent do you
need? Make sure you're not stacking your
agent with unnecessary complexity in
terms of memory and tools.
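Before moving to the lab, the get_weather tool from the demo can be sketched like this. The two HTTP calls (geocode the location, then fetch current conditions) are replaced with canned responses so the flow is visible without network access; swap in real requests to the geocoding and weather APIs to make it live, and note the function names here are illustrative, not the framework's.

```python
# Sketch of the get_weather tool flow from the demo. The two API calls
# are stubbed with canned data so this runs offline.

def geocode(location):
    """Stub for the geocoding API call: location name -> (lat, lon)."""
    known = {"Tokyo": (35.68, 139.69), "Paris": (48.85, 2.35)}
    return known.get(location)

def fetch_current_weather(lat, lon):
    """Stub for the weather API call: coordinates -> current conditions."""
    return {"temperature": 13.8, "windspeed": 4.1, "time": "09:00"}

def get_weather(location: str) -> str:
    """The tool the agent calls. Returns a plain string the LLM formats."""
    coords = geocode(location)
    if coords is None:
        return f"Sorry, I couldn't find coordinates for {location}."
    lat, lon = coords
    w = fetch_current_weather(lat, lon)
    return (f"Weather in {location} (lat {lat}, lon {lon}): "
            f"{w['temperature']} deg C, wind {w['windspeed']} km/h "
            f"at {w['time']}")

print(get_weather("Tokyo"))
```

The tool returns a plain string rather than formatted prose: composing the final, friendly reply is left to the LLM, which is exactly what happened in the Tokyo and Paris runs above.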
All right people, welcome to your third
lab. Your mission this time, if you
choose to accept it, is to master the
function tool itself. Just like the
previous labs, we have four progressive
tasks for you, which run you through the
definition of function calling, how to
implement the simplest tool call through
an agent, tool handlers, and error
handling and validation while you're
working with agent function tools. So
let's dive in. Okay, we have a bit of a
refresher here. We start with what
function tools are; if you've heard of
"function calling" or "tool use", they
mean the same thing. Later we talk about
what agents are and why agents are
separate from conventional chatbots,
something we've already covered in
previous videos. Then we talk about how
function tools work and what they
require, in terms of definition and the
kind of schema they need; we'll jump
into that in a bit. Finally, we'll talk
about some real-world use cases, and
we've talked about a bunch of them
before as well. For example, in the
e-commerce world you have to search
products, check inventory, and process
orders. The LLM at the core of an agent
doesn't know anything about your
ecosystem of work, so you have to bring
your ecosystem to the agent by exposing
certain APIs it can leverage, and the
only way an agent can leverage them is
through tool use. All right, let's click
OK. Environment setup: I think we've
gone through this a couple of times, so
I'll run through it quickly. This just
sets up my environment and checks
everything is okay; and everything is
okay, so I'm going to hit check. Okay.
So task one: simple tool
function. So this is more like a
definition of how you define a
particular tool. So you specify a type
which is function and within the
function you have to define these
attributes. The name description.
Description is very important because an
agent makes a choice of calling a tool
through these descriptions and that's
why yet again parameters are also
important because the agent would
specify those parameters within this
tool calling. Then we have you know uh
the agent declaration and that's how you
define a particular tool. So in this
example, the very first task, we'll be building a calculator tool. So let's get started. This is our task one. Let's make some room. Okay, the very first to-do is to define a calculator tool function. So here we are. This is a method that takes an operation, and as you can see, we carry out arithmetic based on that operation: if it's add, we add; if it's subtract, we subtract. And a and b are the numbers, integers here. Pretty simple, right? Brings back memories. Here we need to specify the return type, so I'm going to specify float. Then for the second to-do, we have to create an agent with the calculator tool. The agent is defined: we initialize our chat client with the name and instructions, and here we have to specify our tool, which will be the calculator. Then we have a couple of test queries, and we run them. Let's go ahead. We're done with both of them, so let's bring back our terminal.
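The task-one steps above can be sketched in plain Python. The schema dict mirrors the type/name/description/parameters structure described earlier (field names follow the common OpenAI function-calling style, which is an assumption here), and the commented agent wiring at the end is hypothetical, since the framework's exact API isn't shown in the transcript.

```python
# Schema-style tool definition: type, then name/description/parameters.
# Field names follow the OpenAI function-calling convention (assumed).
calculator_schema = {
    "type": "function",
    "function": {
        "name": "calculator",
        # The description drives the agent's decision to call this tool.
        "description": "Perform basic arithmetic (add, subtract, multiply, divide).",
        "parameters": {
            "type": "object",
            "properties": {
                "operation": {"type": "string",
                              "enum": ["add", "subtract", "multiply", "divide"]},
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["operation", "a", "b"],
        },
    },
}

def calculator(operation: str, a: float, b: float) -> float:
    """Carry out arithmetic based on the operation; the float return
    annotation is the return type the first to-do asks for."""
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "multiply":
        return a * b
    if operation == "divide":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    raise ValueError(f"Unknown operation: {operation}")

# Hypothetical agent wiring (illustrative names, not the framework's exact API):
# agent = chat_client.create_agent(name="MathAgent",
#                                  instructions="Use the calculator for arithmetic.",
#                                  tools=calculator)
```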
We forgot the housekeeping, forgot to clean the terminal. Here we go. Okay, everything is working fine. Let's clear this, click Check, and we're good to go. Okay, task number two. Let's
start with it. In this task, we'll learn how to implement real-world tools that call external APIs. Perfect. Okay, so we have a weather database with Paris, London, and Tokyo, and the respective temperature, conditions, and humidity. So this is
pretty simple. So our very first to-do: this one is just about calling an external API. This tool here, get_weather, mimics a mock API. It takes a city and gets the current weather information for that city. All the information lies right here, so let's collapse it. Okay.
All right. So we get our data from our weather database, which again is mock data. Within our very first to-do, line number 34 right here, we have to specify a return type, just like we did in the last lab, and on line 43 we have to provide a formatted string like this one. So this returns the weather in the city with its respective temperature, condition, and humidity. Perfect. Now let's collapse this; we're done with it. And finally, here we have to use the get_weather tool. That's pretty much it. Let's copy, bring back our terminal, and hit Enter. Task two is complete. Let's clear this and check.
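A minimal sketch of the get_weather tool from this task, with an assumed mock database standing in for a real external API (the city values here are illustrative, not the lab's exact numbers):

```python
# Mock weather database, standing in for a real external weather API.
WEATHER_DATABASE = {
    "Paris": {"temperature": 18, "condition": "cloudy", "humidity": 70},
    "London": {"temperature": 15, "condition": "rainy", "humidity": 85},
    "Tokyo": {"temperature": 22, "condition": "sunny", "humidity": 60},
}

def get_weather(city: str) -> str:
    """Get current weather information for the city (mock API call)."""
    data = WEATHER_DATABASE.get(city)
    if data is None:
        return f"No weather data available for {city}"
    # Formatted string: weather in <city>, plus its respective
    # temperature, condition, and humidity.
    return (f"Weather in {city}: {data['temperature']}°C, "
            f"{data['condition']}, humidity {data['humidity']}%")
```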
Perfect. Let's move on to the next lab.
Okay, so now we're talking about multi-tool agents. All right, this is going to be interesting. In this section, we'll learn how to build agents with multiple specialized tools working together. The very first tool is again more like a mock tool: we'll be using search_web. It takes a query and just returns a mock response. On line number 25, which is right here, we again have to hook up our return type. Perfect. And on line number 45, we format and return the email confirmation message. So we have another method which we're going to use as a tool. This takes to, subject, and body, and is yet again a mock method. Here we have to specify a return type. There we go.
And yeah, we're done with it. Let's collapse it. Then we have a third tool. This is something you're already accustomed to; we just built one. We have an operation, we have two integers, and we carry out the operation: basically a calculate method. Line number 77 is next, right here. And as you can see, to use multiple tools you specify this array of tools here. So yeah, I think we're done. We have some test queries, and as you can see, they're all different. The very first one is calculate 50 * 20. The second one is search for Python tutorials. The third one is send an email to john@example.com about the meeting tomorrow. Then we have some printing mechanism here for you. Let's paste it and run. Oh, works like a charm, and task three is complete. Let's move on; I'll click Check and Next.
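The three tools from this task can be sketched as ordinary Python functions. The mock bodies and the commented agent wiring are assumptions, but the pattern of passing an array of tools matches what the lab shows:

```python
def search_web(query: str) -> str:
    """Mock web-search tool; a real one would call a search API."""
    return f"Mock results for '{query}'"

def send_email(to: str, subject: str, body: str) -> str:
    """Mock email tool; returns the confirmation message the agent relays."""
    return f"Email sent to {to} with subject '{subject}'"

def calculate(operation: str, a: int, b: int) -> int:
    """Simple arithmetic tool, like the calculator from task one."""
    ops = {"add": a + b, "subtract": a - b, "multiply": a * b}
    if operation not in ops:
        raise ValueError(f"Unsupported operation: {operation}")
    return ops[operation]

# Hypothetical wiring: the agent receives the whole array of tools and picks
# the right one per query based on each tool's description.
# agent = chat_client.create_agent(..., tools=[search_web, send_email, calculate])
```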
All right, so we're finally on our final task, which is about learning how to build robust, production-ready tools with proper error handling. Up till now we've seen tools, but what we've missed is robustness: putting in checks. We were expecting the LLM to pass a particular parameter into the function, but we haven't validated those parameters, and they can be problematic. Your agents can hallucinate, they can mismatch the types, and so on. So in this task we'll ensure that whatever parameter we get, we properly check it.
The very first to-do is on line number 32. Again, this is pretty much the same as the previous one: it returns a string. Then there's line number 43, which happens to be here. Let's walk through it. This is query_user_database, and up here we have some users with name, email, and role, but no ID. This particular method, however, works on an ID. So we first check whether the ID passed in is an integer or not; that's the first step of validation. The second step is this check, which we have to hook up here, so let's place it over here: if the user ID is less than or equal to zero, meaning it's not a positive number, we have to raise an error like "invalid user ID, must be a positive integer". Then on line number 54 we have to return a particular format. Let's copy this. Here we go.
Perfect. And then finally, on line number 70, we have to specify the tool, which is query_user_database. The rest is the test cases: look up user one, a valid user, should succeed, and so on. You can go through them, and here we print those test cases. So I'm just going to copy this and run. And here we go, task four is complete. We have all of the information here for you to walk through. Don't forget to look at it, stop for a while, reflect, and then move on.
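The validation pattern from this final task, sketched in plain Python (the user records and the exact error messages are illustrative):

```python
# Mock user database; note the records have name, email, and role,
# while lookups happen by integer ID.
USERS = {
    1: {"name": "Alice", "email": "alice@example.com", "role": "admin"},
    2: {"name": "Bob", "email": "bob@example.com", "role": "user"},
}

def query_user_database(user_id) -> str:
    """Look up a user by ID, validating the LLM-supplied parameter first."""
    # Validation step 1: type check — the agent may hallucinate a wrong type.
    if not isinstance(user_id, int) or isinstance(user_id, bool):
        return "Error: user_id must be an integer"
    # Validation step 2: range check — IDs must be positive.
    if user_id <= 0:
        return "Error: invalid user ID, must be a positive integer"
    user = USERS.get(user_id)
    if user is None:
        return f"Error: no user found with ID {user_id}"
    # Return a consistent formatted result for the agent to relay.
    return f"{user['name']} ({user['email']}), role: {user['role']}"
```

Returning error strings (rather than raising) lets the agent read the failure and recover, which is the robustness this task is after.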
Okay, so let's move on to the next section. And wow, congratulations: you've mastered function tools.