Hey everyone, thanks for joining us for the last session of our Python + AI Level Up series, on Model Context Protocol. My name is Anna, and I'll be your producer for this session. I'm an event planner for Reactor, joining you from Redmond, Washington.
Before we start, I do have some quick
housekeeping.
Please take a moment to read our code of
conduct.
We seek to provide a respectful
environment for both our audience and
presenters.
While we absolutely encourage engagement
in the chat, we ask that you please be
mindful of your commentary. Remain
professional and on topic.
Keep an eye on that chat. We'll be
dropping helpful links and checking for
questions for our presenter and
moderator to answer.
The session is recorded. It will be
available to view on demand right here
on the Reactor channel.
With that, I'd love to turn it over to
our speaker for today. Pamela, thank you
so much for joining.
>> Hello, hello everyone. Welcome to our Python + AI series. This is in fact the last session in the series. It's been a ton of fun; I've really enjoyed it. I love seeing so many of the same people in the chat over and over, and hello also to anyone who's watching the recording after the fact. This is our final session in a nine-part series. All of the videos are available on YouTube with slides and code samples, so if you missed anything, go back and watch it. I do recommend watching everything in order, because we sequenced the sessions deliberately; the concepts build on each other, and hopefully the series gives you a great foundation in how to use Python with generative AI. We had a lot of fun putting on this series, and we'd love to hear your ideas for other series that would help you grow as AI engineers and developers. So put your ideas for future topics in the chat.
So for today, our final topic is MCP, the Model Context Protocol. This is a topic that has just exploded in the last year. It's a really fun topic and an important one, and it could potentially be the basis of a lot of future technologies. Today we're going to talk about what MCP is, then build MCP servers in Python, with code you can follow along with, and then look at how we can use our AI agents with MCP servers, both our own and other people's.
So if you do want to follow along with the code, we have a repo for today. Let me double-check that URL... no, that's correct, that's the right URL, and we're going to put it in the chat so you can follow it. You'll go to that link, and that brings you to the repo. Once you're on the repo, click the Code button and you'll see a Local option and a Codespaces option. You can do local if you want to set it up yourself, but then you'll have to set up your environment variables, so we usually recommend Codespaces, where everything is set up for you. So I'm going to click "Create codespace on main."

That creates a GitHub Codespace in the browser, which is basically VS Code running inside the browser, with the code for the repository open and the right version of Python installed, plus uv and everything you need to run today's demos. I'll just let that get set up in the background and go back to the slides.
So first, let's talk generally about MCP and what it actually is.

Before we had MCP, if we wanted to integrate a service like an API with our generative AI application, we had to individually figure out how we were going to bring in that service: how is our LLM going to communicate with it? You might do it via tool calling, or maybe via structured outputs, but we had to figure it out ourselves each time. And other applications like ChatGPT and Claude and GitHub Copilot, all these generative AI applications, also had to figure out how they would integrate with other services. So we saw things like plugins; in GitHub Copilot we called them extensions and chat participants. There were all these different paradigms people were coming up with for connecting AI applications and chat assistants to other services and APIs.

That was getting difficult, and basically what people realized is: okay, we need a common way to connect these AI applications with external tools, data sources, and abilities. So this protocol, MCP, was created by Anthropic as a way to standardize these interactions. They created it originally for the Claude application, Claude Desktop. That's an application you can run that's similar to ChatGPT, where you're using an LLM to get answers to your questions and figure things out. So they created MCP originally for Claude Desktop, but it just took the world by storm. Luckily, they put it out there as an open protocol, and people realized: oh, this is something we need, this is actually huge. If we can all agree on MCP, then developers can write MCP servers once and have them be usable by anything that supports interacting with an MCP server.
So that's why MCP was created and why it's so helpful now that we have it. MCP is the medium that these interactions go through. Now, when we have applications like Claude and Copilot, and even ChatGPT, which finally supports MCP, and we want to connect them with other services, the standard way to connect them is through MCP servers. If somebody has a service and wants to make it easy for people to use it with these applications, they can choose to expose an MCP server. And when we're developing our own AI applications, like our own agents, we can also use MCP as a way of connecting to these other services. For those of you who were at the last two sessions, about tool calling and agents, we saw how we could connect agents to tools; we can now connect agents to an MCP server, and the MCP server exposes the tools. So we have a standardized way of getting tools from other servers.

Now, with MCP we get more than just tools. We also get resources, prompts, all sorts of other things as well. But tools are really where a lot of the great functionality comes from.
So this is the overall MCP architecture and the jargon that you need to know. In MCP we have MCP servers, MCP clients, and MCP hosts. A server is the thing that actually exposes tools, prompts, and resources. For example, there's a GitHub MCP server, a documentation MCP server, a Notion MCP server, a Figma MCP server. These are all servers exposing their functionality as tools, prompts, and resources. Then we want to actually consume the tools from those servers. For that we have an MCP host. An MCP host would be something like Claude, Copilot, or ChatGPT: an application that's capable of connecting to MCP servers. Within an MCP host, the host uses an MCP client to open a connection to an MCP server and start consuming the tools and other things from that server. So that is the overall architecture of MCP, and it helps to know these terms, because sometimes you're going to be in different parts of this diagram: sometimes you're writing the MCP server, sometimes you're making the hosts and the clients, and sometimes you're doing all of it. We're going to do a little bit of everything today.
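To make these roles concrete, here's a minimal sketch of the client/server relationship using the standalone fastmcp package (an assumption on my part; the "demo-server" name and the greet tool are invented for illustration). A tiny in-process "host" opens a client connection to a server and lists its tools, and the sketch degrades to a no-op if fastmcp isn't installed:

```python
import asyncio

# Assumes the standalone `fastmcp` package (FastMCP v2).
try:
    from fastmcp import FastMCP, Client
except ImportError:
    FastMCP = Client = None  # fastmcp not installed; sketch only

async def list_server_tools():
    if FastMCP is None:
        return []  # nothing to demo without fastmcp
    # The "server" side: exposes a single hypothetical tool.
    mcp = FastMCP("demo-server")

    @mcp.tool
    def greet(name: str) -> str:
        """Return a greeting for the given name."""
        return f"Hello, {name}!"

    # The "host" side: a client connected over the in-memory transport
    # (passing the server object directly instead of a URL or file path).
    async with Client(mcp) as client:
        tools = await client.list_tools()
        return [t.name for t in tools]

tool_names = asyncio.run(list_server_tools())
print(tool_names)  # [] without fastmcp; ["greet"] with it
```

In a real deployment the client would connect over stdio or HTTP instead of in-memory, but the host/client/server division is the same.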
In terms of MCP clients, there's actually a huge number of them. This is just a subset of the list from the Model Context Protocol website, and it really shows you how much support there is for MCP across the AI ecosystem, and how fast it's been adopted. It's really exciting for anyone who likes open standards, because this is basically the first open standard in the generative AI space that's actually been adopted across the board. I think the other big one we have is agents.md, for coding agents. It's exciting whenever everyone can agree on things, because that makes things better for developers. Now that everybody's agreeing on MCP, it means we can write our code just once and have it work in a lot of different environments. So as a developer, I love it: everyone's getting along and going along with MCP.
For today's example MCP host, we're going to use VS Code with GitHub Copilot. It has complete support for the MCP specification and stays very up to date with changes in the spec. It is an active specification, so there are new things being added all the time. We have people from Microsoft who work on the specification, along with people from Anthropic, Pydantic, and a bunch of other companies, working on it together to figure out what the issues are and what needs to improve. I see there's a question about authentication in the chat; authentication is a particularly hard thing to figure out, and the spec does talk about how to do it. You can pass around keys and tokens, but you can also do full OAuth authentication, and depending on what server you're connecting to, there are different ways of doing that auth. They put a lot of thought into the specification about how authentication works in this context.
So with VS Code, I'll show you how to install an MCP server. Here I'm actually using my local VS Code, just because it's a little easier to demo the existing MCP servers. What you can do is go to your Extensions tab, and if you want to find just MCP servers, the way to do it is to type @mcp, and that will show all these MCP servers you can install. You can see Markdown, GitHub, Playwright. GitHub is the one that I use most often. And chat, I thought I already had it installed; let's see, I might have installed it a different way. You can install these servers multiple ways, but I'll install it from here just to show you. And here, it says the MCP server wants to authenticate to GitHub. The GitHub MCP server is one that uses OAuth authentication by default. If I click Allow, I'm going to specify my username and... it looks like I'm already logged in, so it's got my OAuth token, and now it's going to use OAuth to communicate with that server. So some of the servers do require auth: some require API keys, some use OAuth, and some are completely public. It just depends on the server. We also have the Microsoft Learn server; I already have that one installed, and it's a completely public server that doesn't require any sort of authentication. So once we have them installed, we can go over to GitHub Copilot.
Here in GitHub Copilot, I want to be in agent mode, because agent mode is where I can use tools. In agent mode, I can click on the little tools icon, and it'll show me all the MCP servers I can enable. We've got too many GitHubs; I'm going to start by disabling the GitHub one. You can see this server has a ton of tools. The GitHub MCP server is super helpful, but it does expose a lot of tools, so much of the time I will actually disable it unless I know I really need it. Here you can see I've got built-in tools from VS Code, some extensions that expose tools, and then MCP servers that expose tools. So now I can ask a question that'll use one of these tools, like the Microsoft Docs server: "What support do Microsoft products like VS Code have for MCP?" We'll try asking that question. Now GitHub Copilot is going to look at its tools and send those to the LLM, which in this case is Claude Sonnet 4.5, and say: "Hey LLM, here's the user's question. Here are the tools. What tools do you want to run?" We can see it's already started running tools against this Microsoft Docs MCP server. You can expand this if you want to see what arguments it sent to the tools; here we can see the query it sent in, and we can even see the output. This is really helpful if you want to understand how it's working with these tools. It looks like it did a couple of queries to answer the question, and then we got back the answer. So that's nice for doing research.

Now, what's powerful about tools is that some of them can actually do things on your behalf. Remember, I actually logged into the GitHub extension. So let me enable the GitHub MCP server and say: "Find a stale issue from the Azure-Samples azure-search-openai-demo repo and decide whether it can be closed," and I'm going to add, "don't close it yet." Here's the thing, and it's a big warning: if you're using MCP servers that have access to tools that can make changes to your accounts, you want to be really, really careful, because they have access to change things on your behalf. Some LLMs in particular, like the Claude models, tend to be very eager about doing things for you; sometimes they can be too helpful and might take an action that you didn't actually intend. That's why I specifically added "don't close it yet": I've run this query before, and it decided to close the issue for me, and I was like, no, no, I didn't want that. So that's generally something you want to be careful about with MCP servers: be really aware of what tools are in the server, and whether they're read-only tools or whether they can actually make changes.
Here you can see it's running some read-only tools: searching the issues, getting issue details, getting issue comments. Then it analyzes the issue, and it says "suggested closing comment." So it suggests what to do. And if I wanted to, if I was confident this was a good response, or after modifying it, I could actually tell it to go and close the issue now: "Okay, great, close the issue." Now, I don't want to do that, because I do actually want to take a look at the issue and make sure this is a good response. But if I did, it could actually go and close the issue, because that MCP server has the ability to update issues. What you can do inside VS Code is customize which tools it can access. A lot of times I give it access to all the get tools, the list tools, and the search tools, but then I'll disable anything like "update issue" if I'm worried it's going to do something I don't want. So you can actually decide which tools from an MCP server you're going to enable.

All right. So that's how to use MCP servers from Copilot; other MCP hosts, like Claude Desktop, would be similar.
Now we're going to start looking at how to actually build an MCP server. First, it helps to understand the components. The big component is tools: we just saw lots of tools being used from those MCP servers, and tools are the useful functionality a server can expose. Servers can also expose resources, which you can just think of as data: some sort of helpful data that doesn't really require parameters. And they can expose prompts. If I go back to VS Code and type slash, I can see a bunch of prompts exposed by the MCP servers. GitHub actually builds prompts into their MCP server, like this one to assign an issue to a coding agent, or this one to fix a workflow. I could just click that prompt, then it's going to ask me for some parameters to insert into the prompt template, and then it'll actually write the prompt for me.
You can also use resources. If you click Add Context and then MCP Resources, you can see all the resources exposed by a server. Here we can see that the GitHub MCP server exposes resources, which are the actual bits of content in your repo. So you could grab the content of a specific path and have that be the context referenced in your conversation. Most of the time when I'm using an MCP server, it's all about the tools, but you can also expose resources and prompts and use those when you're using a server that exposes them. So it's helpful to know that MCP servers are more than just tools.
So now let's actually build a server
that exposes tools, resources, and
prompts.
Now, when you're building a server in Python, there are two main SDKs you can consider using. They're very similar, so it can be a little confusing. The first is the official Python SDK: it's designed to be fully compatible with the protocol specification, since it is from the MCP team, and it includes a nice MCP server in it. The second SDK is FastMCP. It's not from the official MCP team, but it has a ton of features, stays really up to date with the spec as well, and has more features on top of the official Python SDK. Now, what's very confusing is that the official Python SDK actually integrated version one of FastMCP, so you will see FastMCP inside the official SDK as well. But if you want the latest and greatest version of FastMCP, you should use that package directly. So that can be really confusing. For today, we're going to go with FastMCP version two, which is the standalone fastmcp package.
With this package, everything is very Pythonic: we can use Python type hints and decorators, we can do async, we can use annotations, and we can run the server in multiple ways. So let's see how we do it. We're going to go back to that Codespace, and see, it's all set up now. Let me make it way bigger. I'm going to go to my .vscode folder and click on mcp.json.
Here we have a configuration for the MCP servers associated with this project in particular. You can see this expenses MCP server; it's using uv to run a file. Basically, when you're making servers, you have two options. You can run local files, so you can use uv to run Python files; if you were developing in JavaScript, you'd use something like Node or npx. In Python, we're generally going to use uv, which is a way of running Python files. So here we're saying: okay, use uv to run this file.
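The stdio entry being described looks roughly like this in .vscode/mcp.json (the server name and file name here are illustrative, not copied from the repo's file):

```json
{
  "servers": {
    "expenses-mcp": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "expenses_mcp.py"]
    }
  }
}
```

VS Code reads this file per workspace; the HTTP variant shown later in the session instead uses `"type": "http"` with a `"url"` such as `http://localhost:8000/mcp`.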
So what I'm going to do is click the little Start button here. Make it even bigger. So: Start. And now it says it's running, and I can even look at the output from the logs here and see that it says it discovered one tool. So that's good.
Then I can look at the actual code it's running. Here it is: we can see we're using the fastmcp package, and this is an expense-tracking server. It's just doing local expense tracking; we're starting simple here. We've got this expenses.csv; it's basically going to use that as the database, save expenses there, and reference it for everything. What we have is a tool, and this is the only tool it exposes: it's called add_expense, and it takes five arguments. You can see that each of the arguments has a type and a description. Now, best practice: you could just annotate date with a type, and the MCP client would see, okay, it's a date type, but best practice is to always annotate your arguments so that the description gets sent as well. That helps the LLMs decide how to use a particular argument, because a lot of the time your LLM, your agent, is going to be figuring out what arguments to pass to these tools. So you want to give it as much information as possible: the types, plus additional information that'll help it format the values. Here we've got a date, a float, a category that's a Python enum (one of several values), a string, and another enum. And basically this file opens the expenses.csv and appends to that file.
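Here's a sketch of what a tool like that looks like with FastMCP. The field names and enum values mirror what was described (date, amount, category, description, payment method) but are my own guesses, not the repo's exact code; the CSV logic is plain stdlib, so the sketch runs even without fastmcp installed:

```python
import csv
from datetime import date
from enum import Enum
from pathlib import Path
from typing import Annotated

# Assumes the standalone fastmcp package (v2); the tool body itself
# is plain Python, so the sketch works without it too.
try:
    from fastmcp import FastMCP
    mcp = FastMCP("Expenses")
except ImportError:
    mcp = None

EXPENSES_CSV = Path("expenses.csv")

# Hypothetical enums; the repo's actual values may differ.
class Category(Enum):
    FOOD = "food"
    TRAVEL = "travel"
    OTHER = "other"

class PaymentMethod(Enum):
    CORPORATE_AMEX = "corporate_amex"
    PERSONAL_CARD = "personal_card"

def add_expense(
    expense_date: Annotated[date, "Date the expense occurred (YYYY-MM-DD)"],
    amount: Annotated[float, "Amount in US dollars"],
    category: Annotated[Category, "Spending category"],
    description: Annotated[str, "Short free-text description of the expense"],
    payment_method: Annotated[PaymentMethod, "How the expense was paid"],
) -> str:
    """Add a single expense row to the local expenses.csv file."""
    is_new = not EXPENSES_CSV.exists()
    with EXPENSES_CSV.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:  # write the header row once
            writer.writerow(["date", "amount", "category",
                             "description", "payment_method"])
        writer.writerow([expense_date.isoformat(), amount,
                         category.value, description, payment_method.value])
    return f"Recorded {description} (${amount:.2f})"

if mcp is not None:
    # Registering the function exposes it as an MCP tool; the docstring
    # and the Annotated descriptions are sent along to clients.
    mcp.tool(add_expense)
```

The point of the Annotated descriptions is exactly what's described above: everything in the signature gets surfaced to the LLM when it decides how to call the tool.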
So, let me go over to my GitHub Copilot. In Codespaces, it defaults to Ask mode; we're going to want to change that. I'm also going to make sure I don't have any context open, because I don't want it to get confused by seeing the file itself, so I'm going to close all the files. I'm going to change this to agent mode. The model doesn't really matter so much; you can pick your favorite model, and they should all work fine. Then I'm going to check which of my tools are enabled. When I open up this little tools panel, I can see which of the MCP servers are enabled, and I want to make sure the first one is checked, because we're going to see a couple of different ways of writing the server. For here, I want the one I've got running, which is just expenses MCP, so I make sure that's checked. Okay, all that looks good. And now I can say something like: "Yo, I bought $30 worth of sushi last night with my corporate Amex card."
All right. What we can see is that it immediately realizes it should use that add_expense tool, and it suggests the arguments to use: 30 for the amount, food for the category, here's the date, here's the description, and here's the payment method. And we see that it's asking us to allow it. This is what GitHub Copilot does when you're using a tool for the first time: it generally asks if you're going to allow it. In this case, I'll just say "always allow," and now that means every time it wants to run that tool in the future, it'll just go through. So it says, okay, it worked, the expense got added to the CSV, and if we look here, we can see that it did get added to the CSV. So that's it using the tool: it went through and ran all this code with those arguments.
So what else can we expose from our MCP server? We can also expose resources. When we define a resource, we say @mcp.resource and give the resource what looks like a path-style identifier, a resource URI for the expenses. Then we have a function, and that function needs to return a string. In this case, what it's doing is opening up that CSV and returning all the expenses data in a way the coding agent can use. If we want to reference this resource, we click Add Context, click MCP Resources, and click on the expenses resource, and then we can see that it's attached to the conversation. This is similar to attaching a file to your conversation with Copilot: you're saying, I really want to reference this bit of data. So then I can say: "Summarize my Amex expenses in a list." It's going to reference that expenses resource and make this nice list here. So that's a resource: it's a bit of data that your server can expose.
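A resource in that shape can be sketched like this (the expenses:// URI scheme and the function name are my own naming, not the repo's; the registration is a no-op if fastmcp isn't installed):

```python
from pathlib import Path

# Assumes the standalone fastmcp package (v2).
try:
    from fastmcp import FastMCP
    mcp = FastMCP("Expenses")
except ImportError:
    mcp = None  # fastmcp not installed; sketch only

EXPENSES_CSV = Path("expenses.csv")

def list_expenses() -> str:
    """Return the raw expenses data as a string the agent can use as context."""
    if not EXPENSES_CSV.exists():
        return "No expenses recorded yet."
    return EXPENSES_CSV.read_text()

if mcp is not None:
    # Resources are registered under a URI-style identifier.
    mcp.resource("expenses://all")(list_expenses)
```

Unlike a tool, the host doesn't call this in response to the LLM's decisions; you attach it explicitly as context, which is why a resource takes no real parameters and just returns data.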
All right, what else do we have? We can also expose a prompt. Here we're saying @mcp.prompt, and we give the prompt a name. You can even have arguments for your prompt; these are basically things you can use as template variables in the prompt you output. In this case we're saying: if you want to, tell me the category, start date, and end date. Then we return a string that we dynamically generate based on whatever was entered. So let me use this prompt so you can see how we would use it. Once again, to use prompts in GitHub Copilot, you type slash, and that pulls up any of the prompts exposed by the MCP servers and extensions. I'm going to use this second one, because that's the one from this current server. You can see it pops up this dialog asking for the arguments, and they're optional, so I could leave them blank, but I'll say "food" for the category and skip the others. And it says: "Please analyze my spending patterns and provide..." and so on. You can see it's a fancier version of the analysis I asked for before. It's kind of nice to always have that prompt available, if it's something you expect people to do a lot with this MCP server. So it sends the prompt and then answers all of this. It says I should limit my high-cost meals to special occasions. Well, I want to eat sushi every day. Actually, I learned how to make my own sushi, so now I can eat it every day. It's great.
All right. So there, as you saw, we were able to use tools, resources, and prompts. Once again, if you're going to make a tool, you use @mcp.tool and make sure you specify the names, types, and descriptions of each of the arguments. You also want to add a nice docstring to your tool, because that gets passed along as well. Basically, you should assume that all of the information in the function signature is going to get passed to the agent to decide whether to use your tool and how to use it, so you want to make all of that information really, really helpful. Then we had our resources, which are useful for data, and we had our prompts. Okay, so that's our basic Python MCP server built with the FastMCP SDK.
Now, I was running all of this using that uv run command. This is what's known as the standard input/output (stdio) transport. There are two different ways you can run MCP servers. You can run them as local files, which is convenient if you're developing, it's just something for you, and you have the ability to run uv and have that set up. It can be a little annoying sometimes, because you have to make sure your system can run Python correctly and that you've got all the required packages and all that. So that's mostly used for development and debugging. Most MCP servers are actually run over HTTP, specifically HTTP with streaming. With this approach, we just run a server, and that server exposes its tools at particular endpoints; when we're connecting to it, we just say, here's the server, connect to it and get all the tools, resources, and prompts. In practice, I think a lot of the MCP servers you'll be building will probably use HTTP.
So how do we actually serve MCP servers over HTTP? With FastMCP, we just change one line. Let me show you: with standard input/output, we just call mcp.run(), which defaults to stdio. If we want to change to HTTP, then, opening up this other file, basic_mcp_http.py, and scrolling to the bottom, what you'll see is that it says mcp.run(transport="http") with a host and a port. So this is going to run an HTTP server on port 8000.
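The end of a file like that might be sketched as follows (fastmcp v2 assumed; here the transport switch is wrapped in a main() function rather than run at import time):

```python
# Assumes the standalone fastmcp package (v2).
try:
    from fastmcp import FastMCP
except ImportError:
    FastMCP = None  # fastmcp not installed; sketch only

mcp = FastMCP("Expenses") if FastMCP is not None else None

def main() -> None:
    """Serve over streamable HTTP instead of the stdio default."""
    # mcp.run() with no arguments would use stdio; this single line is
    # the whole difference. Tools, resources, and prompts are then
    # served from the /mcp endpoint on port 8000.
    mcp.run(transport="http", host="0.0.0.0", port=8000)

# In the real file you'd call main() under `if __name__ == "__main__":`,
# then start it with: uv run basic_mcp_http.py
```

Everything else about the server (the tool, resource, and prompt definitions) stays exactly the same between the two transports.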
So how do we actually run this? Well, we need to run this file, because we need to get the server running. So I'm going to say: uv run basic_mcp_http.py. And we can see that it does say the server is running on port 8000. We can also check our Ports tab... oh, it's not showing up in the Ports tab. Okay, we can go and say Open. That opens up the server, and it says "Not Found." That's fine; that's what you're supposed to see at the root of the MCP server domain. So that is our server running.
Now, how do we actually use this with our Copilot? We can go to mcp.json. I'm going to stop the old server, the one over standard input/output. Then I have this server here, and you can see this one says type http, localhost:8000/mcp. That's the MCP endpoint that exposes everything. So I'm going to say Restart. Okay, it says it discovered one tool; that's good, which means it's running and actually connecting to my server. And if I look back at my server logs, I can see that it got a request for listing tools. You can see all these requests came in from VS Code, which was checking: hey, is this server up, is it alive, can you tell me what tools you have? So it asked for the tools and it asked for the prompts. Now we can go back to our chat and check which servers we have enabled. Here it's got everything checked, so I'm going to uncheck most of these and make sure only this last one is checked. That's important, because all these servers have very similar descriptions and tools; if I had all of them enabled, it would get confused and try to use one that wasn't running. So I want to make sure it's got the one selected that I want to use.

All right. Somebody mentioned avocado, so let's say: "I got an avocado sandwich for $30 in San Francisco on my corporate Amex yesterday," [laughter] which is definitely something you would do in San Francisco. Once again, I know how to make my own avocado sandwiches at home, so I don't have to pay $30 in San Francisco, but, you know.
So, here we go. It ran on the server... actually, it ran on the old one. How did it run on the old one? I don't even know how it managed that. I'm going to double-check. Ah, it auto-started. Okay, not cool. For some reason it has decided to start all of mine, even though we have a setting that says don't auto-start. So I'm going to stop them again and make sure we only have the one server running, because once again, if there are multiple servers running that all look really similar, it's just going to pick whichever one it decides on. All right: unselect, unselect, press OK. Check again, make sure they are not secretly running, check my tools again. Okay, feeling good. And let's say: "I got avocado sushi for 20 bucks on my Amex today."
So, this time you can see it says expenses MCP HTTP, which means it is using the HTTP version of the server, and we should see it get processed in our server logs. That looks good; I will allow it, and we can see it says processing request of type CallToolRequest and adding the expense. This is the log for our server, so we can see that the request did go out to our server and that our server processed it. Okay, great. So now we have really similar code running over the server; the difference was really just the last line, plus we had to run that server explicitly ourselves to make sure it was up and receiving at that port.
Now we're going to give you a few development tips for MCP servers. One is that there's a great tool called the MCP Inspector. It's a tool you can run with npx, and it's easier to run it on your own machine; unfortunately there are some issues running it inside Codespaces due to security and cross-origin restrictions, so we recommend running this one on your actual machine. Let me run the command; I think I've got it here: npx @modelcontextprotocol/inspector. So here I'm running the latest version of the inspector, and I'm going to press enter.
That starts up the inspector and opens it here. Now that the inspector is running, I can say what sort of server I want to connect to; I'm going to connect to an HTTP server. For the URL, I'm going to go to my Codespace and expose port 8000... let me see if it offers to expose that. Okay, it's not showing up in my ports. Let me try adding it: 8000. All right, it's not letting me expose it; for some reason it's not exposing the port. So I won't be able to show you with Codespaces, but let me run it locally as well. Okay, I do have it running locally at localhost:8000/mcp. Let me just make sure... that looks good. So then I'm going to tell the inspector to connect locally, and it is now connected. This is connected to the server that's running here in my local VS Code. And once we have it open,
we can now explore the server. We've got resources, prompts, and tools, plus a bunch of other things we didn't even talk about: there's ping, sampling, elicitations, roots, and auth, so there's a lot you can test out here. Today we're focusing on the first three. We can say list resources and see the get-expenses-data resource; we can call it and get back a response. We can list the prompts, see that prompt, and get the prompt back here to see what it will look like. And then tools, the one people are using the most: we can list the tools and fill one out. Date 2025-08-01, amount $30. For category, we have to put the value in quotes because they're still working on their dropdown support for enums. Sushi dinner, payment method... oh, that should be Amex, and this should be food. Okay, and then I'm going to run the tool. Let's see... there we go, it added the expense. So this is another way you can test out your MCP servers and just interact with them. It's a tool for developers, but it can be really handy to use.
All right, so that's one tool you could use. Another is that you can actually do breakpoint debugging of MCP servers in VS Code, which is really cool. I'm always excited when we can use breakpoint debugging, because when you do have errors in Python code, one approach is to put a bunch of print statements in your code, but that can be painful if the error is hairy and difficult. What's really nice is using a breakpoint debugger to step through the code. So here's how I'm going to do it: let me close down the HTTP server, and then I'm going to add a breakpoint. To add a breakpoint, I click right here in the gutter of the stdio server file; you see the little red dot show up. That means when I'm debugging this file, it will stop at that point and let me step through it line by line. So we've got the breakpoint. Then I'm going to stop that server and start up the debug version of the server here; I have to give it a few additional arguments so that it's able to be debugged. So now that should be running.
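The transcript doesn't spell out those extra arguments, but one common way to set this up (an assumption, not necessarily what's shown on screen) is to launch the server under debugpy, e.g. `python -m debugpy --listen 5678 --wait-for-client server.py`, and then attach VS Code with a launch.json entry along these lines:

```json
{
  "name": "Attach to MCP server",
  "type": "debugpy",
  "request": "attach",
  "connect": { "host": "localhost", "port": 5678 }
}
```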
And then I'll go over to my debugger and run it. Now you can see the debugger is up, with this kind of step-by-step view. Next I'll go over to agent mode, make sure I've got the debug server enabled, and then say something like: okay, 20 bucks of pizza on my Amex today. All right, let's see... it wants to add it, and boom, it did it. I'm so excited when this works. It stopped at the breakpoint, and now we can see this really nice overlay: we can see exactly what is coming into our function. We can look at the Python types, we can watch variables, all the things you'd expect from breakpoint debugging. Then you can step line by line through the code and see exactly what it's doing and how the variables change over time, and when you're happy, you can just play through to the end and see it go. So those are two big tools you can use: the MCP Inspector and breakpoint debugging.
Now let's get to the question a lot of people were asking: how can you point your agents at MCP servers? We had a talk yesterday about agents, where we showed how you could build agents using Agent Framework, using LangChain, using Pydantic AI; there are so many great agent frameworks out there. You can of course point those agents at custom tools, but you can also point them at MCP servers that expose tools, and that's a common way of connecting agents to MCP servers.
So the first example we're going to look at is the Microsoft Agent Framework pointing at a hosted MCP server, i.e. an external one. That example is here in the same repo: go to the Agent Framework MCP Learn example. Here we have to configure our model to point at wherever our model is hosted; by default that's GitHub Models, which is free inside GitHub Codespaces, so you should be able to run it. [snorts] Then we have the actual code. What we do is say: all right, we are connecting to this MCP server, and we pass in a name, a URL, and headers... actually, we don't need the headers for this one. [laughter] That's why it says bearer token; let's remove the headers there. We don't need them, because this is a public MCP server that doesn't require authorization.
And then we have our agent, and there are two different ways to wire it up: you can pass the MCP server directly into the agent, or you can pass the MCP server to an individual query. In this case, we're passing the MCP server into just this particular query. You can do it either way; it depends on whether you want the agent to always have access to that server. In that case, it would look like tools=mcp_server on the agent; otherwise you pass it specifically into the query. So let's run this. It is running, and it's going out and using that MCP server to get information and then responding. We can see that it comes back with an answer and says, for more information, check the official docs. This is really nice, because now we're able to get answers to questions that have citations. That's particularly useful for documentation, because a lot of times if you just ask an LLM a documentation question, it'll go on its outdated knowledge, and things move so quickly these days that we really want the most up-to-date information. So giving it a documentation server where it can actually go out, get that information, and then cite where it got that information from is really, really helpful.
So there we go; that was running against a hosted server. Now, how would we run against our own local server? Here I've got a LangChain example, and it's running against localhost:8000, which means I need to run that server again. So let's run the server again and make sure it's exposed on our port-8000 URL. Okay, now it's running. We're going to connect to that server down here: we make an MCP client that can connect to this server over HTTP, and then we pass the tools into our agent. The message says, yesterday I bought a laptop for $1,200 using my Visa, and then we send that message to the agent. Okay, let's go ahead and run this code.
And we should see... yeah, we can see stuff happening in our server: it's working with our server, the request went through, and it added that expense to the expenses. Then we also see the response here. So that's how you can point an agent at your own server. And you can specify authorization: if your server were protected with a key or some sort of token, you could pass that in as additional headers to that MCP client. You can do these things with any of the agent frameworks; I've done this with Agent Framework, with LangChain, with Pydantic AI. Whatever framework you're using, you should be able to connect to either a hosted server or a local server, and you should be able to pass in headers in order to control the authorization.
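What "pass it in as additional headers" usually amounts to is just a standard bearer-token header; the environment variable name below is an assumption for this sketch:

```python
import os

# Read the server's token from the environment rather than hardcoding it.
token = os.environ.get("MCP_SERVER_TOKEN", "demo-token")

# MCP clients that support HTTP typically accept extra headers in roughly
# this shape, which the client then sends on every request to the server.
headers = {"Authorization": f"Bearer {token}"}
```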
Now, one tip I want to give is to consider filtering the tools from MCP servers. I showed earlier how we could do that in Copilot: make sure we're only giving it access to the tools it needs, say the read-only tools that are going to be useful for the task. We can do the same thing when we're pointing agents at servers. The example I have uses LangChain. Here we've got some LangChain code, and this one is connecting to the GitHub server, which, you remember, does take a token. So it grabs the GitHub token that's available inside GitHub Codespaces, gets all the tools from the server, and filters them down to only three of the tools. So I'm only giving access to three of the tools, and then I'm sending off my request: find popular Python MCP server repositories. Okay.
So let's try this out and see what it decides to do. What we can see is that it prints the filtered tools; it has a bunch of blocked tools, and now it's going to actually run the query with just those filtered tools. We got a bunch of results, including the one we're using today and some other popular MCP servers. So I really want to encourage you to consider filtering tools from servers. That way you reduce the number of tools you throw at the LLM, because if you show the LLM 97 tools, it now has to decide between 97 tools, and that's a big decision. You're going to help it make better decisions if you know ahead of time that you only need some of those tools; you're going to reduce your context window, the number of tokens you're using; and you're going to speed up your requests by a huge amount, because you're not sending as much information. So if you know ahead of time that you only need a subset of the tools, filter down those tools to only what's needed.
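Independent of any particular framework, the filtering step itself is just a pass over whatever tool objects the client hands back; the allowlist names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    """Minimal stand-in for a framework's tool object."""
    name: str
    description: str = ""

# Hypothetical allowlist: only read-only search/browse tools.
ALLOWED = {"search_repositories", "get_file_contents", "list_issues"}

def filter_tools(tools: list[Tool], allowed: set[str]) -> list[Tool]:
    """Keep only the tools whose names are on the allowlist."""
    return [t for t in tools if t.name in allowed]

all_tools = [Tool("search_repositories"), Tool("create_issue"), Tool("list_issues")]
safe_tools = filter_tools(all_tools, ALLOWED)
# create_issue (a write operation) has been dropped from safe_tools.
```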
All right. So now we've seen how to build our own servers, how to use other people's servers, and how to point agents at MCP servers. What I want to talk about next is that some people are starting to use MCP as a way of organizing the architecture of internal services within an organization. If you're at a company and you're seeing lots of people starting to build agents, and build common tools that are shared across those agents, then your company might consider using MCP as the internal mechanism by which multiple agents from different teams can use each other's functionality. It's basically microservices, but for your AI agents. I went to a great talk from Uber, and Uber has a whole team in charge of their internal MCP servers and MCP registry, helping people across Uber build agents on top of shared, common MCP servers. So this is a pattern that's starting to emerge at organizations: you're building internal AI agents, you're building internal functionality for those agents, so you might as well expose that functionality over MCP and have internal MCP registries for discovering it. So what would that look like?
If you were going to do this on Azure, you could deploy these MCP servers to things like Azure Functions, or your host might be Azure Container Apps. You could put them all within the same virtual network, so they'd all be communicating safely inside it, and of course you could also put them behind authorization as another layer of security. That's what this sort of architecture could look like. We do have an example template, our Azure AI travel agents template, which uses this sort of architecture; it actually runs MCP servers in multiple languages: TypeScript, Java, Python, and Go, I think. Yeah, those are all the languages we have. And that's the interesting thing, too: when you're using MCP, you can have different servers written in different languages, because that definitely happens at companies; you use the best language for the job, or the one the team knows. You can have agents built in different languages too, and then they can all communicate over MCP. So that is something to consider in terms of how you start to adopt MCP within larger organizations and companies.
Now I just want to talk briefly about what you want to consider if you're putting MCP into production: if you're exposing an MCP server to the world, or even internally within an organization. The first thing you definitely want to consider is private networking. If you're deploying to Azure, we have a lot of private networking options: you could put everything within a virtual network, which ensures the communication can only happen within that network, and you can set up networking rules to say what ports and endpoints are allowed. That's a really good approach if your MCP architecture is all internal and you don't need to expose it to the world.
If you are going to be making a public-facing server, then many of you are probably thinking about authorization, and this is where you need to be really, really careful. Of course, you could do API keys; API keys can be fine depending on the kind of API you're exposing, but many people are using MCP servers to expose user-specific functionality, and that means you need user auth: you need OAuth in order to identify the user and then do things on behalf of that user. If you check the MCP authorization specification, you'll see that it builds on top of OAuth, but it requires additional functionality that is not available in all OAuth servers. For a very specific example, Microsoft Entra does not actually support everything necessary for MCP auth, which is really sad for those of us who work at Microsoft; hopefully it will come to support it in the future, but for now there are some additional OAuth extensions that Entra has not implemented. So if you needed to put up an MCP server using Microsoft Entra, you would have to use what's called an OAuth proxy in order to do that safely. If you're not using Entra, you just want to make sure that whatever identity server you're using fully supports everything in the MCP auth spec, not just plain OAuth. A great example is the open-source server Keycloak, which does support everything necessary; I know some people who are using Keycloak, and there are also other identity providers that support what's needed for the full MCP auth spec. Yeah, we're just going to sit here and beg Entra. Come on, Entra. Come on. Maybe if I publicly shame them in the video, [laughter] could it work? But there is an OAuth proxy for Entra being set up in the FastMCP package, so you can stay tuned for that; it is a viable approach. We'll keep you updated about the right approach to use if you are using Entra, because we know lots of people do want to use Entra for their servers, so we're going to keep up to date on what's viable there.
Generally, at Microsoft we are really all in on MCP. I know I just said we're a little behind on the Entra front, but we have so many teams working on MCP and adding MCP support to products; if you look across Microsoft, you will see support for it in many of the products. And I think you'll find this at lots of companies: everybody's going all in on MCP and figuring out how to add it, because once you add MCP support to a product, you multiply the power of that product; now anyone can take any MCP server and plug it in, and boom, you've got something much more capable. Once again, you do want to be careful about which MCP servers you're using, which tools you're giving access to, and the security risks you could be subjecting yourself to.
If you do want to learn more about MCP, we have an MCP for Beginners curriculum at this GitHub URL that goes through everything and has lots of articles. We've got more templates that show you how to deploy MCP servers to Azure; this is one that I've deployed before, which shows how to deploy an MCP server to Azure. It actually adds some authentication on top with Entra, which does work, but only with VS Code; it doesn't work with other clients. So it at least shows you how you would do Entra auth in VS Code. And then we've got tons of MCP servers at Microsoft. There's the Microsoft Learn MCP, which we saw; GitHub's MCP server, which we also saw; the Azure MCP server, if you're using Azure resources and want to ask questions about them or act on them; and the Playwright MCP server, which is super useful for any sort of browser manipulation.
So, we now have three minutes left, and that is what I wanted to share about MCP. I'm going to check the chat now and see what questions we have. Any plans for a Microsoft MCP UI framework? All right, let's talk about MCP UI; that's pretty cool. The idea of MCP UI is to standardize UI on top of MCP. Imagine a world where, instead of having websites, we do everything in chat clients like ChatGPT and Copilot; in that case, you really want a rich UI so you're not just using text: you've got buttons and dropdowns and things like that. If that world comes to fruition, which it could, maybe websites aren't even a thing anymore; maybe two years from now we're not dealing with websites, and we're just using these chat interfaces for everything. You could argue that would be a better world: right now, whenever you go to a website, you have to figure out how to navigate it, whereas if everything is exposed through MCP servers, you just type in some text and the server helps you out. So some argue that might actually be a more consistent user experience for people. The idea with MCP UI is to make these UI components. Let me see; I have never actually tried this live demo. An error occurred. Okay. [laughter]
But definitely check out MCP UI. I don't know of any particular plans for Microsoft here; we might have plans, but if we do, I don't know about them. It is a very exciting thing to think about, though. I do really recommend the Kent C. Dodds talk, "The Future of User Interaction"; I really like it. He laid out that whole vision. Sorry, that was a poor link; maybe Anna can clean up that link for me. But he really lays out the vision and includes some demos of MCP UI.
Okay, I also have another question: any certification we can go for after this series? What do we recommend? Well, it's not a certification, but I do recommend the MCP for Beginners repo here. Let me check whether we have any certifications around MCP... let's see, Microsoft Learn certifications. If anybody else knows, do mention it in the chat; I don't remember us having one for MCP. I think the one lots of people recommend is this one, Azure AI Engineer Associate. I think that would generally be a good one to go for after this series. Let me post it here, and if anybody has their own recommendations for other credentials... Gwen mentions AI-102. Let's see what that one is; let me just search for it. Microsoft AI-102... oh, I feel like that's the one I just linked to. All right, yeah, so good, that's the same one; check it out, it could be helpful. Oh, and let's see: we've got a bunch of resources generally for Python and AI, so Anna's going to add that link to the chat. Oh, we've had a recommendation for AI-900, Azure AI Fundamentals. All right, so definitely that one first.
All right, we're going to do all the link drops right now. All the link drops. And we are streaming to multiple channels, which is why you'll sometimes see me enter something in the YouTube chat and then post it out to all the channels; we stream this series to YouTube, LinkedIn, Twitch, TikTok, all the places. So we've got questions from all over.
Let's see, I see there's a question: how is agent-to-agent (A2A) different from MCP? It's a great question. I actually have not played much with A2A, so maybe someone with more experience can comment on it. It's a protocol that came from Google, is now part of the Linux Foundation, and is about agent interoperability. I believe it exposes things at a somewhat higher level. I've seen A2A examples; I was poking around in the Microsoft Agent Framework, because it does have support for A2A. Sometimes, for me, the best way to understand something is by looking at an example of it. So let's see: agents, A2A... okay, an agent with A2A. Here we can see that it's connecting to a host, and it looks like the agent has to define a well-known agent.json that describes the agent; then you can create an agent based on the information you get back, and presumably that agent exposes tools. So that's as much as I know about A2A: I know we have support for it in Agent Framework, and it means that when you make your agent, it needs to expose information about itself. But I don't really know how to compare it to MCP in terms of what functionality those agents expose.
Let's see if we have any... okay, this is a good question from Pablo: are there protections available for AI agents that use MCP servers, to avoid agents performing unwanted or dangerous actions with those servers? That's a great question. So the thing is, in MCP itself, when we're defining our tools... let me find the tools documentation. Tools, and then decorator arguments... I'm looking for this one. Okay.
So in the tool definitions, you can actually give hints as to whether a tool is read-only, destructive, idempotent, or open-world. The idea is that tools can document their potential to wreak havoc. Now, I would say I wouldn't trust these hints unless the tools were internal and there was a good vetting system for them. But I did want to point out that there has been some thought that, hey, maybe this is something we could actually use to specify which are the read-only tools, because certainly if a tool does say it's destructive and open-world, you want to be very careful. However, I'm a little concerned that if a tool doesn't say that, you shouldn't assume the opposite. Any tool, even if it claims to be read-only, you don't actually know is read-only unless you have inspected its code incredibly carefully, because it could do a fetch to a server, pass information along, and boom, it has conveyed information to the world. So you can assume that if something says it's destructive, you should probably trust it on that point; but even if it says it's read-only, I'd be careful. Maybe this is something you could use internally, though. So that's just something I wanted to point out: you can decorate a tool with these hints and actually declare that. Now, when you're using MCP servers from frameworks, once again I would recommend filtering. With the Agent Framework in particular, you can see they have a few different interesting options here. There's allowed tools: I can pass in and say these are the only allowed tools. That's basically how we do tool filtering in Agent Framework.
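The hint names (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP tool-annotations spec; a pure-Python sketch of the cautious policy described above, where a missing hint is treated as dangerous rather than safe, might look like:

```python
def needs_human_approval(annotations: dict) -> bool:
    """Decide whether a tool call should pause for human approval.

    Hints are advisory, so we default to the dangerous interpretation
    whenever a hint is absent: an unlabeled tool is NOT assumed read-only.
    """
    if annotations.get("destructiveHint", True):
        return True  # declared destructive, or simply unlabeled
    if annotations.get("openWorldHint", True):
        return True  # talks to the outside world, or simply unlabeled
    return not annotations.get("readOnlyHint", False)

# A tool that explicitly declares itself safe can skip approval...
safe = {"readOnlyHint": True, "destructiveHint": False, "openWorldHint": False}
# ...but a tool with no annotations at all always requires it.
unlabeled = {}
```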
And then there's also an approval mode. That's something you want to think about anytime you're building an agent: do I always want to allow the agent to take any action, or are there particular tools I want to mark as requiring approval? We see that here, and we also see it in LangChain: if we go over to the new LangChain docs and look at the middleware, the human-in-the-loop middleware, you specify, okay, for this tool I'm going to allow these things, and I'm going to auto-approve this one. So both of these frameworks have the notion that it should be easy to specify which tools require approval and to bring a human into the loop when something does require it. So definitely look into that, and think really carefully about what you're going to do. When I made my agent for triaging issues, I only gave the original agent access to read-only tools; then it would say, okay, now it's time to bring a human into the loop, and the human would have to click a button before it had any access to update issues at all. And that's the way I like to do it.
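The shape of that human-in-the-loop gate, stripped of any framework, is roughly this (the tool names and the approval callback are hypothetical):

```python
from typing import Callable

# Tools the agent may call freely vs. ones that pause for a human.
READ_ONLY_TOOLS = {"list_issues", "get_issue"}
APPROVAL_REQUIRED = {"update_issue", "close_issue"}

def call_tool(name: str, run: Callable[[], str],
              approve: Callable[[str], bool]) -> str:
    """Run a tool, but route write operations through a human first."""
    if name in APPROVAL_REQUIRED and not approve(name):
        return f"Tool '{name}' blocked: human declined approval."
    return run()

# Example: an auto-decline approver, as you might use in a test.
result = call_tool("update_issue", lambda: "updated", lambda name: False)
```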
All right. So, I think we've answered a good number of questions here. I see some more coming in, but we should wrap up for today; I do have to head over to San Francisco again for the second day of PyTorch and help answer questions from all the folks there. If you do have any additional questions, you can post them in the discussion thread here, the one that has all the resources. I know people posted questions yesterday, and I haven't had a chance to reply yet, but I will get to them after the conference, so you can keep asking questions there and I will answer them. You can also come to office hours: we'll keep having office hours on Tuesdays in Discord, so come to next Tuesday's office hours and bring lots of questions. We very often end up talking about MCP in office hours; sometimes we think maybe we should have an MCP-only office hours, since it's a very hot topic right now. So you can definitely bring any MCP questions there, and we can also explore new things like A2A. So that is the end of our series, at least the English series. If you want a little bit more, we will have the Spanish version of the MCP session this afternoon with Gwen, and that will be our final, final video in the series. It's been really great; I love all the questions, all the comments, all the great vibes in the chat, the office hours, and the discussions, so I hope you all enjoyed it. If you do have ideas for more series we should do, more topics, post those ideas here in the comments, in the thread, in the Discord; just find a way to let us know so it can go into our planning. It's super fun to put on these series; it's a great way for me to learn these things as well and to really dig in and share all this knowledge. All right, thank you so much everyone. I hope you have a great day. Bye.
Thank you all for joining and thanks
again to our speakers.
This session is part of a series. To
register for future shows and watch past
episodes on demand, you can follow the
link on the screen or in the chat.
We're always looking to improve our
sessions and your experience. If you
have any feedback for us, we would love
to hear what you have to say. You can
find that link on the screen or in the
chat. And we'll see you at the next one.
In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, to make them more powerful and flexible. We'll show how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point AI agent frameworks like LangGraph and Microsoft Agent Framework at MCP servers. With great power comes great responsibility, so we will briefly discuss the many security risks that come with MCP, both as a user and a developer.

This session is part of a series. Learn more here: https://aka.ms/PythonAI/2

Chapters:
00:06 – Welcome & Housekeeping
01:03 – Series Recap & Final Session Overview
02:09 – What is MCP and Why It Matters
04:09 – MCP Architecture: Servers, Clients, Hosts
07:10 – Using MCP Servers in VS Code with GitHub Copilot
13:00 – Demo: GitHub MCP Server Authentication & Tool Access
17:45 – Building Your Own MCP Server in Python
20:55 – FastMCP SDK vs Official SDK
26:15 – Demo: Expense Tracker MCP Server with Tools
27:42 – Exposing Resources and Prompts in MCP
30:36 – Running MCP Servers via HTTP
34:53 – Debugging & Inspecting MCP Servers
42:12 – Connecting AI Agents to MCP Servers
47:04 – Filtering Tools for Safer Agent Interactions
50:04 – MCP in Enterprise Architectures
52:27 – MCP in Production: Networking & Authentication
56:02 – MCP Resources, Templates & Microsoft Support
58:35 – MCP UI Framework & Future Possibilities
59:30 – Certifications & Learning Paths
1:02:00 – Agent-to-Agent vs MCP Protocols
1:03:11 – Security & Tool Permissions in MCP
1:07:12 – Wrap-Up, Office Hours & Final Thoughts

#MicrosoftReactor #learnconnectbuild [eventID:26300]