Hello. Hello everyone. Thanks for
joining us for our next live session.
I'll be your producer for our session.
But before we start, I do have some
quick housekeeping.
Please take a moment to read our code of
conduct.
We seek to provide a respectful
environment for both our audience and
presenters.
While we absolutely encourage engagement
in the chat, we ask that you please be
mindful of your commentary, remain
professional, and on topic.
Keep an eye on that chat. We'll be
dropping helpful links and checking for
questions for our presenters to answer.
Our session is being recorded. It will
be available to view on demand right
here on the Microsoft Reactor channel.
With that, I'll turn it over to our
speaker for today. Thanks for joining
us.
Thank you and welcome everybody. I hope
you're having a good day and I hope
you're looking forward to how to unlock your agent's potential with MCP, or the Model Context Protocol.
Okay, so I'm your presenter today. I'm
Daniela Matthews. I'm a cloud solution
architect at Microsoft. I'm a long-term
Microsoftie, uh, coming up on 13 years now. And you can find some links to my
socials and my GitHub there on the
slide. So, what we're going to do today
is we'll just start with a quick
overview of agents, the AI foundry agent
service and MCP
and then we'll dive into a demonstration
of all of these working together. I want
to give a big shout out to my colleagues
David Glover, Aaron Pal, and Marlene
Mangami for putting this content
together. Um, it's a really great
workshop that anybody can do. So, I
think we're also lucky enough to have
Aaron and my other colleague Michelle, um,
moderating today in the chat. So, please
feel free to make comments, bring on the
questions.
All right. So, what is an agent? We're
just going to keep this quite simple.
We're going to think about what
differentiates AI agents from, you know,
just talking to a normal uh large
language model.
There are lots of different definitions
for an agent, but this is the one that
we've settled on. So, agents could be
doing something as simple as data
analysis, or they could be doing
something really complex like onboarding
a new hire.
So, an agent is semi-autonomous software
that can be given a goal and will work
to achieve that goal without you knowing
in advance exactly how it's going to do
that or what steps it's going to take.
So I think semi-autonomous is kind of
the flavor that we're talking about
here.
And then how can MCP or model context
protocol how can it help us?
So you know AI agents have been around
for a little while now. They're very
popular and at the moment um you know
they work best when they have access to
several sources of data and information.
But every new integration that you might
want to make with your agent requires
its own custom implementation.
And so this led to you know an explosion
of custom extensions.
So this is really difficult to scale.
It's really difficult to govern and it's
difficult to make secure and depending
on how many integrations you need this
might be like a lot of overhead uh as a
developer. Also, if you're developing
for a company that maybe has lots of
different systems looked after by lots
of different teams, then we're now
introducing like a bit of a people
problem to this as well.
So, MCP helps us standardize our
integrations. You know, we're trying to
make our agents better and context aware
and to deliver all different sorts of
insights. So, MCP really helps us tie
all this together.
Here's a little definition for Model Context Protocol. So MCP is an open protocol that standardizes how applications provide discovery, actions, and context to large language models. And this is a really popular term: it's like USB-C, but for AI. Um, and this
enables business agility because it
gives us more time to build cool stuff
and improve our agents rather than
writing like custom integration code.
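To make that "standardizes" concrete: under the hood, MCP is JSON-RPC, and every server answers the same two requests, `tools/list` for discovery and `tools/call` for invocation. Here's a minimal stdlib-only sketch of that shape; the `get_sales_by_store` tool and its data are made up for illustration.

```python
# Minimal illustration of the two MCP methods every server exposes:
# "tools/list" for discovery and "tools/call" for invocation.
# The tool itself ("get_sales_by_store") is a hypothetical stand-in.

TOOLS = {
    "get_sales_by_store": {
        "description": "Return total sales per store",
        "inputSchema": {"type": "object", "properties": {}},
        "handler": lambda args: {"Seattle": 1200, "Spokane": 800},
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request the way an MCP server would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":
        result = {"tools": [
            {"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        tool = TOOLS[params["name"]]
        result = {"content": tool["handler"](params.get("arguments", {}))}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_sales_by_store", "arguments": {}}})
```

Because every server speaks this same shape, an agent only needs one client implementation to talk to all of them; that's the whole "USB-C" point.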
So let's have a little look at the AI
agent service in action.
So the Foundry SDK is doing a lot of the
kind of heavy lifting for the agent
itself. So it creates an agent, it
creates a thread which is what happens
when we talk to the agent. So like a
unit of conversation.
It then runs the agent. It just checks
that the status is complete and that no
errors have occurred and then it returns
the output to us um as a response.
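As a rough sketch of that flow (create an agent, create a thread, run it, check the status, return the response), here is a stdlib-only simulation. None of these function names are the actual Foundry SDK API, which varies by version; this only shows the lifecycle.

```python
# Simplified, stdlib-only simulation of what the Foundry SDK does for you:
# create an agent, create a thread (a unit of conversation), run it,
# poll until the run status is "completed", then return the response.

import itertools

_ids = itertools.count(1)

def create_agent(model, instructions):
    return {"id": f"agent-{next(_ids)}", "model": model,
            "instructions": instructions}

def create_thread():
    return {"id": f"thread-{next(_ids)}", "messages": []}

def run(agent, thread, user_message):
    thread["messages"].append({"role": "user", "content": user_message})
    status = "in_progress"
    while status != "completed":      # the SDK polls the run status for us
        status = "completed"          # stand-in for the model doing its work
    reply = f"[{agent['model']}] processed: {user_message}"
    thread["messages"].append({"role": "assistant", "content": reply})
    return {"status": status, "response": reply}

agent = create_agent("gpt-4o-mini", "You are a sales analysis agent.")
thread = create_thread()
result = run(agent, thread, "Tell me the total sales by store")
```

Because follow-up messages land on the same thread, the conversation context carries forward from question to question, which is what the demo relies on later.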
So then this agent is supported by a few
different things and you'll see this in
the scenario that we work through. So we
have some instructions. So this gives
our agent like some guard rails to work
within. Also gives it a bit of a
persona. So you know we tell it what
kind of language to use.
Then we have our models. So we have
large language models. Um we're also
using uh text embeddings today which is
for our semantic search. We have other
different models available in AI foundry
as well.
Then we'll usually have a data source if we want to give the agent an opportunity to give us information that is specific to our needs, so there's pertinent information for the insights that we're trying to gain from the agent,
and then we have a whole section of
different large language model tools. So
today we are going to be using MCP
server. So that's one tool and we're
also going to be using the code
interpreter. So that is a separate tool
and then we have a whole other bunch of
different tools. Some of these are you
know just public information like doing
a Bing search. You might have specific
documents in SharePoint that you want
the agent to have access to. So we have
other different um tools that we can
integrate into our service and this can
all be handled within Foundry.
And so what's happening here is we
create a thread um in this case you know
it's tell me the total sales by store
and then the SDK runs the agent, which for us is calling an MCP server, which then queries our Postgres database and returns our data to the agent to present to us as the response. So it's just
giving me some uh basic sales
information there.
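The MCP tool's job in that exchange is just to run the SQL and hand the rows back to the agent. Here's a self-contained stand-in, using sqlite3 instead of the workshop's PostgreSQL so it runs anywhere; table, column, and store names are made up.

```python
# Stand-in for the MCP tool that queries the sales database.
# The workshop uses PostgreSQL; sqlite3 is used here only so the sketch
# is self-contained. Table and column names are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    ("Seattle", 500.0), ("Seattle", 700.0), ("Spokane", 800.0),
])

def total_sales_by_store():
    """The kind of query an MCP tool runs when the agent asks for totals."""
    cur = conn.execute(
        "SELECT store, SUM(amount) FROM sales GROUP BY store ORDER BY store")
    return cur.fetchall()

rows = total_sales_by_store()
```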
Then if we go a step further, maybe I
want to see this information as a pie
chart.
So this time we're going to call the
code interpreter tool which is actually
going to use the data that we've already
collected via MCP and then it's going to
use uh Python in this case to go ahead
and create a pie chart for me and send
that back as the response.
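The Python that the code interpreter generates for the pie chart essentially reduces those totals to a share per store before plotting. A tiny illustration of that aggregation step, with made-up figures rather than the workshop's data:

```python
# Before drawing the pie chart, the generated Python boils the MCP result
# down to a percentage share per store. Figures here are illustrative.

totals = {"Seattle": 1200.0, "Spokane": 800.0}
grand_total = sum(totals.values())
shares = {store: round(100 * amount / grand_total, 1)
          for store, amount in totals.items()}
```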
So you can see the context has been
carried forward from the first question
to the second question and that's what
we'll demonstrate today.
And then a quick note on observability.
So when it comes to running production
services, we need to be able to make
sure we can monitor them just like any
other application. Um, we want to make sure our AI systems are reliable and safe and high quality, uh, especially if
we're working in the enterprise
and AI foundry gives us some good tools
to achieve this with monitoring and
tracing and if you're already used to
using things in Azure then all of this
will look very familiar to you because
it uses the same uh underlying
architecture.
Okay, so we're going to get straight
into the demo now because uh
demonstrating AI services can be a
little bit challenging and also just to
uh take out some of the like build time
and stuff. I've actually recorded this
as a video, but I will talk you through
all of the steps that I've taken.
So this is a workshop that anyone can
follow. Um we'll have a quick look at
our scenario.
So we are a sales manager at Zava which
is a retail DIY company
and we have stores in Washington state
so in the United States. We specialize
in outdoor equipment, home improvement,
etc. And we as a manager want to analyze
sales data. We want to find trends. We
want to understand customer preferences
and make informed decisions.
So let's have a look at what we'll
learn. We're going to be using the Azure
AI Foundry agent service.
We're going to use MCP which we just had
a little look at.
We will be using a Postgres, or PostgreSQL, database. Um, this will help us store our data but also help with our semantic search. It also gives us row-level security so we can protect our data.
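For reference, PostgreSQL row-level security is enabled per table and then scoped with a policy. The table, policy, and setting names below are hypothetical, not the workshop's actual schema:

```sql
-- Hypothetical sketch of PostgreSQL row-level security; the workshop's
-- real table and setting names will differ.
ALTER TABLE retail.sales ENABLE ROW LEVEL SECURITY;

CREATE POLICY store_managers_see_own_store ON retail.sales
    FOR SELECT
    USING (store_id = current_setting('app.store_id')::int);
```

With a policy like this in place, the same query returns only the rows the current session is allowed to see, so the agent can't leak another store's data.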
And then Azure AI foundry which is the
enterprise ready service that kind of
encapsulates all of this. Um this lets
us manage our models and do cool things
like the tracing security governance and
other observability tools.
And then just to get a bit of a visual
you can see the graphic at the bottom of
the page. So AI foundry wraps around all
of our models in this case and our agent
service. And then we can actually access
different Azure AI services in here as
well. So we have um AI search, the AI
services, Azure machine learning and
content safety. So these might be other
services that you're familiar with. And
then we run our security and governance
across everything.
And because we're in Azure, we can go
all the way from the cloud, you know,
down to the edge with things like Azure
Arc and Foundry Local. And then we can
hook those in with GitHub, Copilot
Studio, and the Foundry SDK.
So a very comprehensive service using
Foundry.
All right, let's jump into the
instructions.
So I am following the self-guided learner's instructions. So these are the same instructions that you would follow.
And there's a couple of prerequisites
here. So you do need an Azure subscription, which you can get for free, and a GitHub account, which also doesn't cost anything.
And then there's another prerequisite
that I'll cover once we've deployed a
couple of resources. So here is my Azure
subscription
and my GitHub account.
And then you have a couple of choices. So you can use Python or C#. And I'll be using Python today.
So the easiest way to get going is by
using a GitHub code space. Um this is
like just a little hosted environment
for VS code and it has all the required
extensions. You can just click like I do
a single click and build a code space.
Um, but you can run this locally using VS Code with a dev container and Docker, if that's, uh, your preferred flavor.
So I have already forked this repo into
my own GitHub. So I will open a
workspace from GitHub.
So in here you have all the assets you
need to complete the full scenario and
there's also a very comprehensive readme
that has some further details as well.
Um so when you scroll down you'll see
those
and I'm just going to open a new code
space from here by going to the little
plus sign in the top right and selecting
new code space.
And then I'll pick the right repo. I'll
leave all the defaults. All the settings
are actually contained within the repo.
So I know that it's good to go. And I
will click create code space.
So that's, you know, I didn't do any
other setup. That's literally how easy
it is. Um, and then this code space will
just load up for me in the browser.
And we can see that there's stuff
happening with a little progress note up
here. And then we can see it's building
in the bottom right corner. So while
that's happening, let's have a look at
what we're going to do next.
So once my codespace is ready, it's very important to wait for that to complete.
Um then I will do some just some
authentication. So I'm going to use uh
Dev Tunnel
basically because I'm using a local
instance of my MCP server and the
database in this case. I can use Dev
Tunnel to connect it to my Azure
resources.
So I will authenticate with Dev Tunnel
and then I'll authenticate to my Azure
environment and then we can deploy some
resources.
So I'll just switch back to the codespace. Um, the red border is part of the settings in the codespace itself.
So you'll notice it looks just like VS
Code and in the infra folder we have a
little deployment script. So this is a
quick way for us to deploy our Azure
resources
and once the resources are deployed it
will pop some variables into this
environment file and those variables
will just carry on throughout the
workshop. So we don't need to remember
all of our uh Azure service information.
So just in the terminal here, I'll
complete those two authentications. So
we'll start with the dev tunnel one
and then I just simply follow the
instructions that it gives me.
So, grab this number and pop it into
this website.
So, seamless authentication. This helps
me connect my local resources that are
in the code space itself to my Azure
resources.
And then I'll do the same for Azure. So,
a login. This helps me connect to Azure
so I can do these deployments.
And again, same instructions. Grab the
number, pop it in the website,
and then once that's done, we will come
back to our script
and get that running.
So it's in the infra folder. So I'll
just change directory
and then run the deploy.sh script.
Cool. That's getting started.
So this script does give us some output
as well. We'll see as it's running uh
some of the steps that it's taking.
And thankfully they are successful.
So this is handling the deployment of
the different Azure resources which
we'll have a look at after this is
completed.
And you'll see it's also doing things like adding the correct role assignments.
>> Uh Daniela, sorry to interrupt. Um uh we
were wondering in the audience if it
would be possible to zoom in on the
demo. Is that a possibility?
>> Um let me see.
Sorry, hang on. I'll need to get back to
where I was and then I will zoom in.
[clears throat]
>> Thank you so much. We appreciate that.
>> No worries.
Okay.
Is that better?
Thank you. so much. It looks so much
better. Thank you.
>> Okay, no worries.
Um, all right. So, we are deploying resources, adding our role assignments so that we have the right access to the resources,
and then that has completed
successfully.
So, we'll just skip back to Azure and
have a look at what we've got.
Hopefully, it'll be in the center of the
screen
and it's not.
Okay. So, in my resource group, I've got
my Azure AI foundry service. So, it's
like the top level one. And then I have
an AI foundry project. I have my
application insights, which is for our
monitoring. And then a log analytics
workspace, which holds the data for our
monitoring.
Okay, so let's take a closer look at
Azure AI Foundry.
So this is the landing page for a
Foundry project, but we're going to go
and check that other prerequisite
um that I spoke about before.
So in the management portal (Foundry is all managed outside of the Azure portal), we'll have a look at our quota. And
today we're going to be using global
standard models. So
each model has a token per minute or TPM
quota assigned to it. And we're going to
be using GPT-4o mini. So this model is cheaper, faster, and lighter on resources than GPT-4o. So it's a good choice for our scenario.
And you can see here I've allocated
120,000 of my 200,000 tokens. So if
you're doing this exercise yourself and
your deployment doesn't complete
successfully, um just check that you
have enough quota in here
to allocate 120,000.
And then we'll just have a quick look at
the landing page for the project itself.
So we have some project details on the
top and then some good links to help you
get started a bit further down the page.
And then in our model catalog, you'll
see that there are tons of different
models available to help our agents. So,
you know, we're using a kind of a mini
model today because it's fast. If you want to do something really onerous, you might choose a bigger model.
Um and then you can see down the side
here we have other sections for our
agents templates doing fine-tuning on
models, our tracing and monitoring
and then a whole section dedicated to
governance and security, and we can also access our Azure OpenAI and other services in here as well.
So this is where we can see the models
that we've allocated to this project in
particular. So this was done via that
deployment script.
We have our GPT-4o mini and our text embeddings, with 120,000 TPM added to each.
And then let's have a look at how this
works. So we have our Foundry agent service, our MCP server, and our Postgres database. Um, and so we're going to run our MCP server and Postgres locally.
And then with the code itself, we have
our web chat app. We have an agent
manager and an MCP server. And you can
see that we've separated some of these concerns. So we have an app.py file, which we'll have a look at in a moment, and then a sales_analysis.py file,
and then we have our Azure resources as
well.
And then just to recap on the benefits
of MCP. So it's interoperable.
Um it has security hooks. It's reusable
and it introduces a simplicity that I
honestly think that um working with you
know agent to agent communications
didn't have before. So makes things very
fast for us.
And then just a little bit about how the
dev tunnel works. So because we're
running things locally in our code space
um, we are going to authenticate via dev tunnel, which then connects to a specific public endpoint in Azure, so that our local resources can talk to our Azure resources, uh, seamlessly, and we don't have to keep logging in every time.
Okay. So next we're just going to have a
little look at some points about the
Azure AI Foundry Agent Service. It's a
bit of a mouthful. So the agent service
has several benefits. Um I hope that
maybe you've picked up a few as we've
looked at the different screens already,
but it does have rapid deployment. Um
it's scalable. It supports custom
integrations.
And today we're going to look at just
the local MCP server integration, but it
also includes a lot of the other
services that you can integrate. So things like Fabric, SharePoint, Azure Storage,
and it has conversation state
management. So it can carry the context
through from conversation to
conversation and agent to agent.
All right, let's get into the labs.
So I will make this a little bit bigger
for you.
Okay. So, the first thing we're going to
do is open the specific workspace that
is for the Python version of these labs.
So, I'm going to do that by going uh
open file, open workspace from file and
I'll get into the right directory in the
VS code folder and open up the Python
workspace. And then there's another workspace for the C# version.
And so what we were just in was a
specific workspace for that entire
workshop. And then this workspace is
just for the agent, the MCP server, and
the database.
Just move this over so you can see we've
got our environment file with all of our
variables in here. and then all of the
other information that we need to run
through the scenario using Python.
And by default, it also opens up those
important files. So we have our app.py file. So this is the main entry point for the scenario and has our code for the application and the agent. And then the other Python file is our sales_analysis.py.
So this has more information about our
database, our MCP server and the schema.
So the schema helps the agent, um, do consistent data retrieval so that we get consistent output.
Then we have our two instructions text
files. So these files are how we get our
agent to behave and also give it some
context and some rules um that we
explicitly write. So you can see in here
we have the agent's role. So it's a
sales analysis agent
and we are telling it to only use
verified output. Um don't just assume
anything. Don't go and make anything up.
Uh because you've probably found
yourself that agents especially just
plain large language models can give you
interesting output and inconsistent
output. So, we don't want it to make
anything up. Um, it also has some
instructions about how to interact with
the database and we explicitly say like
if you're unsure about anything, don't
just make it up. Um, and then we have
some query constraints.
Then the code interpreter has
instructions for creating different
types of charts. So we have pie bar or
line charts and then some formatting
instructions.
If the user starts the chat and is unsure, we'll offer some suggestions so that they can get the conversation going.
And then we're also telling the agent to
stay on track um and just redirect the
user if they get hostile or upset. So
we're giving the agent a good persona to
work with here.
And then at the bottom there are some
final reminders. So uh it also has one
about the date. So respecting the FY
starting January 1.
But I happen to know that actually
Zava's financial year should start on
July 1, not January 1. So let's fix
that.
And I will scroll back up to the first
block of instructions. And then I'll add
a line here to specify
when the financial year should start and
also which months are in which quarter.
And then I'll also amend the reminder
down here to change from January 1 to
July 1. And I will save that.
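For example, the added instruction might read something like the lines below; the exact wording is up to you, and these lines are illustrative rather than the workshop's text:

```text
Financial year: Zava's financial year starts on July 1, not January 1.
Quarters: Q1 = Jul-Sep, Q2 = Oct-Dec, Q3 = Jan-Mar, Q4 = Apr-Jun.
```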
Cool.
Okay. So then in the app.py file, you
can see at the top there the lines for
the instructions files and the code
interpreter one is already active. And
then on lines 68 and 69 there is some
commented code. Um so I'll just
uncomment that.
So that now these two toolsets, the code interpreter and the MCP server, are active in the agent as well.
And I'll save that
and then we're ready to give it a test.
So for testing I'm just going to use VS
Code debug option and from the list I'll
choose the agent and MCP choice
and I'll hit debug and let the
application start up.
Okay, so I've just opened this in the
code space browser. So just locally and
you can see my agent is ready and it
said I can start chatting.
So I'm a Zava manager and I want to see
the sales by store for the financial
year.
So we can see a few little actions
happening down in the terminal
and it's responded with the data which
is just straight out of the database.
And now I'd like to see that as a pie
chart.
So now the code interpreter is using the
data from the previous question and
Python to create a pie chart for me.
Cool. So now I have
a pie chart of the financial year sales
by store.
Cool. So now I'm going to ask it to get the data and create the chart in just a single request. So: which regions have sales above or below the average? And I want a bar chart with the deviation from the average.
So, first it needs to know what the
average is. Um, and we can see that the
average sales are around 872,000.
And then it will give me
the bar chart with the deviation that is
above or below that for each store.
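That two-step answer (find the average, then each store's deviation from it) is simple to express in code. The store names and figures below are invented, chosen so the average lands on the demo's roughly 872,000:

```python
# The agent's two steps: compute the average across stores, then each
# store's deviation from it. Names and figures here are illustrative.

sales = {"Redmond": 900_000.0, "Bellevue": 1_000_000.0, "Tacoma": 716_000.0}
average = sum(sales.values()) / len(sales)
deviation = {store: amount - average for store, amount in sales.items()}
```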
So, that's returning good sales data.
So, how about I ask it a product
question? What 18 amp circuit breakers
do we sell?
Okay, so it can't find any exact
matches. Um, but we can improve this
response by enabling semantic search. So
I will stop the debug session and then
we'll make a few changes in the code. So
we can do that.
So in my sales_analysis.py file, um, we can see our MCP and database information. So, here around line 170,
we can see semantic search for products.
So, I'm going to uncomment this one and
make it active.
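Semantic search works by comparing embedding vectors rather than matching text, which is why a vague query like "18 amp circuit breakers" can surface related products with no exact hit. Here's a toy sketch with 3-dimensional stand-in vectors; the real text-embedding model produces much larger ones, and the products are invented.

```python
# Toy semantic search: rank products by cosine similarity between the
# query embedding and each product embedding. Vectors are hand-made
# stand-ins for the text-embedding model's output.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

products = {
    "20A circuit breaker": [0.9, 0.1, 0.0],
    "15A circuit breaker": [0.8, 0.2, 0.0],
    "garden hose":         [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.0]  # pretend embedding of "18 amp circuit breakers"

ranked = sorted(products, key=lambda p: cosine(query, products[p]),
                reverse=True)
```

The nearest neighbors are the "other suggestions" the agent offers when there's no exact match.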
And then the description says, describe
the product you're looking for and some
examples. So, let's have a look at our
instructions
for MCP with semantic search.
Sorry.
Um,
sorry, I tried to zoom it in for you
again and now I have lost my spot in the
video.
Okay, we're close.
Okay.
All right. So, unfortunately, I'm not
going to try and zoom it again just in
case that happens. So, I do apologize.
You might be able to zoom on your own screen, or I'm hoping that you might be able to just read this, but I'll give you lots of details so that hopefully it's clear.
All right. So, where we're up to is we
are going to uh adjust our
application to include semantic search.
Okay. So in our instructions,
we do have specific instructions in here
for product resolution. So we just asked
a product related question.
Um you can also see in uh the top block
here, we have instructions to use a
friendly tone with a smiley face.
Um and that we should use semantic
search for products when the product
description is vague, generic, or
abbreviated.
So let's just make sure that these
instructions are now active in our
app.py file.
So here at line 39, we have our two
instructions files. So we used our code
interpreter already for charts. And I
will uncomment this one to make our
semantic search active as well.
All right, I'll save that and I'll hit
debug so we can test it again.
Just wait for that to start up. All
white text is not too bad of a sign.
And then I'll just run it again locally
here in the code space.
And I'll ask the same question. So what
18 amp circuit breakers do we sell?
and it will have a little think.
Cool. So now, even though it doesn't
have an exact match, um, it has
understood my vague question about 18
amp circuit breakers and it's giving me
some other suggestions that I might find
interesting.
And then finally, I'm going to ask a big question: I would like an executive report for these circuit breakers.
So again, it's carrying the context
forward from the question that I just
asked even though it didn't have an
exact match in that last question.
And then it has created a beautiful
executive report.
So, it's given me a report overview,
some sales performance data,
some key insights into the data, and a
list of recommendations to improve the
sales.
So, now we've got an agent, we've got
our code interpreter tool working, we've
got our MCP working, and I will stop
debugging this session, and I'll just do
a little recap of what we did in this
workspace.
So we have our main application file in
app.py
and in here we made everything active.
Um we have our two different sets of
instructions for our two different tools
and then we have our MCP and database
file. And so we made a few little
changes to our instructions and then we
just enabled these tool sets in the
app.py file.
And of course, all of this is available
in the GitHub repo. So you can have a
closer look at all of this code.
And then a little recap. So what we looked at today was the AI Foundry Agent Service (the mouthful), our local MCP server, our Postgres database, and AI Foundry, which is the top level. And then, speaking of AI Foundry, let's go and have a look at those observability features.
So here I am in the Foundry portal
and we have
a special section just for tracing.
And so here you can see all the calls
that I've made uh and the different
requests that go with them. So this is
really good for troubleshooting. Uh you
can get into the details of what's
happening behind the scenes. And if I
just expand one of these,
you can see our POST requests in here.
You can also see how long calls take and
how many tokens were used. So if your
agent is taking a long time to respond,
you can come and check what's happening
in here.
And then I will switch to the monitoring
page.
And this is where we can see things from
more of an application perspective. So
this has our application insights which
was one of our Azure resources that we
deployed. That's what's doing all of the
heavy lifting for this monitoring data.
And of course our data is stored in our
log analytics workspace.
So here I have things like my token
usage, my inference calls, um, and
duration all on a graph. And, you know,
I can also change the time frame of the
monitoring that's in here. So maybe I
want to see the last seven days. I can
do that as well.
And then we have one more tab in here,
which is the resource usage.
So we can see the information for things
like input versus output uh number of
requests and so on again in a very nice
uh actionable insights type of graph.
And then if I had some other models, I could look at all of these by model, but we just have our GPT-4o mini in here.
And then I might want to investigate
something further. So I can actually do
some marrying up between the monitoring
and the tracing. So I do have some peaks
on my graph. You know, maybe they're
higher than I expected.
Um, and so I can flick back to tracing
and then have a look at things in a
little bit more detail.
So if I expand this out, I can view the
details of each of these,
see what information was used in each of
the requests,
and also how it relates specifically to
the thread that I started, so the
questions that I asked. Um,
and then we can see the actual API call
that was made.
So really good information in here for
troubleshooting or understanding how an
agent is working.
And I can quickly, uh, navigate through each request by just using the next trace button on the top right.
And then just a few more details in
here.
So that's just a very kind of high level
and uh basic view of the monitoring and
tracing. Um but it's something that
means that you don't have to build that
yourself. It just relies on the foundry
agent service and uh the Azure
application insights.
So in AI Foundry, we added our models to
our project
and we also had a look at the quota and
we allocated 120,000 TPM to each one. So we used our GPT-4o mini and our text embeddings. And in here we can also see
what other quota we have allocated for
all the other models.
And so that is it for today's demo. Um,
thank you so much for being here and I
do apologize if you weren't able to see
everything clearly, but I do suggest
that you go and have a go at this
workshop yourself. Um, there's even more
information in here about all of the
data that's in the database. So, you can
create your own questions. You might
want to change things up a bit. Um, as
you can see, it doesn't take very long
and doesn't take a lot of effort. So
everything's kind of scripted for you
and it's a really clear workshop with
clear instructions to follow. So I hope
you learned something um even if it was
something very little that you can go
away and kind of broaden your knowledge
and discuss with other people and other
teammates and have a go at building
something yourself.
So that is it for our demo. We would love to get your feedback for today's session. Um, the Reactor team are
doing a wonderful job and there are also
other sessions. So I'll leave this QR
code up for just a minute uh so you can
get to the form to give feedback and
then on the next slide we will look at
the code for uh accessing the other
sessions that are already happening and
about to happen.
So on the Microsoft AI genius series
page um you can see uh other esteemed
colleagues and MVPs presenting other
sessions. So really cool stuff out there
at the moment. Got another colleague
coming up with another session for part
of series 2. So that's exciting.
And then finally, who doesn't love a
badge? So, we have an exclusive AI
Genius certified badge. Um, if you
follow this QR code, you can get the
badge and then you can post it to your
LinkedIn or other socials or you could
print it out and pin it on your shirt
and wear it. Um, I'm really glad that
you were able to join us today and I
hope you enjoy this session and future
sessions.
Thank you.
Welcome to Season 2 of Microsoft AI Genius! In this kickoff episode, Daniella Mathews, Cloud Solution Architect at Microsoft, walks you through how to supercharge your AI agents using the Model Context Protocol (MCP) and the Azure AI Foundry Agent Service. From understanding what makes an AI agent truly “agentic” to deploying a fully functional sales analysis agent using MCP, Daniella delivers a hands-on, developer-friendly walkthrough packed with insights, demos, and practical tips.

⏱️ Chapters:
00:00 - Welcome and intro by Daniella Mathews
03:01 - What is an AI agent?
04:36 - Why MCP matters for scalable agent integration
06:18 - MCP explained: “USB-C for AI”
07:00 - AI Foundry Agent Service overview
08:18 - Demo: Agent + MCP + PostgreSQL in action
10:01 - Scenario: Sales manager at Zava
13:10 - Setting up your dev environment with GitHub Codespaces
17:30 - Deploying Azure resources with scripts
20:10 - Exploring Azure AI Foundry project and quotas
23:08 - Architecture: Agent, MCP server, and database
25:00 - Benefits of MCP and dev tunnel setup
28:31 - Writing agent instructions and enabling tools
30:28 - Live test: Sales data queries and chart generation
33:15 - Enabling semantic search for product queries
37:15 - Executive report generation with context carryover
39:08 - Observability: Tracing and monitoring in Foundry
43:38 - Recap and how to try the workshop yourself
45:26 - Feedback, upcoming sessions, and AI Genius badge

📘 What You’ll Learn:
• What makes AI agents different from traditional LLMs
• How to use Model Context Protocol (MCP) to simplify agent integrations
• How to deploy and test agents using Azure AI Foundry Agent Service
• How to connect agents to PostgreSQL and use semantic search
• How to generate visualizations with the code interpreter tool
• How to monitor and trace agent behavior using Azure Application Insights
• Best practices for writing agent instructions and managing context