Hello everyone. This is a bonus video in the Microsoft Agent Framework series. If you haven't seen the other videos, I would highly recommend going through them to get a deeper understanding of the Microsoft Agent Framework.
In this video, we are going to talk about how to use A2A, which is agent-to-agent communication, and MCP, which is the Model Context Protocol, along with the Microsoft Agent Framework. If you have ever wondered how agents talk to each other and how they connect to different tools or external systems, this video will make everything clear for you. We'll cover what MCP is, what A2A is, why they are so important, and how authentication and security are handled, and then we'll discuss two different scenarios where you can use A2A and MCP together: one deployed locally and another in Azure AI Foundry.
Let's start with MCP, the Model Context Protocol. MCP is a standard that allows your agent to connect to and use external tools or data sources in a consistent way. Instead of writing a custom integration for every new API or system, you define a tool as an MCP server, and any agent that understands MCP can use it.
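To make this concrete, here is a rough sketch in plain Python of the kind of capability description an MCP server advertises. It is illustrative only: a real MCP server returns this shape over the MCP JSON-RPC handshake rather than as a local dictionary, and the tool name here is hypothetical.

```python
# Illustrative sketch of an MCP-style tool description.
# Field names mirror the shape of an MCP "tools/list" response
# (name, description, inputSchema); this is not a real server,
# just the idea that a tool's capabilities are described as data.
weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def list_tools():
    """What an MCP client would receive when it asks the server
    which tools it offers."""
    return {"tools": [weather_tool]}
```

Because the description is just structured data, any MCP-aware agent can discover the tool and learn how to call it without custom integration code.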
Think of MCP as a common language between your agent and the external world. It defines how a tool describes its capabilities, how an agent connects to it, and how they exchange information. For example, your tool could provide weather data, perform file-management tasks, or connect to the Microsoft Learn documentation. As long as it supports MCP, your agent can talk to it directly: no custom logic, no extra coding for each connection. This makes it easy to build modular, reusable agents that can use different tools just by plugging in new MCP servers. Now let's talk about A2A, which stands for agent-to-agent communication.
While MCP connects agents to external tools, A2A connects agents to each other. It defines how two or more agents exchange tasks and messages, delegate work, and coordinate their responses. Imagine you have one agent that's good at research, one that's good at execution, and another that's good at coordination. The coordinator receives a request, passes it to the research agent over A2A, gets the result, and then sends it to the executor agent to complete the job. That's exactly what A2A enables: collaboration between multiple agents that each have their own skills and roles. And the best part is that the Microsoft Agent Framework supports A2A natively, so you don't have to write custom communication logic.

Now, why are MCP and A2A so important? Because together they make your AI system more modular, intelligent, and scalable. MCP lets your agents use real-world data and tools; it gives them the ability to take action. A2A lets them work together to divide a complex problem into smaller tasks. With both combined, you can design proper multi-agent workflows where different agents handle specific jobs, share context, and finally produce a combined output. This is exactly how enterprise-grade AI systems are built: smaller specialized agents instead of one big monolithic model doing everything.
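The coordinator/research/executor hand-off can be sketched in plain Python. This is a toy model of the delegation pattern, not the real A2A protocol or the Microsoft Agent Framework's message types; the agent names are the ones used in this video.

```python
from dataclasses import dataclass

@dataclass
class A2AMessage:
    # Minimal stand-in for an A2A task message: who sends it,
    # who should handle it, and the task payload.
    sender: str
    recipient: str
    task: str

def coordinate(request: str) -> list[A2AMessage]:
    """Sketch of the delegation flow: the coordinator first asks the
    research agent to gather information, then asks the executor
    agent to act on the result."""
    research = A2AMessage("coordinator", "research_agent",
                          f"research: {request}")
    execute = A2AMessage("coordinator", "executor_agent",
                         f"act on research result for: {request}")
    return [research, execute]
```

The point is simply that each message carries enough routing information for the recipient to know what to do, which is what lets agents with different skills collaborate.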
Now, just a quick note on security and authentication. When you're using the Microsoft Agent Framework locally, you can use Entra ID authentication or API keys for the MCP tools, and for A2A communication between the agents, the framework supports token-based authentication and secure local communication. But when you move to the Azure AI Foundry Agent Service, the good news is that all of this is handled for you. It's a fully managed service: when you attach a tool to an Azure AI Foundry agent, the authentication, identity, and encryption are automatically managed by Azure, and A2A communication between agents in Foundry is secure by default. You don't have to set up certificates or tokens manually; it's all handled by the platform.
Now that you understand the concepts, let's talk about the two different ways you can use MCP and A2A together with the Microsoft Agent Framework. The first is to build everything locally: all the agents, tools, and communication run within your application itself. The second is to use the Azure AI Foundry Agent Service to host your agents and tools in the cloud. Both approaches use A2A and MCP, but in different ways. Let's look at them one by one.

In the first scenario, we build everything locally. We still use the Microsoft Agent Framework, but all the agents are custom Python scripts running on your own system or application. You directly use an Azure OpenAI model from Azure AI Foundry for reasoning and language understanding. You create multiple local agents, for example a coordinator agent, a research agent, and an executor agent. Then you attach local MCP tools, like a weather server or a file-operations server; these tools expose simple MCP endpoints, such as getting the weather or writing to a file. The agents talk to each other using A2A messages: the coordinator sends a task to the research agent through A2A, the research agent fetches information from the weather MCP server and sends the data back to the coordinator, who delegates it to the executor agent, which uses the file MCP server to save it locally. Everything, the agents, the MCP tools, and the A2A communication, happens locally on your machine. This setup is best for scenarios where you want to manage everything yourself and don't want to pay for the Azure AI Foundry Agent Service.
So let's check this in the lab. I'm in the Azure portal now, and for this demo I have created a new Azure AI Foundry resource, with a project created along with it. When you open the Foundry portal, which I have already opened here, you can see the project I created. I haven't done anything special; under Models I have just deployed one GPT-4o model. If you go to Agents, there is one default agent that gets created as soon as you click on Agents, but for scenario one we are not going to use these agents; we'll just use the Azure OpenAI endpoint and API key directly. If you look at the agents, there really is only that one default.

Okay, now let's come to VS Code. I have created the code for the two different scenarios, which you can see clearly, and I'll upload this code to GitHub. For scenario one, if I click on MCP servers, there are two MCP servers defined. The first is the weather server. Let's scroll up: it's using the Open-Meteo API. This is the website, open-meteo.com; it provides a free weather API, so I'm using it here, and you can use this code directly. I'm defining all the functions: how to make the API request, how to geocode a city to get its location, and how the API is called. Then I have defined the MCP configuration and the MCP tool: how the tool's information is exposed over MCP and how you get a response from it. If you scroll to the end, you'll see it uses different icons and so on, and this is how you get the response.
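For reference, here is a minimal sketch of how such a weather server might build its Open-Meteo requests. The geocoding and forecast endpoints shown are Open-Meteo's public ones, but double-check the current API docs; no network call is made in this sketch, it only constructs the URLs.

```python
from urllib.parse import urlencode

# Open-Meteo's public endpoints (verify against their documentation).
GEOCODE_BASE = "https://geocoding-api.open-meteo.com/v1/search"
FORECAST_BASE = "https://api.open-meteo.com/v1/forecast"

def geocode_url(city: str) -> str:
    # Step 1: turn a city name into latitude/longitude candidates.
    return f"{GEOCODE_BASE}?{urlencode({'name': city, 'count': 1})}"

def forecast_url(lat: float, lon: float) -> str:
    # Step 2: fetch current weather for those coordinates.
    params = {"latitude": lat, "longitude": lon, "current_weather": "true"}
    return f"{FORECAST_BASE}?{urlencode(params)}"
```

The server's MCP tool would fetch these URLs (for example with `urllib.request` or `httpx`) and return the parsed JSON to the calling agent.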
So that's the weather MCP server, which will run locally. The other one is the file-operations server. Here it's just file-system operations: getting the path of a file, reading a file, writing a file, deleting files, and listing all files. Because we want to expose these through an MCP tool, they're defined as an MCP server, along with how you get a response. This is the standardized way it's defined: if any MCP client connects to it, it can perform all these operations and get responses in exactly the same way. And if there are any changes in the underlying API, for example if the weather API changes something, you only have to update the weather MCP server; you don't have to change all the applications that use it. Say an API version 2 comes out: you make the change in the MCP server, none of your applications are impacted, and everything else remains the same. That's the benefit.
Now, I have defined three agents. The first is the research agent. If you scroll up, you can see it's using the weather MCP server and the Azure OpenAI endpoint, and it creates the agent locally. The system message tells it to use the MCP tool to get weather information, and defines how it has to process each request. An MCP client is defined here for it, the MCP tool is integrated with it, and all the details of how it communicates with the other agents are defined as well.

Then there's the coordinator agent, which, as the name suggests, coordinates the tasks. Based on its system instructions, it uses the research agent for gathering weather information and the executor agent for file operations, which I'll show. The system message defines how it should use them, how it should plan a workflow, and how it should process and forward requests using the Microsoft Agent Framework. (I've already covered workflows in the Microsoft Agent Framework in another video; you can check that out.) It also specifies which parameters to use for the weather request and which for the executor workflow, and it sends the requests to the other agents.

The third one is the executor agent. It handles process-execution requests, covering the file operations. Using the Azure OpenAI model, it creates the agent locally, defines which MCP tools it can invoke, and, acting as an MCP client, connects to the file-operations MCP server to execute the operations. So we have three agents, and finally there is one more Python file that ties everything together. It imports the different agents and initializes them as soon as you run the file, then defines the multi-agent workflow: how the different agents are invoked, how you get a response, and what text you'll see on the terminal. The easiest way to understand it is to demonstrate it.
I will open the terminal. I've created a virtual environment (venv), and it's activated automatically. So what you have to do is cd into the scenario-one folder and then run the scenario-one Python file. As soon as you run it, it creates the local agents. It created three local agents, coordinator, executor, and research, and these agents are automatically attached to their MCP clients. The MCP servers are also running locally: the two servers are up on ports 8001 and 8002, and the three agents are created. Now it gives an example prompt; you can type anything, but let's ask it to provide the weather for Sydney and save it to a file. Now let's check the workflow plan.
First, the coordinator agent defines the workflow plan: research the weather information through the research agent, then save the data to a file through the executor agent. This decision is taken by the coordinator agent. The coordinator first sends an A2A message to the research agent: here is Sydney, Australia, go find the weather information. Because the research agent uses the MCP tool, which in the back end calls the Open-Meteo API, it retrieves real information and replies that it has it. Then the coordinator sends a message to the executor agent: save the data to a file, this is the write-file operation of the MCP tool you have to use, and here are the file name and all the other details. The executor agent takes all the information it got from the research agent (it's not well formatted yet, but it will be), saves it to the file with the write-file operation, and reports back that it's done. Then the coordinator says the workflow is completed, all steps are done, and this is the final output: a well-structured view of the temperature, humidity, and everything else it got from the weather API. It has saved that well-structured data to the weather report.txt file, which was the task. So, in summary, it retrieved the weather data and saved it to a file.
So that's the first scenario: you used just the Azure OpenAI model, created the agents locally, and ran the MCP servers locally. You could also run the MCP servers publicly, in an app service or a container service. And nowadays a lot of organizations and third-party vendors host their own MCP servers; GitHub, for example, has its own MCP server, where you just provide credentials, authenticate, and use the remote MCP server directly. Or you can host them locally; it's completely your call if you don't want to connect over the public internet. So here, the agents are created locally, the MCP servers run locally, and the MCP connections and the A2A protocol all happen locally, which means you have no dependency on the Azure AI Foundry Agent Service, which charges based on usage. That's one scenario.

Now let's talk about the second scenario. Here we use the Azure AI Foundry Agent Service along with the Microsoft Agent Framework. In this case, your agents are hosted in the cloud using Azure AI Foundry. You still define your agent logic using the Microsoft Agent Framework, but now deployment, networking, and security are all handled by Azure. You can attach MCP tools directly inside Foundry, for example the Microsoft Learn MCP server, a remote MCP server that lets your agent search and read the Microsoft documentation. When you attach a tool like this, Azure automatically manages the credentials, tokens, and permissions. The agents still communicate with each other using A2A, but here is the difference: inside Azure AI Foundry, the A2A capabilities are limited to the built-in workflows, which are actually very new, and you can't design complex A2A flows directly inside Foundry. That's where the Microsoft Agent Framework comes in. Using MAF, you can define your own A2A workflows, control how your agents exchange information, and design the orchestration logic. Azure AI Foundry then runs these MAF-defined agents in a managed, secure, and scalable environment. So this second scenario gives you the best of both worlds: the power of the Microsoft Agent Framework for defining custom logic and A2A orchestration, and the reliability of Azure AI Foundry for managing and securing everything.

Let's quickly check this in the lab. Let's go to scenario two; there is only a single file here. In this file, we import the different packages and connect directly to the Azure AI project, that is, the Azure AI Foundry project, as you can see, because we are using the MCP tool. When you attach an MCP tool, you have to submit tool approvals and then approve them; this is required by the Azure AI Agent Service. However, in the portal there is currently no option to attach an MCP server: you can attach other tools, but not an MCP server or MCP client right now. So the only way is either .NET or Python, and I'm using the Python way. Once you attach it, you have to approve it. So that's the flow.
Then there's the configuration, the project endpoint and everything it needs to use the Azure AI Foundry Agent Service (I've created a lot of videos covering this). And then we are using the Microsoft Learn MCP server, which you can see here. If you go to the Microsoft documentation, this is the Microsoft Learn MCP server. It's a remote MCP server, which means it's already hosted by Microsoft; you just point at it using its link or the URL.
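Wiring up a remote MCP server like this usually comes down to a label, its URL, and an approval policy. The field names below are generic and illustrative, not the Azure SDK's actual classes, and the endpoint shown is the one commonly documented for the Microsoft Learn MCP server; verify it against the official page before use.

```python
# Illustrative configuration for referencing a remote MCP server.
# Field names are generic; the azure-ai SDKs define their own tool
# types. The Learn MCP endpoint is commonly documented as
# https://learn.microsoft.com/api/mcp -- verify before relying on it.
learn_mcp = {
    "server_label": "microsoft_learn",
    "server_url": "https://learn.microsoft.com/api/mcp",
    "require_approval": "always",  # scenario 2 submits tool approvals
}

def tool_definition(cfg: dict) -> dict:
    """Shape a config into a generic MCP tool-definition entry, the
    kind of structure an agent service expects when a tool is attached."""
    return {"type": "mcp", **cfg}
```

Because the server is hosted by Microsoft, attaching it is just configuration; there is nothing to deploy on your side.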
Now, coming back: it's all defined. We take different inputs from the user to make it more interactive, submit the tool approvals, and then create the different agents. It creates the research agent and attaches the Microsoft Learn MCP server tool to it. Then it creates another agent, the executor agent, which will do the summarization. And then the third, the coordinator agent, which decides automatically whom to send a task to, the research agent or the executor agent. We are keeping it very simple, just to show that MCP can be attached here and that two different agents can communicate with each other. The tool could be attached directly to a single agent in the Azure AI Foundry portal (or Agent Service portal) itself, but we want to show, using the Microsoft Agent Framework, how to use two different agents, attach the MCP server to one of them, use the A2A communication channel between them through the Microsoft Agent Framework, and ask the coordinator agent to define the workflow, so that we can have custom workflows as well.
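The coordinator's "whom to send to" decision can be sketched as a simple router. This is a toy keyword heuristic, not the actual LLM-driven routing in the demo, and the agent names match the ones used in this scenario.

```python
def route(task: str) -> str:
    """Toy routing rule: documentation lookups go to the research agent
    (which has the Microsoft Learn MCP tool attached); formatting and
    summarizing go to the executor agent."""
    lookup_words = ("search", "find", "documentation", "docs", "learn")
    if any(w in task.lower() for w in lookup_words):
        return "research_agent"
    return "executor_agent"
```

In the real demo the coordinator's system instructions drive this choice, but the shape of the decision, classify the task and pick an agent, is the same.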
So the workflow is already defined, and instead of multiple files I've put everything, the tools, the data handling, how the research agent works, how the A2A responses flow, into this single file, because we already went through these A2A details in scenario one; it's similar to that. Now, before we run this, I want to quickly show one thing. Let's go to Agents: there is the default agent. Let's try it in the playground and see whether we get any up-to-date information about Microsoft Learn or the Microsoft documentation. Let's check: what is your knowledge cutoff date? It should be around 2023.
Yes, October 2023. Now, just a month or two back, MCP server support was added to API Management, and it started being available in the v2 tiers as well. If we scroll down to the availability section: in preview, MCP servers in API Management are available in the following service tiers: the classic tiers (Basic, Standard, and Premium) and now the v2 tiers. This is new information that you can't find easily. Let's quickly copy-paste this and ask. I don't even have to rephrase the question: if I just ask it as is, the model should not know the v2-tier information, because its knowledge cutoff is 2023 and this was released only weeks or months ago. And as you can see, it even interprets MCP as "multi-cloud and private", because MCP is a very recent protocol defined by Anthropic. So yes, this information is new; the model doesn't have it. So now what we're going to do is create agents (right now, as you've seen, there is only one) through the Microsoft Agent Framework and the Azure AI Foundry Agent Service, attach the MCP tool, and see whether we get the right information or not.
So let's run the scenario-two Python interactive demo. Okay, it's ready to start. What it's going to do is create the three agents step by step. It says it's going to create a research agent that will use the Microsoft documentation MCP tools. Okay, let's do it. Perfect: as you can see, I have the agent ID, so I know it's created, but let me quickly cross-check in the portal, and you can see the MCP tool is attached as well. Next, the executor agent: nothing special, just an agent named executor. Let's quickly check: yep, perfect, and if you scroll right, there are no tools attached to it. Then the third will be the coordinator agent, which is going to use both. And that's it: all three agents are created. This time no local agents are being created; all the agents are created in the Azure AI Foundry Agent Service and will be used from there. Now, ready to demonstrate? Yes, let's ask a question.
Let's ask the same question, no need to rephrase, just paste it here. I'm making this interactive so you can see what's happening; otherwise it would just run very quickly. First, user to coordinator agent: the content is passed to the coordinator agent. Then the coordinator agent tells the research agent: please search the Microsoft Learn documentation and provide this information to me. The research agent uses the MCP tool to pull that information. It has found one tool and done the search (Microsoft Docs search). Let's get the response from the agent. It has provided the response, and because the response is big, the coordinator now sends it to the executor: please format and summarize this information. The executor summarizes it and gives it back, and now we can see it. The summarized answer says yes, there are the classic tiers, these three, and there are the v2 tiers. You can see the difference: if you asked any agent that isn't connected to the internet for this, you would get a wrong answer. And we are not doing an internet search here; we are just making an MCP tool call to the Microsoft Learn MCP server, and we are getting the right information. Now
let's quickly compare both setups. In the local scenario, you use Azure OpenAI directly, create the agents locally, attach local MCP server tools like the weather or file server, and all the communication between the agents happens within your application itself. In the Azure AI Foundry scenario, the agents are hosted inside Azure, MCP tools like Microsoft Learn are attached within Foundry itself, and all the communication and authentication are managed by Azure. You still use the Microsoft Agent Framework to control the A2A workflow, but everything runs on a managed service. The first setup is perfect for scenarios where you want to manage everything yourself and don't want to depend on the Azure AI Foundry Agent Service. The second is the production-ready, enterprise-grade setup, completely managed by Azure; you have less flexibility, but with the Microsoft Agent Framework you can take care of that too. So that's the overview of how you can use A2A and MCP together with the Microsoft Agent Framework. You now understand what they are, why they matter, how authentication works, and how they fit in both the local and the Azure AI Foundry environments. That's all I wanted to show in this video. I hope you liked it. Please like and subscribe.
In this bonus video of the Microsoft Agent Framework series, we explore how to use A2A (Agent-to-Agent communication) and MCP (Model Context Protocol) to create powerful multi-agent workflows, both locally and in Azure AI Foundry.

You'll learn:
✅ What MCP is and how it standardises communication between agents and external tools
✅ What A2A is and how agents collaborate, share context, and divide tasks
✅ How authentication and security are managed locally and in Azure AI Foundry
✅ Two real-world scenarios: one local setup using Azure OpenAI and another using Azure AI Foundry Agent Service
✅ How to design modular and scalable AI systems using Microsoft Agent Framework

Whether you're building agents that talk to APIs, connect to Microsoft Learn, or manage workflow automation, this video will help you understand how A2A and MCP work together to make your AI projects smarter and more flexible.

📘 Watch the full Microsoft Agent Framework playlist to get a complete understanding of agent design, orchestration, and deployment in Azure AI Foundry.
💬 Don't forget to like, comment, and subscribe for more tutorials on Azure AI, multi-agent systems, and advanced AI development.

https://www.youtube.com/playlist?list=PLDkX8OJhBFVuBE19vuAFPnCe-MCjYstCW
Github - https://github.com/Shailender-Youtube/Microsoft-Agent-Framework-Series