OK everyone, welcome to day 2 of Ignite and to AI-powered automation and multi-agent orchestration in Microsoft Foundry.
My name is Shawn Henry.
I've got my colleagues Tina and Mark who are going
to come up shortly, but let's get started everybody.
I hope everyone's in the audience here.
Make sure you got your headphones on and 3D glasses
are under the chairs.
No, no 3D glasses.
OK, no 3D glasses this time.
I can't hear you guys laugh.
Let's get started.
OK, so 2025: the year of the agent.
I was up here at Ignite last year, and I was telling our story about how we believe at Microsoft that agents are the future.
They're how we're going to get LLMs and AI models to act, reason, take action, and integrate with our existing systems.
And in just the course of the year, we've seen millions of agents being deployed into Foundry.
Over 25,000 organizations are building with Foundry, with AutoGen, with Semantic Kernel, with our new Microsoft Agent Framework.
And they've been doing that with some of the things we're going to talk about today that help you build your agents, deploy your agents, and operate your agents in Microsoft Foundry.
Today we'll look at our new unified Microsoft agent framework.
We'll look at multi agent workflows and some of the
visual tools you can use to help you build your
multi agent systems.
We'll look at deploying and evaluating your agents all in
one place in Microsoft Foundry.
And we'll talk about how to customize your agent systems
with UI channels, getting them deployed into M365.
And then we'll look at how you can monitor your
agents, how you can do compliance and governance on top
of your agent systems.
And we've been working with these thousands of customers and a lot of partners at Microsoft over the course of this year to build multi-agent systems for scenarios like customer support and sales assistance, internal productivity, knowledge assistance, and document research.
And one of the partners we've been working with closely is BMW.
So I'd love to just kick this off by bringing Christof Gebhart up here.
And let's take a look at what BMW has been doing.
Come on up, Christof.
BMWs are built with quality and reliability that people trust.
Our engineers integrate every car with cutting edge digital features
to help keep drivers safe on the road while delivering
world class performance.
Testing of all functions plays an important role.
In the past, we used hard drives in our development cars to store test data.
Developers had to wait at least a day to get
fresh data for analysis.
We built the Mobile Data Recorder (MDR), which is integrated into the car's digital nervous system.
This IoT device transmits data in real time.
For our time series analytics, Azure Data Explorer was the
only way to go.
Now engineers can chat with the interface using natural language through Azure OpenAI Service.
The MDR Copilot's front end and back end are managed by Azure Kubernetes Service.
Azure Database for PostgreSQL stores Copilot conversations and feedback.
Azure is the turbocharger for delivering the right data to
the right person on a large scale.
Data analysis went from days to minutes, speeding time to market even as the number of features increases dramatically.
MDR, powered by Microsoft Azure, ensures BMW reliability and quality long before the cars hit the road.
That was great.
Was that you driving around in the snow at the
beginning with the drifting?
No.
No, actually, those were some colleagues; we have some pros doing that.
OK.
Yeah.
So since that video came out, Christof, what have you been working on, and what improvements have you made since then?
So the MDR ecosystem has evolved in hardware and software.
On the hardware side, we've introduced a more powerful Linux-based in-vehicle device that is now part of the core vehicle architecture.
This unlocks a new level of data integration.
MDR is no longer just a passive data logger.
It actively interacts with control units to capture maximum data.
And here is where it gets exciting.
These updates allowed us to move beyond democratizing data.
We are democratizing data analysis.
Our MDR multi-agent system lets engineers ask questions and get data analysis in minutes.
No more endless manual search.
So you had moved all your MDR data, and you modernized it by putting it on Azure.
What did you see that made you decide that maybe
a multi agent system was better than a single agent
system?
Well, the challenge was never data volume.
Our development fleet generates more than enough data.
The bottleneck was data preprocessing, turning raw, diverse data into
a clean structure.
Our single agent system made data access easier, but as
questions grew more complex, we hit its limits.
That's why we moved to a multi-agent approach.
Specialized agents handle retrieval, preprocessing, and analysis.
This modular approach scales easily.
We can always add new agents for new data sources
or analytics methods.
Now engineers can ask more sophisticated questions and get faster,
more accurate answers.
Our teams can experiment faster and iterate quickly.
So how about you walk us through your architecture?
You have an orchestrator agent that's coordinating everything.
You have a data analysis agent, you have a data
retrieval agent that are working together, right?
So at the center is the orchestrator agent.
It manages the entire workflow.
When an engineer submits a question, the orchestrator creates a
plan and delegates tasks to specialized agents.
Data retrieval is handled by two agents that craft queries,
pull the telemetry, and retrieve the data.
Once the data is in, the analysis agent takes over.
It generates Python code to turn raw telemetry into actionable
insights.
Finally, the orchestrator compiles everything and delivers answers to the
user.
The Microsoft Agent Framework ties it all together.
It gives us a robust open source toolkit to define,
connect and manage these agents.
And the Foundry Agent service is our cloud based garage.
It deploys, scales and monitors agents automatically.
That's great.
It's great to see Microsoft Agent Framework and Foundry Agent
Service working together.
We're going to talk about that a lot today.
So you reported something like a 10 times faster iteration loop for data access for your engineers.
What other measurable outcomes have you had, and where do you see this going next across your R&D?
We've moved from static data access to dynamic agent driven
insights.
The impact of our multi agent system is clear.
Broad access to all telemetry data for all engineers and
advanced analytics at scale.
Our vision is to empower every development team to build their own specialized agents: agents to analyze all vehicle data using the knowledge of as many engineers as possible.
This ecosystem of intelligent agents acts like a digital road companion, a proactive partner in engineering, learning, adapting, and helping engineers focus on innovation.
With every new agent, we move closer to a future
where automotive development is faster, smarter, and more collaborative than
ever.
And we are so excited to continue this journey together
with Microsoft and the developer community.
Great.
Thanks, Christof.
If you want to find out more about what BMW is doing, we've got a QR code here.
And if you want to get in contact with Christof, there's his contact information here.
Thanks, Christof. That was great.
Thank you.
So as we worked with folks like BMW and our partners over the course of the year, we got a lot of questions from them about how to start their agentic journey and how to build these agent systems.
They had questions like, you know, where do I start?
What frameworks do I use?
How do I connect my agents together?
How do I connect them to other services?
How do I know what my agent is doing?
How do I evaluate them?
How do I deploy and manage my agents, and get them up into the cloud?
And how do I build good user interfaces on top
of these agents?
So I'd like to bring Mark up here next to
talk about how we approach some of these challenges and
some of the great new features that we have in
the Microsoft Agent framework to help you build your agentic
systems.
Thanks Mark.
OK, that's great.
OK.
So, yeah, thanks a lot, Shawn.
Yeah.
So I'm going to start by talking about the Agent Framework, right?
So the first step was earlier this year: there were two teams, the Semantic Kernel team and the AutoGen team.
They were merged together with a view to converging the SDKs that the teams were building.
What that kind of turned into is the Microsoft Agent
Framework.
Microsoft Agent Framework is a new open source project that
has 2 core capabilities.
It provides the ability for you to create agents using any service.
You can create your agent and give it a model, an LLM, for reasoning.
You can give it tools to take action, to retrieve additional context, and also to generate and use memories.
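To make that concrete, here is a minimal sketch of a single agent, assuming the public-preview agent-framework Python package and its Azure OpenAI chat client; the class and method names reflect the preview and may shift before GA.

```python
# Minimal sketch of a single agent: a chat model for reasoning plus a plain
# Python function as a tool. Assumes the public-preview `agent-framework`
# package and its Azure OpenAI chat client; exact names may shift before GA.
import asyncio

from agent_framework.azure import AzureOpenAIChatClient  # assumed module path
from azure.identity import AzureCliCredential


def get_weather(city: str) -> str:
    """Toy tool; a real agent would call an actual weather service here."""
    return f"It is 18 degrees C and cloudy in {city}."


async def main() -> None:
    # Endpoint and deployment are read from the usual AZURE_OPENAI_* env vars.
    agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
        name="weather-agent",
        instructions="Answer weather questions, using your tools when needed.",
        tools=[get_weather],
    )
    result = await agent.run("What's the weather like in Seattle?")
    print(result.text)


asyncio.run(main())
```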
In addition to the agent capabilities, there's also a workflow builder that allows you to create deterministic workflows, and this allows you to orchestrate multiple agents.
So the combination of non-deterministic agents and a deterministic workflow is what allows you to build reliable multi-agent systems.
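And a sketch of the workflow side, again hedged on the preview API: WorkflowBuilder with set_start_executor and add_edge is the pattern the preview exposes, but treat the exact names as assumptions.

```python
# Sketch of the workflow side: a small deterministic graph that routes work
# between two agents. set_start_executor/add_edge/build reflect the preview
# WorkflowBuilder API; treat the names as assumptions.
import asyncio

from agent_framework import WorkflowBuilder
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential


async def main() -> None:
    client = AzureOpenAIChatClient(credential=AzureCliCredential())
    researcher = client.create_agent(
        name="researcher", instructions="Gather the key facts for the request."
    )
    writer = client.create_agent(
        name="writer", instructions="Turn the notes you receive into a short summary."
    )

    # Deterministic orchestration: researcher runs first, its output feeds writer.
    workflow = (
        WorkflowBuilder()
        .set_start_executor(researcher)
        .add_edge(researcher, writer)
        .build()
    )

    events = await workflow.run("Summarize this week's telemetry findings.")
    print(events)


asyncio.run(main())
```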
So the Microsoft Agent Framework term has actually become an umbrella term that incorporates more than just the open source project.
The open source project is a great place to go
if you want to see how to use Agent Framework
with lots of different technologies that are available within Microsoft
and also external to Microsoft.
But Agent Framework is kind of like an umbrella for all of the pro-dev agentic capabilities that are being developed inside of Microsoft.
OK, so the Microsoft Agent Framework as it currently stands has a .NET and a Python version.
They're in public preview at the moment, but the team's working very hard to get them to GA.
So early next year all of this stuff will be generally available.
You can use Agent Framework as a pro developer to build agents and multi-agent systems using Foundry.
It layers very nicely on top of Foundry and takes advantage of all of the capabilities you get in Foundry: all of the models, all of the tools, all of the memory capabilities.
In addition, like one of the questions we get asked
a lot is, you know, where are you using Agent
Framework internally within Microsoft?
So Agent Framework, the workflows part of agent framework is
part of the Foundry service.
So Foundry has a new workflow capability and that's built
on top of Agent Framework.
So Foundry itself is kind of customer number one for Agent Framework.
In the development of Agent Framework, we've adopted open standards.
So you can use MCP to retrieve additional context for your agent.
You can use A2A for inter-agent communication, and you can use a protocol called AG-UI, the agent user interface protocol, to build the UI for your agentic application.
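As a hedged example of the MCP piece, here's how wiring an MCP server in as a tool can look in the Python preview; the MCPStreamableHTTPTool class name and the Microsoft Learn MCP endpoint are assumptions to adapt to your own setup.

```python
# Hedged example of the MCP piece: wiring an MCP server in as a tool. The
# MCPStreamableHTTPTool class name and the Microsoft Learn MCP endpoint are
# assumptions; swap in whatever MCP server and tool wrapper you actually use.
import asyncio

from agent_framework import MCPStreamableHTTPTool  # assumed class name
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential


async def main() -> None:
    async with MCPStreamableHTTPTool(
        name="learn-docs",
        url="https://learn.microsoft.com/api/mcp",  # example MCP endpoint
    ) as docs_tool:
        agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
            name="docs-agent",
            instructions="Answer questions using the docs tool when it helps.",
            tools=docs_tool,
        )
        result = await agent.run("How do I enable tracing in Agent Framework?")
        print(result.text)


asyncio.run(main())
```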
The agent framework is also extensible, so you can plug
in different agent services, different model providers, different memory services.
The Anthropic announcement yesterday is a really good example of
the extensibility of agent framework.
Right now, if you use Agent Framework, you can use the Anthropic models to build agents.
So for Anthropic models deployed on Foundry, you can use those in the Agent Framework straight away.
So we were able to basically add that functionality in
very, very quickly.
There's lots of extensibility mechanisms available and there's lots of
teams within Microsoft who are building extensions to agent framework.
We're also encouraging third parties to build extensions as well.
OK, so another question we get asked a lot is: what if I'm using AutoGen or Semantic Kernel?
What's your recommendation?
So our recommendation is that for any new development use
Agent Framework.
It'll be generally available early next year.
If you have existing code, we're going to keep maintaining AutoGen and Semantic Kernel, as we've done with recent releases, but we definitely want to encourage you to migrate.
There are a lot of capabilities in Agent Framework, really interesting capabilities, that aren't going to get added to the other SDKs.
So migrating makes a lot of sense.
To help you with your migration, we have the GitHub
Copilot app modernization extensions.
They're available in Visual Studio and Visual Studio Code.
What they give you is a one-click migration story for your existing code.
When you start the migration process, the tooling will analyze
your code base and it will come up with a
plan to migrate it to Agent Framework.
It will then start the migration process and keep track
of everything that it's doing as it's migrating your code.
So you'll end up with a plan of execution.
It will run all of the tests associated with your
with your code base and if there's any static analysis
configured for your code base, it'll run that as well
and make sure the new code is compliant.
When the process completes, you'll get a summary of everything that's happened.
A branch will be created, all of the code changes
will be committed to that branch.
Everything will be documented.
What will be left for the developer to do is create a pull request, review all of the code, and then basically merge that in.
This is all available right now.
The extensions are available on the Visual Studio Marketplace.
We're really looking forward to people starting to use this and giving us feedback.
If there's something about your application that the tooling doesn't work well with, let us know and we can fix that.
OK, so another interesting extension that's just been announced is the durable task extension for Microsoft Agent Framework.
So with the best will in the world, something will go wrong when you've deployed your agent: a service may become unavailable, an agent may crash.
What the durable task extension does is create checkpoints as your agent or your multi-agent orchestration runs.
And then if something goes wrong, you can basically restart the agent or restart the orchestration, and it can pick up from the last checkpoint.
It doesn't have to go right back to the start
of the operation.
It can just pick up and continue on.
This has some very interesting implications if you're dealing with
human in the loop scenarios.
So for human in the loop, if an agent or
a workflow needs human feedback, it may take minutes, may
take hours, or may take days for that feedback to
come in.
And what you can do with the durable task extension is spin down the compute that's running your agent or your orchestration, wait for the human response to come back, and then spin everything back up and continue with the execution.
The Durable Task Scheduler actually handles that process.
It comes with its own dashboard.
The dashboard allows you to see in real time what's happening in your agents and what's happening in your orchestrations.
So it gives you an extra perspective.
You can go back to different states, different snapshots of your agent, and inspect what was going on at that particular point in time.
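To make the checkpoint-and-resume idea concrete, here is a purely illustrative sketch in plain Python; it is not the durable task extension's API, just the bookkeeping pattern a durable runtime performs for you.

```python
# Purely conceptual illustration of checkpoint-and-resume (NOT the durable
# task extension API): persist progress after each step so a restarted run
# continues from the last completed step instead of starting over.
import json
from pathlib import Path

CHECKPOINT = Path("run_checkpoint.json")
STEPS = ["retrieve_data", "preprocess", "analyze", "summarize"]


def load_next_step() -> int:
    """Return the index of the next step to run (0 on a fresh run)."""
    return json.loads(CHECKPOINT.read_text())["next_step"] if CHECKPOINT.exists() else 0


def save_next_step(index: int) -> None:
    CHECKPOINT.write_text(json.dumps({"next_step": index}))


def run_step(name: str) -> None:
    print(f"running {name}")  # stand-in for invoking an agent or workflow node


def run() -> None:
    for i in range(load_next_step(), len(STEPS)):
        run_step(STEPS[i])      # if this crashes, completed steps are not repeated
        save_next_step(i + 1)   # a durable runtime does this bookkeeping for you


run()
```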
OK, so I'm going to switch gears now and do a couple of demos.
OK, so we're going to start by looking at some
code right?
This is the code to create a workflow.
The workflow I'm building is for an event planning scenario,
right?
So there's a couple of things that I need to
consider.
I want to be able to select a venue for
my event.
I want to manage the budget, I want to manage
catering, and then I want logistics coordination.
So what I've done in this particular application is create an agent for each aspect of the workflow.
Then I have one agent that's basically the coordinator, and it's going to coordinate the execution of the entire workflow.
All of the agents are basically wired together so that they can talk to the coordinator.
Now, one of the things that we're providing in Agent Framework is a tool called the dev UI.
So when I run this workflow in the dev UI, it'll build the workflow and it can create a visualization of the workflow for me.
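For reference, the dev UI can also be pointed at your entities programmatically; this is a sketch only, and the agent_framework.devui module path and serve() signature are assumptions about the preview tooling.

```python
# Sketch only: pointing the dev UI at an entity programmatically. The
# agent_framework.devui module path and the serve() signature are assumptions
# about the preview tooling; there is also a `devui` CLI for the same job.
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.devui import serve  # assumed import
from azure.identity import AzureCliCredential

planner = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
    name="event-coordinator",
    instructions="Plan events within the given budget.",
)

# Launches a local web UI that lists, visualizes, and runs the registered entities.
serve(entities=[planner], port=8080)
```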
So straight away I can see this is much more intuitive.
I can see how the different components in this workflow relate to each other.
So I'm going to configure and run from this.
OK, So what I wanted to do is I wanted
to plan a party for 50 people with a budget
of $5000 and I'm going to run the workflow.
So you can see straight away I get a visualization.
The coordinator took the request and passed it on to the venue agent, and I can see that the venue agent is working on this at the moment.
It takes a couple of minutes for this to run.
So I have a version of it that's completed, and I'm just going to talk about some other aspects of the dev UI.
So you can see here in the visualization that everything is completed.
Everything is marked as green.
There's a timeline of execution.
So I can see that it started with the event coordinator, which took the original request, then passed it on to the venue agent.
The venue agent then produced some output, right?
So the venue agent actually needed to know where I want this event to be planned.
So if I come back over here, you can see this is the human-in-the-loop interaction that I'm having with this workflow.
So the prompt here is asking me for a city or a neighborhood for the event.
I'm just going to type in Seattle, and then it'll continue; it'll pass that additional information on to the venue agent, and the venue agent can continue with its work.
So, yeah.
So I can see the entire workflow or timeline for
the workflow.
It goes back and forth between the different agents and the event coordinator.
The other thing I can see is the traces associated with this.
These are OpenTelemetry traces, and I can see different aspects of what's going on within this workflow.
So there's an executor process that's part of the workflow execution.
In this particular case, what it's doing is invoking an agent.
The telemetry here follows the OpenTelemetry GenAI standard; that's the convention we're adhering to.
So we're calling the event coordinator agent.
It in turn is calling an LLM, and it's using GPT-5 mini.
I can drill into this and see the details of what that specific agent is sending to the model.
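If you want to route those traces to your own backend, here is a generic sketch of wiring up an OTLP exporter with the standard opentelemetry-sdk; this is plain OpenTelemetry setup, not an Agent Framework-specific API, and the endpoint is a placeholder.

```python
# Plain OpenTelemetry setup (standard opentelemetry-sdk, not an Agent
# Framework-specific API): route the GenAI-convention spans the framework
# emits to any OTLP-compatible backend. The endpoint is a placeholder.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider(
    resource=Resource.create({"service.name": "event-planning-workflow"})
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
# Spans created during the agent/workflow run (gen_ai.* attributes) now flow
# to the configured collector, for example one forwarding to Application Insights.
```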
Another thing I can do in the dev UI is see any tools that were executed by the agents.
These agents are using web search, so there's a web search tool call here.
It's sending a request to identify venues in Seattle, and then I can see the responses that came back from the web search.
So I can not only visualize my entire workflow, I can also interact with the individual agents, right?
So all of the agents that make up this workflow
are listed here.
I can bring these up and I get a basic
chat interface and I can interact with those agents.
So I can stabilize each part of my workflow in isolation using this tooling.
OK, so once I have my workflow built and I'm
happy, the next thing I need to start thinking about
is deploying it.
And when I go to deploy it, I want to
make sure that it's secure.
I want to make sure that no sensitive data leaks; say my company has a policy where we don't want to send any sensitive data to a large language model.
So there are multiple options for integrating data security and data governance with Agent Framework, but we have one solution that uses Microsoft Purview.
So with Microsoft Purview... actually, let's take a look at the code first.
So this is what you need to change in your
code to use Microsoft Purview, right?
So Microsoft Purview is an offering that provides data governance
and data security capabilities.
So I'm creating Purview policy middleware that I'm going to inject into every agent I'm using in the workflow.
So I create the middleware; all I'm doing is giving it an application name.
This is the code to create my agent, and I can just provide whatever middleware I want.
Any time this agent is executed, that middleware will also get triggered.
It'll be called before the agent is run, so it'll be able to look at what data has been sent to the agent, and it'll also get triggered after the agent has executed, so we can see the responses.
This gives it the ability to block certain requests if there is sensitive data being sent to an LLM.
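A rough illustration of that hook pattern is sketched below; it is not the actual Purview middleware class, and the (context, next) signature and the middleware= argument are assumptions about the preview API.

```python
# Rough illustration of the middleware hook pattern, NOT the actual Purview
# middleware class. The (context, next) signature and the middleware= argument
# are assumptions about the preview API; a real policy engine would classify
# the data and log the interaction rather than rely on a toy regex.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


async def policy_middleware(context, next):
    request_text = str(getattr(context, "messages", ""))
    if SSN_PATTERN.search(request_text):
        # Block the request before anything reaches the LLM.
        raise PermissionError("Request blocked: possible Social Security number")
    await next(context)  # run the agent
    # After the run, `context` carries the response, which can also be audited.


# Hypothetical wiring, mirroring the pattern described in the demo:
# agent = chat_client.create_agent(..., middleware=[policy_middleware])
```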
So operationally, what that looks like is I can go to the Purview Activity Explorer and see all of the interactions with this particular agent, right?
So if I click on one of the interactions here,
I can see when it happened, who it's from and
basically what information they sent.
So this is a fairly ordinary request, just looking for the capital of France.
I can also see the response to that, what came back from the model.
Now there's a couple of interesting activities here related to
sensitive information.
If I open up one of those, I can see
again who sent the request and I can see what
type of sensitive information was included.
So in this case, it was a credit card number or a Social Security number.
I can go to the related AI interaction and if
I open that up, I can see the exact message
that was sent to the model.
So if I just scroll down here, when the message loads, you'll see that it includes a credit card number and also a Social Security number.
So now I've got my workflow built and I have my security concerns addressed.
The next thing I want to start thinking about as I build this application is putting a user interface on top of the application.
So there are two options; I'm going to go through these super fast.
The first one is using OpenAI ChatKit.
This gives you a set of components, basically reusable components, so you can embed a chat into any of your applications.
There are also some back-end components so that you can build your own server.
In this case, our server is using Microsoft Agent Framework.
If I send a request, you can see I'm getting a normal text response back, but I'm also getting this widget.
The back end calls a function, and that function returns all of the weather information as structured data.
I can then convert that into a widget, send the widget to the UI, and the UI can then render it, like so.
This allows me to go beyond basic chat interfaces.
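A tiny sketch of the kind of tool behind that widget: the function returns structured data, and the server layer maps it to a UI widget instead of plain text. The field names here are illustrative, not ChatKit's actual schema.

```python
# Tiny sketch of the kind of tool behind the weather widget: the function
# returns structured data, and the server layer maps it to a UI widget instead
# of plain text. Field names are illustrative, not ChatKit's actual schema.
from dataclasses import asdict, dataclass


@dataclass
class WeatherReport:
    city: str
    temperature_c: float
    condition: str


def get_weather(city: str) -> dict:
    """Return structured weather data that a widget renderer can consume."""
    report = WeatherReport(city=city, temperature_c=11.5, condition="light rain")
    return asdict(report)  # e.g. {"city": "Seattle", "temperature_c": 11.5, ...}
```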
There's another protocol called AG-UI, the agent UI protocol, that we support as well.
Again, here I get a basic chat interface.
And what's interesting is that if I send it a standard request, I'm sending it a request and I'm sending it a tool, the agent is deciding to run that tool, and it's changing the background.
So it's actually changing the UI of the application that the chat is embedded in.
Another interesting example of this is having shared state between your agent on the back end and the front end.
So in this case, I've got some UI that can display recipes.
If I ask the agent for a recipe for coddle (coddle is a traditional Irish recipe), it'll call the agent, get structured data back which describes that particular recipe, convert it into state, send the state to the client, and then the UI can update.
I also get a text response back, the normal text response you'd get.
OK, So that's me.
OK.
I love to see live demos.
Those are great.
Great to see.
So we built our agent system.
We've got it running now.
We've got to host it somewhere, we've got to deploy it somewhere.
We've got to manage it.
We've got to integrate it with our other systems.
So let's switch back over here.
Let's bring Tina up.
You want to come on up and talk about how we can get the agent systems we built in Agent Framework up into Foundry?
Come on up.
Thanks, Shawn.
You know, it's such an exciting time to be alive, where all of these software systems are starting to transform into being more and more autonomous, and we are really evaluating the potential and risk of all of these autonomous systems.
All of the use cases that are actually driving real value today involve a lot of customizability in code, and these pro-code agent frameworks make all of this easy with the built-in primitives and orchestration patterns that they support.
But when you actually want to go and operationalize it and take your prototypes to production, that's where the challenge is, and that's the challenge that we're looking to solve with hosted agents in Foundry Agent Service.
We're thrilled to be announcing this in preview this Ignite.
This will enable you to deploy, manage, and operationalize the pro-code agents that you've built in custom code and use them with the rest of the platform services.
There's nothing up there, Tina.
Let's get this.
I'm going to press forward there.
That's what we like to see, right?
OK. Yeah.
Yeah.
So this is going to enable you to deploy, manage, and operationalize your workflows.
We're bringing this with first-class support for Microsoft Agent Framework in Python and C#, LangGraph in Python, and more frameworks to follow.
You can also start from the ground up with your own custom code in Python and C# as well.
Now, all of this runs on a serverless, secure, managed runtime which auto-scales based on your usage, and it runs in your own dedicated execution environment.
Your context is managed using the Responses API, and you can implement versions of your hosted agents for the different configurations you want and manage them across the end-to-end life cycle: how you create them, how you update, start, and stop them, and how you perform other operations on your agents.
And all of this comes with the integrated set of tooling that we have on Foundry, so you have to write less boilerplate code and fewer implementations in your code.
You don't have to implement your own MCP clients in the frameworks that you're writing in.
You essentially configure these tools once on Foundry, use the same connection that you have to these tools in your Foundry project, and plug and play with the code that you're writing, so you can use these tools across different agents on Foundry.
All of this comes with managed authentication as well, so you don't have to implement your authentication flows in the code.
Now, observability and monitoring are of supreme importance, and so you're going to have all of this in the OpenTelemetry semantic conventions.
All your logs, metrics, and traces are going to show up in the Application Insights resource that's associated with your Foundry project.
So you can monitor your agent execution steps, the performance, and the operational health of your agents, and govern them from there.
Now, talking about how you make sure the non-determinism of these agents, as well as their quality, is kept in check: you want to run continuous evaluations on them, use synthetic data or human evaluations, set guardrails for the different risks and harms that could be output from these agents, run compliance checks against the guardrails that you're setting, and take action on the compliance checks that fail.
And all of this can be done using the Foundry control plane that we've also announced at Ignite.
Now let's talk a bit about interoperability.
All of these agents can be deployed with Activity Protocol support provided by the service, so you don't have to implement it.
And once you have them up with Activity Protocol support, you can have your SaaS-based agents in Copilot Studio invoke these Foundry hosted agents as part of your workflows.
You can also publish your hosted agents to M365 and Teams and create digital workers across your organization.
All of this comes with the non-negotiables of enterprise security and data sovereignty, where you can have all of these agents run in your own private virtual network and use your own resources in your own Azure subscription to store the state of your agents as well.
Now let's dive right into the demo of how you would go from local development to actually hosting this on Foundry.
Now, what I have here is Agent Framework code which is essentially modularized.
I have this piece, the Azure OpenAI client, implemented here, which is just my agent, and then it is associated with a tools object which...
All right, Tina, this is twice I had to bail you out.
You owe me lunch now.
Sorry about that.
Yeah, let me start over.
So, starting over from local development, I have this Agent Framework code which is essentially modularized.
You're going to see this package here, which is the adapter package; it essentially does a couple of things to make this deployable on Foundry.
It implements all the translation classes from the framework that you're writing in to the Responses API, so these agents are deployable on Foundry and integrated with the rest of the Foundry platform.
It also implements the tools client libraries so you can invoke the Foundry tools that have been configured on Foundry locally as well.
And all the OpenTelemetry packages that you would need to instrument your agent code are embedded and imported when you import this package, so you don't have to do that yourself.
Now, talking about the agent code itself, it's modularized in a way that you have your Azure OpenAI chat client here, which is essentially declaring the instructions for your agent.
And then I have this set of tools that I have on Foundry connected; this is a connection that I have created on Foundry, and I'm just going to reference that connection ID here.
Let's go ahead and quickly see what that connection looks like on Foundry.
This is the connection that I've created to a Hugging Face MCP server from the tools catalog, and that is the connection ID that I'm going to be referencing here as well.
So I'm plugging my tools into the agent and wrapping my agent code with the adapter so it's exposed through the Responses API.
Now I'm going to quickly run this, and it should spin up a local HTTP server that I can invoke and test my agent with.
Let's see, it's going to take a while.
All right.
While this happens, let's see if you guys want to switch over to deploying to Foundry already. OK, OK.
How about we speak about azd?
OK, it's running now.
It's all there.
OK, so you can see it's starting my agent server, and then it should spit out the localhost endpoint that I can invoke.
So you can see now this is running on my localhost endpoint here.
And now if I want to invoke this, what I'm trying to do here is list all the tools that are exposed by this Hugging Face MCP server and see if, locally, I'm able to call this connection that I created on Foundry.
It should essentially list all the tools based on the token that I configured for my connection, which is based on the set of permissions I granted when I generated that token.
OK, looks like a lot of tools being retrieved here.
OK, so this is done, and you can see the list of tools is basically the set of operations that I can do with this MCP server.
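For reference, invoking a locally running agent server like this is just an HTTP call; in the sketch below, the port, path, and body shape are assumptions, so use the endpoint your run prints and the request schema your adapter actually expects.

```python
# Invoking the locally running agent server is just an HTTP call. The port,
# path, and body shape below are assumptions; use the endpoint your run prints
# and the request schema your adapter actually expects.
import requests

response = requests.post(
    "http://localhost:8080/responses",  # placeholder for the printed endpoint
    json={"input": "List the tools exposed by the Hugging Face MCP server."},
    timeout=120,
)
response.raise_for_status()
print(response.json())
```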
So now all of this works well locally, and it's time for me to go ahead and deploy this to Foundry.
To deploy this to Foundry, we're going to be using the Azure Developer CLI.
I want to get a quick show of hands for
folks who are familiar with the Azure Developer CLI here.
OK, quite a few of them.
But for the folks who are not, it's essentially the
open source tooling that we have which simplifies the provisioning
and management of Azure resources.
It's essentially combining your infrastructure as code and your app
development.
So it can provision your resources, it can set up
identity and your role based access for the resources that
you want to tie together.
And all of this is done using Bicep or Terraform.
And then it can also deploy your code.
So if you have a project which has an agent and a web app, ideally you want to be able to deploy them in the same environment.
And so we came up with a dedicated agent extension, the Azure AI Agent extension, to be able to deploy hosted agents as part of your azd deployment experience.
And so before I get started, I'm going to first create a new directory to make sure all my files are clean.
And now I'm going to first go ahead and download the azd template, which essentially has everything that I need to be able to work with Foundry.
So this is going to install all the infrastructure-as-code files for me right here on my local machine.
I'm going to set a unique environment name here: demo.
Usually, when you're provisioning resources as part of your azd experience, it uses the environment name when provisioning all of those resources, so it doesn't have to prompt you for that again and again.
So you can see your project is now initialized.
The next step is to essentially tell azd which environment you actually want to initialize here.
And so in this case I have a project already in place which has everything that I need: all the resources, all the connections that I have, and I want to be able to deploy this into the same project.
But you could want to do something from scratch, or you could have a project with some of the resources in place and not others, and want to provision those as well; I'm going to talk about what that experience looks like.
But in the case where you have a project already, you're going to initialize in the context of that project by saying azd ai agent init and giving your project's resource ID.
And so you're going to see it's going to set
a bunch of environment variables for me based on the
project's resource ID here.
So it's setting the resource group, it's setting the account
name, and it's identifying that I have a connection to
the container registry already here.
And so now, if you want to check how this repo is looking, right here your infra folder is going to have all your Bicep files.
Your azure.yaml is essentially a declarative way of showing which of your services are going to get deployed eventually.
So you're going to see that this currently doesn't have any registration for an agent.
And for that purpose, we're now going to use the Azure AI Agent extension to initialize that agent's configuration, which is essentially the definition of the tools, the models, and the behavior of that agent.
And that essentially is going to look something like this
in this repo right here that I'm going to try
and deploy.
So you'll see that this is where I'm saying that this agent actually requires a GPT-5 model.
So it's going to go check if the GPT-5 model is available in that project as is, and then it's going to go ahead and ask me for a bunch of other compute information that I need to be able to deploy this agent.
Then I'm going to go configure the min and max replicas that I need for this agent.
And now my agent definition is registered as a part of this project, and I can go and check what this looks like and verify that this agent is now registered here.
And so it has the context, the configuration of the agent, that I actually want to go and deploy.
And so it is now as simple as just saying azd deploy to push this to the cloud.
Now let's say you were doing this in a project where you didn't have some of the resources, like you didn't have an Azure Container Registry or you didn't have your App Insights configured.
You would essentially have to do an azd provision first before you run azd deploy.
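Pulling the whole flow together, the commands from this demo look roughly like this; the template name and the exact prompts/flags are placeholders, so check the extension's own help for the precise syntax in your version.

```bash
mkdir hosted-agent-demo && cd hosted-agent-demo
azd init --template <foundry-hosted-agent-template>  # download the starter template (placeholder name)
azd env new demo                                      # environment name used when naming provisioned resources
azd ai agent init                                     # register the agent; prompts for the Foundry project resource ID, model, replicas
azd provision                                         # only needed if ACR, App Insights, etc. are not already in place
azd deploy                                            # remote image build via ACR Tasks, then deploy the hosted agent
```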
So you're going to see how this is packaging my code.
Now it is going to build that image for me.
This is going to do a remote build, so it's building this image using ACR Tasks and you don't need to have Docker on your machine.
And it's going to push that image to the ACR that is associated with my project, figure out the identity that my project needs to be able to pull that image, and then go ahead and deploy.
This is going to take a while, so I am
going to switch over to Foundry UI where I already
have this agent in place.
So you're going to be able to see all your
hosted agents once they are created along with the workflows
and prompt agents that you have on Foundry here.
And I'm going to go ahead and see if I can prompt this to, say, generate an anime-like picture of a girl in the city using one of the Hugging Face Spaces.
Actually, I'm going to say using Qwen's Hugging Face space.
And so what it's going to do is use the authorization that I have, using that token, to be able to generate that image on my behalf.
And while it does that, I want to walk you through some of the operations that you can do with this agent.
So you're going to be able to see the list of the details that you have for this agent on the left.
So you can see the container image which is associated with the agent here.
You're going to be able to do a couple of operations, like being able to stop your agent; so this is the force stop.
You're going to be able to delete your agent deployment if you want to deprovision your resources.
You can see I have two versions of this hosted agent, so you can also go here and compare the two versions and see how the two versions perform for the same prompt, for example.
And let's say I have this V1 of this agent, which is essentially stopped right now.
So you can go ahead and click start here to start it in a couple of seconds, right here, all from the Foundry UI.
If you want to invoke this using code, you can just go here and click to provision VS Code for the Web, which essentially installs all the dependencies and has everything set up for you to just run the script and invoke it.
You can see this has actually finished processing, so it has returned this image.
Very good, I was thinking.
So thank you, Tina.
What I love about what we were able to show here is how you can take your code, in your environment in Visual Studio Code, and deploy it into Foundry.
You can try this out today.
Build your agents, build your agent systems, get them deployed into Foundry, and use them just like the agents that you've built directly in Foundry.
So it's something you can try today; give it a try, and we'll make pricing details available in February.
OK, so we talked about some great things you can do using the Microsoft Agent Framework: MCP and A2A, multi-agent workflows, telemetry, Purview, hosted agents, and then UI with ChatKit and AG-UI.
So we have some great solutions for all the things that you're seeing when you're building your agents and deploying them into Foundry.
Some quick notes; let's just move ahead.
I think we're almost out of time.
If you want to find out more, go to ai.azure.com; you can try our new portal.
If you want to try the Microsoft Agent Framework, there's an aka.ms shortlink for it, or go to GitHub and search for Microsoft Agent Framework; you can find it there.
There are a bunch more sessions that you can go and check out.
And then I believe Mark is also running a lab this afternoon over in Moscone West.
There might be a couple of seats left where you can go and try out Microsoft Agent Framework for yourself.
Thank you for your time.
We'll be hanging out at the booth after this and
I have some stickers if folks want to come up.
If you'd like some swag, come up and say hi.
Thank you everyone.
Build multi-agent systems the right way with Microsoft Foundry. Go from single-agent prototypes to fleet-level orchestration using the Foundry Agent Framework (Semantic Kernel + AutoGen), shared state, human-in-the-loop, OpenTelemetry, MCP toolchains, A2A, and the Activity Protocol. Bring frameworks like LangGraph and the OpenAI Agents SDK, then deploy as containerized, governed, observable agents on Foundry. Delivered in a silent stage breakout.

To learn more, please check out these resources:
* https://aka.ms/ignite25-plans-agenticsolutions

Speakers:
* Christof Gebhart
* Shawn Henry
* Tina Manghnani
* Mark Wallace

Session Information:
This is one of many sessions from the Microsoft Ignite 2025 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com

BRK197 | English (US) | Innovate with Azure AI apps and agents, Microsoft Foundry
Breakout | Expert (400)
#MSIgnite, #InnovatewithAzureAIappsandagents

Chapters:
0:00 - Introduction to Day 2 of Ignite: AI-powered Automation & Multi-Agent Orchestration
00:07:09 - Impact and Future of BMW's Multi-Agent Ecosystem
00:08:40 - Microsoft's Lessons from Partnerships: Common Agent Development Questions
00:12:09 - Foundry integrates Agent Framework for new workflow capabilities
00:14:13 - GitHub Copilot extensions simplify migration via Visual Studio tools
00:22:38 - Integration with Microsoft Purview for data governance and secure agent deployment
00:23:27 - Creating and injecting Purview policy middleware into agent workflows
00:25:16 - Detecting and analyzing requests containing sensitive information like credit cards and SSNs
00:39:12 - Setting up the AZD template and environment for Foundry