Welcome, everyone. It's great to be here with you today. Thank you for joining our webcast. My name is Ben Henderson, senior security engineer here at Microsoft, and I have the honor of being here with the amazing Innocent Wafoula, senior product manager for Security Copilot. Innocent, how are you today?
>> I'm doing good. How about you? Thanks
for having me today, Ben.
>> I'm doing great. I'm excited about this. You know, Security Copilot has been near and dear to my heart since, gosh, since it launched; we've been working together.
>> Sure.
>> Wonderful. Well, I have some great questions for you, from working with Security Copilot and working with our customers, and I know they would love to hear your perspective. But let's start with the basics: what are AI agents, and how do they differ from the tools that we have been working with so far in cybersecurity?
>> Yeah, absolutely. Great place to start. The way to look at agents is that they're a new category of software that leverages the power of large language models to plan, reason, and take actions to achieve goals without following a predetermined path. And that's really the key thing: they don't follow a predetermined path to arrive at their output, and that's what gives them this agency. After all, they are called agents, and I always say that an agent must have some agency. So this is the key difference: that ability to tap into a large language model, which confers some human-like intelligence, or cognitive capabilities, so that certain tasks that couldn't be delegated to machines or software before now can be. That is really the difference that's happening here.
>> I love that, and I will be using it: agents need agency. That really makes a lot of sense. So why now?
>> To illustrate this more clearly, let me show you in a moment how this actually works: the whole flow of how an agent works, and the components that are in there. It begins with the agent itself. Normally, agents react to triggers. The trigger could be manually executed, or it could be an alert arriving, the agent seizing that alert and then proceeding to action it. The first thing the agent will do, and this starts to show the unique capability that agents have in relation to other, traditional types of software, is plan: figure out, I have this task, how am I going to go about it? The next thing it does is pick up context from either long-term memory or short-term memory. The way to think about this is that the agent might, for example, tap into its long-term memory to get insights into how it's been directed by the user to handle certain situations that are specific to the customer's environment, which is another great advantage that agents have, because they're actually able to learn over time and get better. Once it goes through that process and gets context from short-term or long-term memory, it then, again autonomously, decides what the best tool is to perform the task. It goes into its toolbox, selects the tool it requires to perform the job, and then executes that tool. That tool could be making an API call, invoking a plug-in, or, more recently, tapping into an MCP server. And throughout, you'll see up there that it's an iterative process, and this is where the recursive reasoning aspect of agents comes in: because they're not following a predetermined path, if the path they originally set out to follow doesn't produce the expected outcome, they have the intelligence to say, you know what, I'll try another tool, I'll try another route, until I get this job done. So, in summary, the key characteristics that distinguish agents from the typical software or traditional tools we've been using to solve these types of problems are: the ability to reason recursively; the autonomy to plan and to take different routes to reach the goal; the ability to take the right action based on the plan it's made; and the ability to adapt over time without necessarily being reprogrammed. These are inputs that come from the user as natural language: hey, you know what, in our environment we disregard this type of alert, don't worry about it, and the agent will learn; next time it's not going to worry about that particular alert. That's really the distinction that agents bring to the picture.
>> Wow, that's such a great graphic and such a great way to view agents: the iterative process of that task completion. And it's learning for our customers' businesses 24 hours a day; it's a constant process.
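The trigger, plan, memory, tool, and iterate steps described above can be sketched in plain Python. This is a hypothetical skeleton, not Security Copilot's implementation; all function and field names are illustrative:

```python
# Hypothetical sketch of the agent loop: trigger -> plan -> recall context ->
# pick a tool -> execute -> re-plan on failure. Illustrative names only.

def run_agent(alert, tools, long_term_memory, max_iterations=3):
    """Process a triggering alert, retrying with a different tool on failure."""
    # Pick up context: e.g. guidance the user gave in earlier interactions.
    context = long_term_memory.get(alert["type"], "no prior guidance")

    tried = []
    for _ in range(max_iterations):
        # "Plan": choose the first tool we haven't tried yet. A real agent
        # would ask an LLM to reason about which tool fits the task.
        remaining = [name for name in tools if name not in tried]
        if not remaining:
            break
        tool_name = remaining[0]
        tried.append(tool_name)

        # Execute the chosen tool (an API call, plug-in, or MCP server).
        result = tools[tool_name](alert, context)
        if result is not None:
            return {"tool": tool_name, "outcome": result}
        # Otherwise iterate: that route didn't work, so try another one.
    return {"tool": None, "outcome": "escalate to human"}


# Toy tools: the first one fails for this alert, forcing a second route.
tools = {
    "lookup_user": lambda alert, ctx: None,            # simulated dead end
    "query_siem":  lambda alert, ctx: "benign sign-in",
}
memory = {"signin": "user said: ignore alerts from the test tenant"}

print(run_agent({"type": "signin"}, tools, memory))
```

The retry loop is the "recursive reasoning" piece: when the first tool produces nothing, the agent takes a different route instead of failing, and only escalates when its options are exhausted.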
>> Let me ask you this. What is driving
this shift right now from your
perspective? You have a view into all of
Microsoft. What's driving this shift to
AI agents?
>> Yeah, I think it goes back to the perennial problem: too much work to be done in cybersecurity, too few people to do it. So lots of alerts, and with digital transformation continuing, I don't think we can expect the number of alerts to go down. This is a challenge that's been there for a while. But what's changing now is that the rise of LLMs has made possible a new class of AI apps: apps that are powered by large language models and are now able to do tasks that require some amount of reasoning, with a level of autonomy. This is really what's changing. It means we can now progressively delegate some of the tasks that human beings had to do, because we have some level of intelligence within the machine, essentially brought about by the large language models that are powering agents. I think this is the shift that's happening, and slowly but surely we are delegating more and more tasks to agents and seeing how they're making a difference for us.
>> That's awesome. What role is Microsoft Security Copilot specifically playing in this transformation?
>> Yeah, as Microsoft we really stepped up to empower defenders. What we've done is build a fleet of agents that are able to help customers with certain tasks. We have first-party agents, and we also have agents from our ISV partners. In total we have around 40 agents and counting.
>> That's amazing, 40 of them, and I'm assuming they're growing every day. What agents are available today for customers to use?
>> Yeah, sure, the list continues to grow, and I can show you here in a moment what we actually have. So let me pull up...
>> That sounds great.
>> I know we have first-party Microsoft agents, we have third-party agents, and then
>> Sure.
>> all the agents that customers are building on their own.
>> Exactly. So this is where you start. You come to the Security Copilot portal, of course, and then you have an option down here to access the store, which was newly launched at Microsoft Secure. What you'll find here is that you can filter by our first-party agents. These are the ones we've got, six of them. The Access Review Agent is the latest in the fleet, announced recently. We've got the Phishing Triage Agent, which is gaining a lot of traction, the Conditional Access Optimization Agent, and so on. And then we have our partners as well. Remember, security is a team sport. We don't claim to cover all angles, and we are very happy to partner with our ISVs, and here they are, with quite a number of agents and growing. But that's not all. Let's say you want to build your own agent, because you have a niche use case, or a specific use case that's currently not covered. No worries, you're welcome as well. You can build the agent yourself. You can do it declaratively: if you're familiar with Microsoft Copilot Studio, you can just say in natural language what kind of agent you want and the agent is built. That's one way to do it. If you're a pro-coder, that's also fine: you can build it in your own IDE environment, then come in, upload your YAML file, deploy your agents, and you're ready to go.
>> That is so cool. And what I love about it, and you hear this a lot at Microsoft now, which is so true, is that AI is becoming the new UI, the new user interface. You have the full power of full code, or you can just write a prompt, and it is amazing. Let me ask you this question: what real-world use cases and impact have you seen?
>> Yeah, absolutely. We have had reports from several customers of significant reductions in time, especially in the triage of user-reported phishing alerts. In fact, one customer reported that this agent, the Phishing Triage Agent, is saving them nearly 200 hours a month. What it does, of course, is take these numerous user-reported phishing alerts, most of them false positives, and deal with them, giving the analyst time back. We're also getting similar feedback from the early adopters of other agents, such as the Conditional Access Optimization Agent; the Vulnerability Remediation Agent in Microsoft Intune is also making a difference for our customers; and with the recent launch of the Access Review Agent, we are looking forward to hearing from customers as well. That agent is going to proactively scan access reviews in your tenant, analyze any identified reviews, gather extra insights, and then generate recommendations, giving you a recommendation to approve or deny and a justification as to why the agent reached that conclusion. We're looking forward to getting feedback on that one as well.
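The review-to-recommendation flow just described, where an agent analyzes each pending review and emits an approve/deny recommendation with a justification, can be sketched roughly like this. The field names and the 90-day inactivity threshold are illustrative assumptions, not the Access Review Agent's actual logic:

```python
# Hedged sketch of an access-review recommendation step: analyze one review,
# then recommend approve or deny with a human-readable justification.
# Field names and the threshold are illustrative, not the product's logic.

def recommend(review):
    """Return an approve/deny recommendation with a reason for one review."""
    days_inactive = review["days_since_last_use"]
    if days_inactive > 90:  # assumed staleness threshold for illustration
        return {
            "user": review["user"],
            "recommendation": "deny",
            "justification": f"no use of this access in {days_inactive} days",
        }
    return {
        "user": review["user"],
        "recommendation": "approve",
        "justification": f"access last used {days_inactive} days ago",
    }

# Toy pending reviews standing in for reviews scanned from a tenant.
pending_reviews = [
    {"user": "alice", "days_since_last_use": 12},
    {"user": "bob", "days_since_last_use": 200},
]
for review in pending_reviews:
    print(recommend(review))
```

The point of the justification field is exactly what's described in the conversation: the human reviewer sees not just the verdict but why the agent reached it.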
>> That sounds really great. It's so transformational, having worked in SOCs, security operations centers, and deployed security programs, to be able to leverage an agent to do that. But let's talk for a moment, putting on my CISSP hat, our security hat. Let's talk about risks, right? Risks, and the road ahead. For example: hallucinations. We now have a lot of RAG and grounding that can help reduce hallucinations. What other risks can we mitigate, or are we mitigating, today?
>> Yeah. So, first of all, hallucinations are an inherent challenge with large language models, and this really stems from the way they are trained. That probably isn't going away for a while, but as you said, yes, we can do something about it, and we are doing something about it. You mentioned one of them: retrieval-augmented generation. What this means is that there is a base model that's been pre-trained, but now, especially in the case of Security Copilot, unless you ask it a generic question, it is actually using information from your environment, say Sentinel alerts or Defender alerts. That means it's already grounded in that respect, because it's not making up alerts; it's working on the basis of real alerts from your environment. That helps to minimize hallucinations. That's one of them. The other one is continuously tuning the model, which is something we also do. And then agents themselves are a strategy for reducing hallucinations, because agents are built to address narrowly scoped tasks. If you think about the agents I've just shown you: we've got the Phishing Triage Agent; it doesn't pretend to do anything else, it just does phishing triage. You have the Access Review Agent; that's exactly what it does. What that means is that you can limit the scope of inputs, and when you limit the scope of inputs, you're limiting ambiguity, and that also helps eventually with the output. That's another way to think about how we are addressing the problem of hallucinations. The last one is the most effective one, and this is the human in the loop. At the end of the day, you will see that in all these agents there is a place where the human is required to approve the recommendation. You're not just expected to take everything that comes from the agent; you have that final say: you know what, I know my environment better, this one I'm not going to go with, I'll go with the other one. Those are the strategies that we are employing to build trust and to help our customers have confidence in adopting the agents and our other AI solutions.
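The human-in-the-loop safeguard described here is simple to picture in code: nothing the agent recommends is acted on until a person approves it. A minimal sketch, assuming a list of recommendations and a stand-in approval callback (all names are illustrative):

```python
# Minimal human-in-the-loop gate: each agent recommendation is run past a
# human approver before any action executes. Illustrative sketch only.

def apply_with_approval(recommendations, approver):
    """Execute only the recommendations the human approver signs off on."""
    actions_taken, rejected = [], []
    for rec in recommendations:
        # The human has the final say: they know their environment best.
        if approver(rec):
            actions_taken.append(rec)   # only now would the action run
        else:
            rejected.append(rec)
    return actions_taken, rejected

recs = [
    {"action": "disable_account", "target": "stale-svc-01"},
    {"action": "revoke_access", "target": "cfo-laptop"},
]
# Stand-in for an interactive prompt: approve everything except the CFO change.
human = lambda rec: rec["target"] != "cfo-laptop"
taken, skipped = apply_with_approval(recs, human)
print(len(taken), len(skipped))
```

The design point is that the approval gate sits between recommendation and execution, so an occasional hallucinated or unwanted recommendation never becomes an action.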
>> That's awesome, that's really great. Being a security practitioner myself, I love that Microsoft put security, trust, and safety first, right? Our responsible AI stack and our Secure Future Initiative. What are you hearing from customers about ethical and regulatory concerns around the world in relation to AI and AI security?
>> Yeah, of course, those are always going to be there, and the thing is, remember, AI apps are a special class of software. They have this ability to exercise some human cognitive abilities, and that's what sets them apart and requires a different approach to addressing the legitimate concerns of our customers. Indeed, as Microsoft we have the six responsible AI principles that guide all our AI work, not just for Security Copilot; literally any AI product that we build has to go through this rigorous process of reviews. So fairness, reliability and safety, inclusiveness, transparency, and the rest of those six principles: we strictly follow them to make sure that we are self-regulating in that sense. And we also comply with other regulations that are in force, some mandatory, some discretionary: things like GDPR, HIPAA, and several others.
>> That's wonderful. I mean, that aligns with what we've been doing in the cloud, so it just makes sense. Let me ask you one last question. I know your time is valuable, but let me get in one last question if I may.
>> Yes.
>> Give us a little treat, a little sneak preview. What can we look forward to as this technology evolves in the cybersecurity space?
>> Right. So I think as the technology evolves, we are going to see more compute capacity coming online; if I'm not wrong, there are data centers being built almost on a daily basis. That's one thing that's going to bring in more capacity, and we should be able to do more with that. I think the other thing is learning from the current set of use cases, bringing it down to the Copilot scope. We already have a number of agents out there; I've shown you which ones they are. Part of my role is to gather feedback from customers: what do they think about the agents we've already released? Are they meeting the requirements? We are constantly improving on those, and at the same time we also have a pipeline of new agents coming, to be able to address even more complex use cases in the future. Those are the things we are working on, building on the experience and the learnings that our customers have graciously enabled us to get through the preview, public preview, and launch phases. That's one of the things. The other one is securing agents. The thing is, it's not in question that agents have transformational capacity and potential; nobody is arguing about that. But I think the one thing that could dampen adoption, or raise questions and blockers to adoption, is the security of those agents. So that's another area we're doubling down on; we're working on it already, we do have a couple of products out there, and we're making them better as well. The idea here is to really secure the entire stack, from the infrastructure to the models themselves, all the way up to the applications, or the agents, and to give our customers the confidence that yes, we do have AI agents, they can do a lot of wonderful things, but they can also do them securely.
>> I love that we can bring the whole Microsoft security platform, and the Microsoft platform, to bear on this new agentic journey.
>> Thank you. Thank you so much, Innocent, for sharing all this with me. You are my favorite product manager in all of Microsoft. Will you come back and join me again when we have more agents and more to talk about?
>> Absolutely. Always love to talk to you,
Ben. Thank you.
>> Thank you, my friend. Until next time. Anybody and everybody watching this, ask us questions in the chat; Innocent and I will be reviewing them. Thanks again, everybody.
>> Cheers. Bye-bye.
>> Bye.
In this episode, we explore the rise of AI agents and their transformative impact on cybersecurity. Innocent shares real-world use cases from Microsoft’s Security Copilot, including how agent-driven automation is emerging as a force multiplier and helping customers address their cybersecurity challenges. We’ll also tackle the pointed questions—risks like hallucinations, ethical concerns, and the regulatory landscape shaping the future of AI in security. Whether you're a security professional, technologist, or AI enthusiast, this conversation will leave you informed, inspired, and ready for what’s next. #microsoftreactor #learnconnectbuild [eventID:26371]