All right, welcome to the best session of the day. We're going to be talking today about DevSecOps in the age of AI. I'm Marcelo Oliveira and I lead code quality and security products for GitHub. And today it's an immense pleasure for me to be co-presenting with my new best friend, Oz Wilder from Microsoft Defender for Cloud. We've been working on this for quite a few months now, and we're so glad to announce it to everybody today at Ignite.
A little bit about myself. I've been in application security for nearly a decade, and I would love to see a show of hands: how many people here in the room have security in their job descriptions? OK, quite significant. How about development? All right, it's kind of a mixed room. That's pretty good for a DevSecOps session.
So look, I'm super proud of the evolution we've had in the application security industry in terms of improving the signal-to-noise ratio of the findings and detections that we provide to our customers. But there is a sad reality to application security as a whole.
I would engage with customers quarter after quarter, year after year, and they would have the latest tools and the latest dashboards, and vulnerability fixes were just not happening. Every quarter, the vulnerability trends would keep piling up, going up and to the right. And the reality is that I think many times we forget that application security is actually a team sport.
If you're in application security and you don't have developers on your side, it's like walking onto a soccer field, for those who like soccer like me, a Brazilian guy, without your attacking players. You just have your defensive line, and you're not going to be able to score many goals. It's going to be a very hard game for you. So you want developers on your side.
And most of the time, when I would ask a security leader why they were not remediating these problems, the typical answer, the traditional answer, has always been: because developers don't care about security. Have you heard this before? Yeah.
Look, I've personally worked with thousands of developers throughout my career, first as a developer, then a development manager, then a product manager. And I'll say one thing: I've seen a different reality. I've seen that developers actually care about fixing their technical debt. As a product manager, I hear all the time how much my developers want to fix some of their technical debt, which of course we don't have at GitHub. And I think the problem is people like me and Oz, the product managers in the room, who are always so focused on how we deliver more value to our customers.
And the reality is that fixing vulnerabilities is not free. It doesn't matter that your security tools tell you, hey, I'm giving you this amazing suggestion. A suggestion is not everything. You still need to accept that suggestion, you need to verify that it conforms with your coding guidelines and that it doesn't break something else, and you need to test the hell out of these changes. Because it's not free, it's always a trade-off conversation. And that's why we see this great divide between development and security: because security and innovation velocity are seen as trade-offs. And I'm a firm believer that until we're able to much better automate the remediation of vulnerabilities, we're not going to be able to change this picture.
But here's the good news or the bad news, depending
on how you look at it.
There is a duality to AI.
And I don't think that there is any doubt that
AI is completely transforming the way that we develop software
today.
And it should be no surprise that it's going to completely transform everything that we know about application security as well. But what is it? Is it the hero of our story, or is it the villain? I love charts because I'm an engineer, so I'm going to use some charts here today. I apologize to those who don't like them, but I think they tell a good story and make it easy to explain the situation.
This phenomenon of finding more vulnerabilities than we're able to fix is not new. It's been happening for at least the decade that I've been working in AppSec, and I'm pretty sure it had been happening for decades before as well. And what is the result? As code accelerates, the gap continues to increase, and we see exponential growth in vulnerability backlogs.
So it's no surprise that when I speak to customers today, at large banks, healthcare companies, it doesn't matter who, I hear about hundreds of thousands of vulnerabilities, or over a million vulnerabilities, sitting in these backlogs. And they're asking: OK, where do I begin? How do I start? It's impossible; I don't have enough people to fix all of this. And here's the kicker: AI is here, and it's producing 5 to 10 times more code throughput than we had seen in a very long time.
Let me ask you one question. Do you think the code that an LLM produces is vulnerability-free? No. I'll tell you, it's not. We tested it and we have the statistics, because we use it all the time: it's still going to produce first-party vulnerabilities, it's still going to try to use vulnerable dependencies, and sometimes it's going to leak a secret. And if nothing else changes, imagine what's going to happen if we continue with the status quo and we start using AI.
This is what I like to call the collapse of application security. And I would even say that AI is just not going to stick. Nobody's going to be able to take advantage of all the speed that is being given to them, because it's not possible to do so responsibly. But I wouldn't be here if I believed that that's the trajectory where we're headed.
OK. The reason I've been at GitHub for about a year now is because I've seen the writing on the wall, and I've been in startups for a long time in my career. I think there is nobody better positioned than GitHub and Microsoft together to really deliver and change this picture at scale. I actually believe that AI is the hero of our story, and the reason is that agentic capabilities like the Copilot coding agent give us an opportunity that we didn't have before. Remember, we used to bother developers with all these findings, all these things, asking them to change their code. Now we can fix this before they even see it.
Imagine if we can put preventions straight into that agentic loop and fix these problems with the LLM before they surface for the developer. Then this 5 to 10x increase in code throughput is not going to translate into a similar increase in vulnerabilities. But that's not all. With agentic capabilities, we can also start automating the whole remediation process in a way that we couldn't before. And that's my dream, because if we do this, we can actually reverse this decade-long trend of increasing security debt and start burning it down. This is what I like to call the AppSec renaissance, which is what we're trying to drive here between Microsoft and GitHub Advanced Security.
And this is why I believe in this vision. Our vision is pretty simple: if security becomes part of the platform where software gets created, we can come to expect a new world where every piece of software is secure and high quality by default, whether it was written by a developer or by AI. And by the way, that first part about agentic prevention is not something we're still building. It's something we have already announced, two or three weeks ago at GitHub Universe, inside the Copilot coding agent loop itself. The Copilot coding agent now uses all of the code security capabilities that we have in the product today, under GitHub Advanced Security, to make sure that we fix dependencies, we fix vulnerabilities in the code, and no secrets ever leak out of that system, automatically.
And the agent is pretty good at doing this.
OK, so here's an example. The Copilot coding agent runs CodeQL against the workflow file it just created, and CodeQL flags: hey, you didn't set any permissions; you're being over-permissive in how you're defining this workflow. The Copilot coding agent says: amazing, thanks for letting me know. It figures out what permissions this file requires, updates the permissions, and runs the check again to make sure the problem is fixed.
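As a rough sketch, a fix of this kind usually amounts to adding an explicit least-privilege `permissions` block to the workflow. The file below is illustrative only; the workflow name, job, and steps are assumptions, not the demo's actual workflow:

```yaml
name: build
on: push

# Without this block, the workflow falls back to the default GITHUB_TOKEN
# scope, which CodeQL can flag as over-permissive. Declaring only what the
# jobs actually need resolves the alert.
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```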
But today we're going to be focusing on the second
part of that story, which is how do we accelerate
remediation and how do we start burning down those decade
long vulnerability backlogs?
But should we fix everything?
Let's say we have two decades of vulnerabilities sitting in our backlog.
Should we fix everything?
That would be irresponsible, wouldn't it?
Imagine the amount of change that would have to come
into the system and the likelihood that some of those
changes are going to create problems.
So I still believe that we need to prioritize vulnerabilities
and understand which ones really matter to me today.
And that's why we're announcing this integration here between Microsoft
Defender and GitHub Advanced Security.
Because every conversation I've had with application security leaders always boiled down to the same discussion: how do you prioritize? What do you use as the dimension of risk to prioritize vulnerabilities? Of course they mention severity, EPSS, and so on. But it always comes down to: I want to know if this is running somewhere. Is it in production? And if it is in production, what is the level of exposure if it gets compromised? Is it Internet-facing? What kind of data does it have access to? If the application is compromised, what's the impact?
And once I prioritize, now I can go and scale
remediation.
And that's what we want to show you guys today.
And we're so excited to have this live.
All right.
And to share more about the security operations perspective and how we're thinking about this from the security operations standpoint: Oz Wilder. Thanks.
So we're going to mention the concept of an application, and I want to break it down into the two parts that actually represent what an application is for us. On the one hand, we have first-party code, as we call it. This is code created by your team; you own it, you created this project. Then there is third-party code; in the industry we call it the software supply chain, code contributed from external sources such as open source. You need to scan all of those code parts and you need to protect them. On the one hand you have the dev team: they are looking more at the left side, and their visibility stops just before runtime. They don't know how this code is actually running and what is happening at runtime. On the other side is the security team. They have visibility into runtime, but their visibility to the left stops somewhere far from the true code owner.
Now let's talk about pain. Whenever I'm going to do some innovation, I always ask myself: what is the problem I'm going to solve? The problem space is so important. And we did this exercise again as we approached this project, for the SecOps persona, and in the next few minutes I'm going to talk about the SecOps persona. He has a couple of challenges. First of all, a huge amount of recommendations; Marcelo touched on that. We cannot handle everything we have in the system; our team is too small and there are too many of them. But even once we've decided, OK, this is super critical and I want to fix it, how do I go and fix it? Who do I talk to? The linkage between what I see and own and the developer who will actually contribute the fix, that line is broken.
Those two teams are not well connected today, so I
need to find the right owner.
And once I think I've found the right owner, and this is what usually happens, I send it out to those teams, and then comes what customers call the black-box experience. They send out the ticket and they wait, and then something happens there, but they usually don't know what. The ticket travels between different team members until it finally gets into the hands of the true developer who owns it and can fix it. During that time, the second persona, the poor guy, gets a call every day: what is the status? What is happening? Give me an ETA. And he has no clue. We're going to fix all of that.
So let's take a look at how the product works and how the SecOps persona is going to see it.
Speaking about seeing it, I will put on my glasses.
OK.
What we have here is the Defender portal, and you have here the security posture, the runtime environment, and all the insights. I have 97 critical issues reported by the system, and I want to go and investigate. My default go-to is the recommendation list, which is huge. As I said, I cannot go and handle that many critical items one by one. I want to filter that list in a proper way, and there is a very easy way for me to do that and reduce the noise.
If I switch to the attack path view, now I get to see the toxic combinations between vulnerable resources that represent a potential attack path. I will filter that list to make it a bit more concrete by looking at risk factors; in this case, let's choose Internet exposed and access to sensitive data. I can also add the type of entry point; I will choose container. And to make it interesting, let's choose a target, in this case a storage account. So now I have my list. It's shorter than before and very focused, relevant to the job that I can carry out.
I will choose the first one. Here we see the attack path. Let's analyze what we see. Here we have external access; let's see what this entry point is. It's a load balancer, and hopefully you can see that it has a publicly exposed public IP. So it's a publicly exposed IP that eventually leads to the crown jewel that you see here. There are additional entry points, and I want to get rid of those potential attack paths in one click. I will search for what we call a choke point. This is the point right here where, if I treat it properly, the attacker actually stops. They cannot continue and go all the way down to my crown jewel. And when I say crown jewel, let's examine why it was marked with this nice icon we have here. Interesting: we find a lot of PII and financial records stored in this storage account, so it's definitely sensitive.
OK, I'm ready to go and take a closer look at the container and the findings I have here. The list is pretty big, and all of this is coming from our runtime scan and the registry scan that we do in Defender for Cloud. I'll choose one of them. There are CVEs associated with this one; let's click on it. We have four CVEs related to this recommendation, and here is something new. The screen is not new, but this section is: your related GitHub alerts. If those CVEs were linked to the GitHub system, I would see a link and I would know that they're already tracked and someone is assigned to the issue. But they're not. So I want to create this linkage between what I see as a SecOps persona in the Defender portal and the GitHub system.
So I have here this tab, Remediation insight. Isn't it nice? Now we finally see the linkage between code and cloud, all the way. We created the mapping that goes from the code itself in the repo, through the build and ship pipeline, all the way to runtime. This linkage helps me map the most relevant repo and the owner of the issues I just discovered. There is another thing that is very interesting. Since this data lives on a graph, and everything is connected in the graph, I can link those vulnerabilities to all the impacted resources. So I know that there are two, and if I fix those vulnerabilities, I'm actually fixing them for both.
Now let's take action. I have here in the list: create GitHub issue. I click it and the magic is done. On the back end, the system switches to the GitHub side and moves the ticket over to GitHub with all the information that we just saw. We see that there is sensitive data; the context from runtime is now part of what we see on GitHub. It's exposed to the Internet, everything I filtered by. And most importantly, the exact four CVEs that we looked at are now here, bucketed together and ready to be picked up.
OK, now my job is not done. I want to click into the repo and take a closer look at what's inside the container. I'm going to see the application itself inside the container and the findings on that application. Thanks to the findings that came from GitHub Advanced Security, I have some very interesting and critical issues to handle. Code scanning discovered a couple of issues, and I can see those issues here in the findings. It's a pretty big list, and that list is already linked to the GitHub side; I can click and move over to GitHub to see the information and handle the case there. So we've covered the perspective of the SecOps persona, and I want to go back to the presentation and call Marcelo back to take us through the dev persona.
Thanks.
Good job.
All right, back to my demo now. Thanks, Oz, for covering the details of the SecOps persona. Let's first take a quick look at the AppSec perspective. Again, Oz talked about some of the pain; AppSec has this pain too. They really want to prioritize and focus the energy of developers on the things that matter most, but until today they lacked context. They didn't have the contextual details about runtime that can be used to focus developers on the most critical vulnerabilities. And let's also see how we can address remediation speed.
OK, so I'm going to switch to this demo very quickly, and no surprise: all that information that was available only to the security operations person is now also available within GitHub. Have you all used GitHub before? Are you familiar with the GitHub experience? OK, cool. So this is a new registry capability that takes advantage of our Actions pipelines, where we can very easily sign and attest artifacts that are moving through the pipeline. So I see everything that is coming out, and here we see the Zava web shop that Oz showed running in production, and we see that there is only one version actually deployed to the cloud. And we already see some of those parameters that Oz was speaking about.
Let's take a look and see how it actually works. When I look inside, I see this provenance attestation, which is cryptographic proof of exactly how this artifact was built, including the commit hash that produced it and the workflow file that was used for the build. But most important to us is the fact that it includes all the repositories that went into this container. So this is a many-to-many mapping of repositories going into artifacts; it's almost like an AirTag that allows us to trace the code all the way into an artifact, all the way to production, and back.
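To make the idea concrete, a provenance attestation is typically a signed JSON document along these lines. This is an illustrative, heavily abbreviated sketch in the style of SLSA provenance; every field value below is a placeholder, not data from the demo:

```json
{
  "predicateType": "https://slsa.dev/provenance/v1",
  "subject": [
    { "name": "ghcr.io/example/zava-web", "digest": { "sha256": "<image-digest>" } }
  ],
  "predicate": {
    "buildDefinition": {
      "externalParameters": {
        "workflow": ".github/workflows/build.yml",
        "sourceCommit": "<commit-hash>"
      },
      "resolvedDependencies": [
        { "uri": "git+https://github.com/example/zava-web" }
      ]
    }
  }
}
```

The `subject` digest is what ties the running container image back to the `resolvedDependencies` repositories, which is the many-to-many, AirTag-like mapping described above.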
OK.
And thanks to MDC, I now see that this was pushed to the Azure container registry, that it is deployed to production, that it has sensitive data, that it is Internet-facing, and that it's deployed to the East US region. But why do I care about this from an AppSec perspective? Because again, my goal is focus and remediation. As an AppSec person on GitHub, I would typically live here in the Security tab of GitHub. And here I see that I have 114 findings related to first-party code vulnerabilities. I also have the dependencies that Oz spoke about, in the context of Dependabot finding CVEs.
But what we want is to drive remediation with developers. What GitHub has always been is a collaboration platform for developers to collaborate with other developers on top of their code. And this platform has now been extended to also include collaboration between developers and agents, and between developers and AppSec. One of the constructs that we created for that collaboration is this notion of campaigns. From within a campaign, I can very easily filter down to the information that is most critical to me. So let's start with the obvious one: I'm just interested in the open vulnerabilities.
A typical application security person would look at severity, so let's start with the criticals and highs. And then there is also now this runtime risk data that came to me straight from MDC, and from here I can filter on Internet exposed and sensitive data. So I went from 114 down to 9. What's most interesting here is that I can publish this campaign. I can give it a name and a short description for my demo. How much time should we give our developers to fix these nine vulnerabilities? I think the sprint ends next week, so let's give them until the end of next week.
And I publish the campaign, and all of a sudden this campaign is available to me. As an AppSec manager, I can go and monitor the progress of this campaign. I can see how many have been closed, how many have been remediated, and so on.
But let's look at the more interesting side, which is the developer side of the house. I'm going to put my developer hat on and go into the actual repo where these vulnerabilities live. As a developer looking at this repo, I'm going to see the campaign that my AppSec team published show up right in my repo, where I work and look every day. But instead of seeing a list of tasks, what I'm looking at is a list of suggestions about how to fix each problem. And of course, because we have Copilot behind the scenes here, we have Copilot Autofix.
Let's take a look at one of those: the code injection vulnerability. For a developer who hasn't seen or understood the concept of code injection, it provides a very good description of what the issue is and why it's a problem, using the actual code that the developer wrote, and it provides a very simple suggestion: hey, you shouldn't be deserializing data using an eval statement in a JavaScript file. You should be using JSON.parse, which is the standard.
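To sketch the class of fix being suggested here (the function name below is illustrative, not from the actual repo):

```javascript
// Vulnerable pattern (CWE-94): eval() will execute any code embedded in
// the incoming payload, not just parse it.
//   function parseUserPayload(raw) { return eval('(' + raw + ')'); }

// Suggested pattern: JSON.parse only deserializes data and never runs it.
function parseUserPayload(raw) {
  return JSON.parse(raw);
}

// A malicious payload now fails to parse (SyntaxError) instead of executing.
console.log(parseUserPayload('{"user":"alice"}').user); // prints "alice"
```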
OK, but remember that this is not just a matter
of accepting this.
I need to make sure that this complies with my
coding guidelines and what my organization expects.
So do I want to fix these one by one? What if I decide I don't want to take them one at a time? Let me pick this one, plus these other critical vulnerabilities: uncontrolled command line looks dangerous as well, and reflected cross-site scripting.
And instead of trying to review and accept every one of those suggestions one at a time, let me just assign them to Copilot. What that did is assign Copilot to apply the autofixes for the selected alerts, and Copilot will open up a PR. Let's take a look at the pull requests. And indeed, Copilot has started working on a work-in-progress pull request here, iterating on the solution. It recognized that there are five vulnerabilities assigned to it.
You can watch the session as it goes. You can steer it and guide Copilot in different directions, or you can go get a coffee and come back later, which is what we're going to simulate here. OK, so let's take a look at this PR, which I have already created, because the run takes a little bit of time, about four to five minutes, and we don't have that time today.
So let's look at what Copilot did. Here's the PR, and I can watch the entire session, everything Copilot did to remediate these problems. And here you can see a summary: it was able to fix all five vulnerabilities, verify them, and make the change ready for me as a developer to push to production. If you look, only two files were impacted. In package.json, we added the escape-html dependency so that we can fix the cross-site scripting vulnerability that is down here. The potential command injection from unsafe execution is fixed here as well, in the exact file. So everything got resolved, every potential conflict got resolved automatically by Copilot and verified by the agent.
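The XSS fix boils down to output encoding. The real PR pulled in the escape-html npm package; the inline helper below is only an illustrative stand-in for what that kind of escaping does, not the package's actual implementation:

```javascript
// Minimal HTML-escaping helper (illustrative stand-in for escape-html).
// Ampersand must be replaced first so later entities aren't double-escaped.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Before: res.send('<h1>Hello ' + req.query.name + '</h1>');              // reflected XSS
// After:  res.send('<h1>Hello ' + escapeHtml(req.query.name) + '</h1>');  // inert text
console.log(escapeHtml('<script>alert(1)</script>'));
// prints "&lt;script&gt;alert(1)&lt;/script&gt;"
```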
Isn't that cool?
Imagine the scale this can reach: burning down those decades of vulnerability backlog by focusing on what matters and remediating at this scale.
And as a developer, all I need to do is
go back to the pull request.
If I'm happy with the solution, I can add my review, approve, and get ready to merge.
But remember, Oz actually found something on his side of the demo that was not covered by Dependabot, right? That was basically a dependency caught by the container scan. GitHub Advanced Security doesn't have a container scanner today, but MDC does, and when Oz scanned the container, he found something on the image that we didn't catch in the application packages. And what can I do here? Oz created this ticket for me; it's a new issue. I can go and assign it the same way I did before, assign it to Copilot, and Copilot is going to do all this work for me.
And look, I can jump straight to the summary; let's go and view the entire session again. It did a lot of work up front, because BusyBox is not something you can find in your container description file; it's actually part of the base image. So it went through analyzing what exactly happened, where this BusyBox came from, and identified that it was part of the application's base image. In summary, it decided: hey, this version is vulnerable; I need to get BusyBox to 1.35.0. It figured out the base image version that would solve those four vulnerabilities that were found. It did some application testing to verify that the app is compatible and still runs with this new image version: it tried to execute against the HTTP server, issued an HTTP request, and made sure it came up.
Did all the work for me.
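As a hedged sketch, this kind of base image bump is typically a one-line Dockerfile change; the image names and tags below are placeholders, not the demo's actual values:

```dockerfile
# Before: a base image whose bundled BusyBox carried the flagged CVEs
# FROM examplebase:1.2

# After: bump to a base image release that ships the patched BusyBox (1.35.0+)
FROM examplebase:1.3

COPY ./app /app
CMD ["/app/server"]
```

The hard part the agent did was not this edit itself, but tracing the vulnerable binary back to the base image and verifying the app still runs on the new one.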
So now I come back from my coffee and I can just say: OK, great, this looks good. I can see the test results, go back to my pull request, and as a developer add my review and approve it.
OK?
This is extremely powerful. This really short-circuits the way we think about remediation, to a level that just wasn't possible before. And to speak about this from the point of view of a customer who's been using it for a while, I wanted to invite Manuel from NTT Data up to the stage.
Thank you.
Thank you so much, Marcelo. As one of the world's leading IT services companies, we are constantly balancing innovation and security. Today we want to share how we have been exploring and securely adopting Microsoft's approach to securing code to cloud with AI-infused DevSecOps. The question we all face: how do we deliver IT faster without compromising security? Like many organizations, we encountered the challenge of balancing innovation and security. Traditional development cycles left gaps: vulnerabilities, lack of traceability, and manual processes slowing delivery. So we needed a solution that integrates security seamlessly from code to cloud. When Microsoft invited NTT Data to be an early customer for this platform, we accepted, and we tested it in real projects. Here is what we experienced. First, the native integration between GitHub and Azure for a frictionless development experience; AI-driven vulnerability analysis in real time; and automated compliance and security policies. Finally, there is global scalability for enterprise-grade delivery. Through this approach we have seen improvements in vulnerability remediation speed and in the secure delivery of critical applications, and this enables us to innovate with greater confidence. This is not just technology; it's a tool to make security part of the DNA of development, and we are proud to drive this transformation alongside Microsoft. So if you want to accelerate innovation without compromising security, now is the time. Welcome to the future of DevSecOps. Thank you.
So I want to start to close this session, and this nice marketecture slide actually tells the story. I will not read everything, but to summarize: we have the two main products, one serving the SecOps team and one serving the dev team, interacting with each other, creating context, and creating a streamlined process across them. But we didn't stop there. We gave you the agent capability to fix the code for you. And going back to the beginning, there was a dilemma: speed versus security. Not anymore. Thank you.
Modern development moves fast. Security teams are overwhelmed with alerts. But not all risks are equal. GitHub Advanced Security and Microsoft Defender for Cloud make DevSecOps seamless by connecting code to runtime context and unifying developer and security admin tools. Learn how to prioritize what's actually exploitable in production, reduce alert fatigue, and accelerate remediation with AI-powered fixes and agentic workflows.

To learn more, please check out these resources:
* https://aka.ms/ignite25-plans-AgenticDevOpsGitHubCopilot
* https://github.com/security/advanced-security?utm_source=brk112-ghas-cta&utm_medium=web&utm_campaign=ghignite25
* https://github.com/solutions/use-case/devsecops?utm_source=brk112-gh-devops&utm_medium=web&utm_campaign=ghignite25
* https://github.com/enterprise?utm_source=brk112-enterprise-cta&utm_medium=web&utm_campaign=ghignite25

Speakers:
* Oz Wilder
* Charlie Doubek
* Marcelo Oliveira
* Manuel Sanchez Rodriguez

Session Information:
This is one of many sessions from the Microsoft Ignite 2025 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com

BRK112 | English (US)
Innovate with Azure AI apps and agents, Microsoft Defender for Cloud, GitHub
Breakout | Intermediate (200)

Related Sessions:
BRK108 -- https://ignite.microsoft.com/sessions/BRK108?wt.mc_id=yt_
THR819 -- https://ignite.microsoft.com/sessions/THR819?wt.mc_id=yt_

#MSIgnite #InnovatewithAzureAIappsandagents

Chapters:
0:00 - Importance of Developer Collaboration in Security
00:05:25 - Rapid Code Growth through AI and Vulnerability Concerns
00:08:03 - Introduction of vision for secure-by-default software
00:09:31 - Focus shift to accelerating vulnerability remediation
00:10:10 - Announcement of Microsoft Defender and GitHub Advanced Security integration
00:18:10 - Analyzing container scan results in Defender for Cloud
00:22:03 - AppSec team leveraging runtime context for prioritization
00:28:08 - Assigning multiple vulnerabilities to Copilot for automatic remediation
00:29:44 - Scaling vulnerability remediation across codebases automatically