[Music]
Please welcome Amanda Silver.
[Music]
Hello world.
I'm Amanda Silver. I'm delighted to be
here today on this beautiful day in
sunny San Francisco at the GitHub
Universe Conference, the conference that
is by and for developers. So today we're
going to build an intelligent app from
scratch and deploy it to production. And
I really just want to jump in quickly. In 2021, we launched Copilot as the world's first AI pair programmer. Last year at Universe, we added model choice and we used the word "agent" for the first time. And we've been pushing the frontier of AI across multiple modalities, advancing the state of the art in machine learning to make Copilot more helpful, more intuitive, and more powerful for you. What we're seeing is that agent mode and the GitHub Copilot coding agent are really driving up engagement and productivity. In fact, 80% of new developers on GitHub now use Copilot within their first week, showing that AI is becoming an absolutely essential part of the developer experience. Developers are using Copilot right in their code editor so that they can stay in the flow, and our goal is to make Visual Studio Code the AI-native code editor. Earlier this year, we open-sourced the Copilot chat extension. And today, you saw what's next: the agent sessions view bringing you custom local and remote agents, Codex agents, and planning mode. All of these tools together are making GitHub Copilot essential for building cloud-native applications. So with that, let's build an app.
Modern apps are cloud-native. They're agentic. They're self-improving. And they're kept low-maintenance by specs that define, generate, verify, and operate the systems. Specs become the single source of truth that agents, pipelines, and platforms follow, turning your strategy and your implementation recommendations into repeatable, auditable automation. I really love this quote from Andrea Griffiths from GitHub: "Give a dev a code completion and they'll merge once. Teach a dev to wield an AI coding agent and they'll empty the backlog before the coffee cools." So, I'd like to welcome Shayne Boyer on stage to show us how he uses Spec Kit and all of the MCP tools that we're bringing to our experience. Shayne.
[Music]
All right, let's get to it.
>> Let's get to it. Now, the one thing that has to work is my fingerprint.
Okay, that worked. Great. So, here we've got our Octopets app. It's the new AI-native version of Octopets that we've been working on: a great, very clean app with some cool little features. We've got our great little rabbit, Milk.
>> He had to do that. That's my bunny.
>> You're welcome. All right. So what we used to develop this app is, as you mentioned, Spec Kit: spec-driven development. This is the open-source project we've been working on with GitHub to do spec-driven development, instead of just vibe coding our way toward hoping we get the app we're looking for. So let's jump over into VS Code and see how we did this. Now, Spec Kit is a great way for us to define what we want to build in an app or a feature. And one of the latest features here is that when we run Spec Kit, we get these great little quick picks to get what we need. Now, I want to start with the constitution, which is kind of our governing principles around what we want to build into our app, the guiding guidelines for what we're building.
>> So, they're kind of like your best practices, but then you can also share them with the team.
>> Yeah, I love it because it's kind of a template, but it also sets the principles of what we're trying to build without having to put a lot of stuff into our prompt, right? So, what I want to make sure of is that we are going to be deploying this to Azure eventually, so I'm setting some guidelines here. Right out of the gate, you'll notice that I'm going to be using the Azure CLI. I'm going to be using the Azure MCP, making sure that I'm getting those best practices for codegen around the components I might be using when the prompt gets put into Spec Kit. I'm also making sure that if I have any agent-driven stuff going on, any models that I want to be using, I'm using the Azure AI Toolkit MCP tools and getting that best codegen, too.
>> Now, Spec Kit isn't specific to Azure. If I was using a different cloud, I could actually specify what I wanted to do.
>> Sure. If you have any MCP tools that you want to use, you can just put those right here in the constitution file, making sure that when you run your prompt, those will be invoked. And I'm going to put some other things in here too. I'm going to be using Aspire, and other principles that I want to make sure get invoked.
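(For readers following along: the constitution is just a markdown file in the repo. A rough sketch of what an entry like the one on screen might look like; the file path and the principle names are assumptions, not the exact demo content:)

```markdown
<!-- .specify/memory/constitution.md (illustrative excerpt) -->
## Principle: Azure-first tooling
- Use the Azure CLI for all resource operations.
- Invoke the Azure MCP server for code-generation best practices on any
  Azure component referenced in a spec.
- For agents and models, use the AI Toolkit MCP tools and follow their
  codegen guidance.

## Principle: Architecture
- Orchestrate services with Aspire; keep infrastructure as code.
```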
So when I run the actual /specify command with my prompt, "I want this Airbnb-style app for Octopets," it makes sure we're following those principles. Now, we did that already. Obviously, we've got some time constraints and we want to do this fast, so we've already had our user stories generated by the /specify command, which is great. This is really cool: it generates all of the user stories we're looking for based on our prompt of "give me an Airbnb-style app for our pets."
>> Really great.
>> So it expanded from your single prompt into user stories, and then you're working from there.
>> Yeah. I mean, I could go into a sandwich shop and say, "Give me a sub," and hope I'm going to get a sub. Or I can say, "Here's all the things that I want," and it's going to give me a much better result, right? Same idea.
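(For reference, the flow being demonstrated is driven by a handful of Spec Kit slash commands in Copilot chat. A rough sketch of the sequence; the prompt text here is illustrative, not the exact prompts used on stage:)

```text
/constitution Add our Azure-first principles: Azure CLI, Azure MCP, AI Toolkit MCP, Aspire
/specify Build an Airbnb-style app for pets (Octopets): owners can list venues and book sitters
/plan Follow the constitution I defined; use the recommended MCP tools for every component
/tasks Break the plan into small, individually testable tasks
```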
>> Now, the next thing I can do is go ahead and run /plan. When I do the plan, I could actually put in "I want to use the Azure MCP," put in Aspire, or maybe say I want to use React and put that in here too. But since I've already put that in the constitution, I can just say "follow the instructions in my constitution," and it gives me the results, right? And that's exactly what it did. So let me go over and look at the results of the plan that got run here. We did our plan summary. You can see, going to the top, that it planned out everything: it gave us our feature overview and the technical content. It did have a couple of things it needed some key decisions on, and we put those in there, which is really good. And it generated all the files we needed as part of our plan and research. Now, what was interesting here is the research file. If I just open the preview and make it a little bigger, we can see that it did actually use those MCP tools we were talking about, and it gave us model recommendations, because we know we're going to be using agents for maybe a chat feature or some other recommendation things. It says, "look, use GPT-4.1 mini," which is great. It gave us some options around what we might do if we put it into Azure AI Foundry for production, and it also gave us some alternative considerations. So it did invoke those MCP tools as we expected when we generated our plan.
>> Now, the final thing we want to do is turn that plan, that research, and that spec into individual tasks. When I run /tasks, it takes all of those things into consideration and generates the individual steps that we actually need to run. So when we run the tasks, it says, "look, I'm going to read all the things we've generated as part of our prompts," which is really good. And then we scroll down here and it's broken up into the individual phases that we have going. Now, if I look at the actual tasks that were built, it's just a lot of individual steps for everything it has to do: here's my front end, and all of that.
>> That's a lot of manual, tedious work that you'd otherwise have to do yourself, right?
>> Yeah.
>> So, what I actually did is take advantage of another MCP, the GitHub MCP, and I said, "Hey, friend, can you create issues in my repo for each one of these tasks?" That invoked my GitHub MCP, and if I look at my GitHub tasks here, it created issues in my repo for each one, which is really cool. The better part is that I can go over here and assign one to my coding agent. It's going to kick off that "create backend" task for me while maybe I'm doing some frontend stuff. Actually, I know I'm not going to do it; I'm just going to assign it to another agent to do that work as well. And then we can see those things being created, and it's going to check them off one by one as I'm moving forward. So, I'm taking advantage of MCPs, I'm staying inside of VS Code, and I'm having all of that do the work for me.
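(If you wanted to script that step instead of prompting the GitHub MCP, the same result is a few calls to the GitHub REST API. A minimal Python sketch, assuming a Spec Kit tasks.md of checklist items and a GITHUB_TOKEN with repo scope; the repo name and file path are hypothetical:)

```python
import os
import pathlib

import requests

REPO = "myorg/octopets"  # hypothetical repo
URL = f"https://api.github.com/repos/{REPO}/issues"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Turn each checklist item from the /tasks output into a GitHub issue.
for line in pathlib.Path("specs/001-octopets/tasks.md").read_text().splitlines():
    if not line.lstrip().startswith("- ["):  # only checklist items become issues
        continue
    title = line.lstrip().lstrip("-[]xX ").strip()
    resp = requests.post(
        URL,
        headers=HEADERS,
        json={
            "title": title,
            "body": "Generated from the Spec Kit task list.",
            "labels": ["spec-kit"],
        },
    )
    resp.raise_for_status()
    print("created:", resp.json()["html_url"])
```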
>> Awesome, Shayne. Thank you so much.
>> All right,
>> Cool. All right, so Spec Kit brings spec-driven development to coding agents like GitHub Copilot. It also supports Claude Code and Gemini CLI, so even if you're not using GitHub Copilot, you can use it with those tools as well. What it does is turn specs into living, executable artifacts that guide, evolve, and break down all of the work, just as Shayne showed you. He also showed you the Azure MCP server, which we're announcing today is generally available. It securely connects Azure services with AI tools like GitHub Copilot, giving you real-time data about your resources, and it lets you do resource management, infrastructure as code, and all of your troubleshooting through prompts in your IDE. It makes it much easier to build, manage, and deploy your cloud-native applications on Azure without having to hunt and peck through the portal.
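(To try the same setup, the Azure MCP server can be registered with VS Code through an MCP configuration file. A minimal sketch of a .vscode/mcp.json, assuming the npx distribution of the server; check the Azure MCP docs for the current package name and arguments:)

```json
{
  "servers": {
    "azure": {
      "command": "npx",
      "args": ["-y", "@azure/mcp@latest", "server", "start"]
    }
  }
}
```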
So, we've got our app up and running, as we just showed you. Now, let's turn to adding intelligence and automation to it. You know, AI agents aren't just another application; they really play by different rules. Traditional apps are built on fixed logic and updated manually. They follow rigid, predefined steps and control flow, and you have to manually describe the infrastructure. AI agents work on probabilistic models, not static control flow. They reason, they plan, they adapt as they go. They are adaptive by design, you focus on the intent, and what you build is the signal loops and the guardrails, not just the endpoints. To show you what I mean, I'm going to invite Rong Lu on stage. You might have been using agents in your development flow for months now, but you're not necessarily familiar with how to build agents quickly. That's what we're going to show you. So, welcome, Rong Lu.
[Music]
[Applause]
[Music]
Hello everyone. All right, let's build an AI agent.
>> Let's do it.
>> All right, the first thing I'm going to need is a great model, because models are the foundation for powerful agents. I can come over here to the AI Toolkit model catalog to explore different models, including those that we can deploy to the Azure AI Foundry service with a single click, or we can download a local model with a single click as well. Either way, I can then hop over to the model playground to test out each model's capabilities. At this point, I still don't know which model would be great for my agent, so I'm going to come here and ask Copilot for advice.
I'm going to say: which Foundry models do you recommend for a sitter agent that I want to build that can suggest pet sitters in the area? We can see here that Copilot is calling the AI Toolkit tool for the latest model information. And now, considering the agent that I want to build, Copilot is giving me a few model options with very detailed information about each model's cost, context window, and use cases. With that information, I can make a model choice really, really fast. It sounds like the top model Copilot is recommending is GPT-4.1, so I think we're good to give that a try.
Now let's build an agent. And again, I'm prompting Copilot to help me create an agent.
>> So you might never have built an agent before, right? And this is actually going to allow you to just input that as a prompt and then
>> yeah, create a full-fledged working agent right here.
>> All right, let's see.
>> Yeah, this is again using the AI Toolkit's tools, and we can take a look at the code that Copilot generated for us. We are using the new Microsoft Agent Framework, which allows us to write a single agent or orchestrate multiple agents really, really easily. Down here we can see the definition of my agent: right here are my system instructions and a list of tools that Copilot created for me. I even asked Copilot to enable tracing for my local runs so we can observe exactly how our agent is performing. So with that, let's turn on a local trace collector.
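(The generated code follows the Agent Framework's chat-agent pattern. A minimal Python sketch of the same shape, assuming the agent-framework package and an Azure OpenAI deployment configured via environment variables; the tool and instructions are illustrative, not the exact code Copilot produced:)

```python
import asyncio
from typing import Annotated

from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential


def find_sitters(city: Annotated[str, "City to search for pet sitters"]) -> str:
    """Illustrative tool: look up available sitters (stubbed data)."""
    return f"Sitters available in {city}: Sam ($25/hr), Priya ($40/hr)"


async def main() -> None:
    # The chat client reads endpoint/deployment settings from the environment.
    agent = AzureOpenAIChatClient(credential=AzureCliCredential()).create_agent(
        name="SitterAgent",
        instructions="Suggest pet sitters in the user's area using the tools provided.",
        tools=[find_sitters],
    )
    result = await agent.run("Find me a dog sitter in San Francisco")
    print(result.text)


asyncio.run(main())
```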
All right.
All right, let me move this up so we can run this agent.
>> Your mouse. There you go.
>> And test it out.
>> All right. Or without tracing. But we can at least test it out. Yeah.
>> There you go.
>> So our agent is running. Let's look for a dog sitter in the San Francisco area. What's happening here is that our agent is making a call to the model endpoint I have in Foundry, and it's going to look for the right tool to call and then come back with the right information about sitters in the area. Great. And in the meantime, we just received a trace that was collected as part of our run, and we can see the call to the GPT-4.1 model and the tool call, as expected. Awesome. So now we have a single agent created, and I did the same thing for another agent that recommends pet venues. Now I want to scale up: I want to orchestrate both agents into a workflow. So again, I come back to Copilot to help me with that. Copilot can not only create single agents but can orchestrate multiple agents as well.
Let me get to the top of this. This is the prompt I sent to Copilot. And let me get this visualizer up and running. All right, let's get the workflow started. Now we're starting a listing agent and a sitter agent, both on my local machine, and then the workflow is kicked off. We're testing a few user queries, and we're seeing that each agent, and sometimes both agents, are going to be invoked as part of this workflow. Got it. And if we take a look at the code generated by Copilot, it's using the Microsoft Agent Framework as well, but this time, instead of a single agent, it's using a workflow builder so we can orchestrate multiple agents together.
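(A rough sketch of that workflow-builder pattern in Python, again assuming the agent-framework package; the agents are the ones sketched earlier, and exact method names may vary between releases:)

```python
import asyncio

from agent_framework import WorkflowBuilder
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity import AzureCliCredential


async def main() -> None:
    client = AzureOpenAIChatClient(credential=AzureCliCredential())
    sitter_agent = client.create_agent(
        name="SitterAgent", instructions="Recommend pet sitters in the requested area."
    )
    venue_agent = client.create_agent(
        name="VenueAgent", instructions="Recommend pet-friendly venues nearby."
    )

    # Route the query to the sitter agent first, then on to the venue agent.
    workflow = (
        WorkflowBuilder()
        .set_start_executor(sitter_agent)
        .add_edge(sitter_agent, venue_agent)
        .build()
    )
    result = await workflow.run("A pet-friendly cafe and a sitter in San Francisco")
    print(result.get_outputs())


asyncio.run(main())
```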
Now we have vibe-checked a few user queries, and things are looking good. But we want to do a broader assessment of the quality of agent responses, so it might be a good idea to put an evaluation in place.
>> Yeah. So evaluations are kind of like unit tests, but for stochastic processes, right? So you can actually check the performance to see if the agent is doing what you expect it to do.
>> Right. Exactly. So again, coming back to Copilot. You see where this is going: I'm relying heavily on Copilot to do the job for me. I simply said "add evaluation to my agent," and that is the whole of my prompt.
>> So you don't need to know anything super specific about the evaluation framework.
>> Yeah. And I don't have metrics defined, I don't have a test dataset, but that's fine, because Copilot is going to guide me through the whole process. It looked at everything in my workspace, suggested metrics I can use, and even went ahead and generated test data for me, which we can view right here in Data Wrangler. This is a list of user queries, generated by Copilot, that's relevant to my agent's use case. So with that data and this Python script that Copilot generated for me, powered by the Azure AI Evaluation SDK, I can now run evaluations offline on my local machine, or I can integrate this with GitHub Actions so that an evaluation run is triggered as part of my CI/CD pipeline on every commit. I can see my results here. All right, with all that work done and evaluation in place, I have confidence to check in my changes.
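(A minimal sketch of what such an evaluation script typically looks like with the azure-ai-evaluation package: a JSONL test set like the one shown in Data Wrangler, plus a judge model. The file names, the evaluator choice, and the model config values are assumptions:)

```python
from azure.ai.evaluation import RelevanceEvaluator, evaluate

# Judge-model configuration; values here are placeholders.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "azure_deployment": "gpt-4.1",
    "api_key": "<key-or-use-entra-id>",
}

result = evaluate(
    data="eval_queries.jsonl",  # rows with "query" and "response" columns
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    output_path="eval_results.json",
)
print(result["metrics"])
```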
Finally, let's see that in action and test this out in our application, to see if everything works. All right, I'll come over to the chat, and I'm going to ask for a cafe and a sitter recommendation in the San Francisco area for next week, since we're here. And I do have a budget limit of around $30. So let's see if the team of agents behind this chat experience is going to give us good recommendations.
This kicks off a workflow that involves two agents calling model endpoints.
>> All right, so we got a response back. Looks like there are a number of good pet-friendly cafes in the area. And our agent is also telling us there are no sitters available under my budget, so I guess we have to bump that up to a higher number. All right.
>> Yeah. Great. So our agent is able to find someone who charges $40 an hour, and now I have a sitter. Perfect. All right.
>> That's what I wanted to show you.
>> Thank you, Rong. Thank you.
All right. So, what she just showed you was the AI Toolkit for VS Code, and it really helps you do generative AI development. It helps you embed models and workflows directly into your developer inner loop. You can discover and explore local or remote models, author and evaluate agents and multi-agent workflows right in your IDE, integrate them seamlessly into your AI applications, and ultimately deploy them to Azure and Azure AI Foundry with a unified VS Code and GitHub Copilot experience. She also showed you the Agent Framework, an open-source toolkit to build, wire up, and level up AI agents. It blends the best of Semantic Kernel and AutoGen into one flexible runtime, so you can prototype locally and experiment across clouds without having to rewrite everything.
Now, today we are announcing a public preview of GitHub Copilot prompt-first agent development, so you don't actually need to know a ton about all of these new frameworks to be able to build agents into your application. She also showed you Azure AI agent evaluation and the Azure AI evaluation GitHub Action. That allows you to test your AI agents locally, as you're doing development in your inner loop, but it also lets you put them into your CI/CD loops as well, so that you can catch issues and improve quality before deploying to production. It supports both single-agent and multi-agent comparisons, and it's really easy to install directly from the GitHub Marketplace. So... somebody's super excited about that in the audience.
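(As a sketch of the CI/CD piece: a workflow that runs the generated evaluation script on every push. The script path and secret name are hypothetical, and the marketplace action mentioned above would replace the hand-rolled steps here:)

```yaml
# .github/workflows/ai-eval.yml (illustrative)
name: AI agent evaluation
on: [push]
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install azure-ai-evaluation
      - run: python evals/run_eval.py  # the script Copilot generated
        env:
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
```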
So, the app is live, which is great, but most incidents still take hours to resolve. That really slows your recovery, it slows your time to remediation, and it ties up your SREs with really routine fixes. Today, a ton of operational issues require an on-call engineer to spend hours investigating and remediating a single incident, and that slows down your MTTR. We've spent a lot of time listening to SREs, and training our own, and we understand the pressure they're under. It's a really high-stakes, high-pressure situation: your site is down, there's a lot of pressure on you, and how do you actually resolve it? You have to reduce your overall downtime, you have to improve your time to remediation, and ultimately we know it's not awesome to be the dev on call who gets woken up in the middle of the night to deal with a live-site incident. So what we're looking for is a solution that gives you less toil, fewer 2 a.m. alerts, and more time to focus on being a meaningful engineer. What if you had a super-knowledgeable IT person who never sleeps, is available 24/7, can instantly analyze thousands of system metrics and logs, and can automatically fix the common problems in your solutions, often without needing human intervention? So what I'd like to do is bring Shayne back on to show us how he's using the Azure SRE agent as that super-knowledgeable agentic partner. Shayne.
>> Let's do it.
>> All right. Who likes to get woken up in the middle of the night to fix something?
One person.
>> Night owl.
>> Night owl. All right, I want your phone number. I'll call you. All right. So here we've got our SRE agent. We've deployed our Octopets app, and Rong came out and built some agents that implement all that stuff. So now we have to make sure that if it breaks, we don't call her.
>> All right. So we've got this all set up here, and it works a lot like a Copilot agent, right? I can chat with it. I can set it up with questions, say, "Hey, please monitor our app and our health endpoints," and it will set up an agent for us to do that. We can manage our incidents in any of our current platforms: we've got PagerDuty, Azure Monitor, ServiceNow, whatever you're working with in your system. So we can set that up. We can set up response plans, and I've set up my Octopets response plan, of course, right? We can set it up for all incident types. I'll hit next.
>> So when we first started building this, we built it for our own SREs working on our Azure services. We took the guidance that we had for those SREs and codified it into an agent, and that's basically what you see here, kind of the realization of that.
>> Correct. Correct. So we'll look at our plan again. We had to hit a little refresh, no problem. Next here, and we'll hit skip. So here are the instructions that we're going to put in our execution plan. Basically, if you or I got called and something went wrong, what are we going to do? We're going to follow a set of steps to see what happened: check this, check our scaling, make sure the network cable's plugged in, stuff like that. So if you can type your execution plan out in a runbook, you can basically set up these agents. And that's what we've done here.
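(An execution plan is essentially a plain-language runbook. An illustrative excerpt, paraphrasing the kinds of steps shown on screen rather than the exact demo plan:)

```markdown
## Octopets response plan (excerpt)
1. Check the app health endpoints; note any 5xx responses.
2. Query container metrics for CPU and memory pressure.
3. Compare deployed infrastructure against the Bicep files; flag any drift.
4. If resolution requires scaling, attempt it within approved limits.
5. Otherwise, open a GitHub issue with findings and proposed fixes.
```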
>> So I'm just going to scroll through somewhat quickly, because the most important part is here. Hey, maybe we can check these particular commands, see if we've got some CPU or memory issues going on. You know, it's what I'd tell my agent, my actual human agent:
>> Go check this out.
>> Your actual SRE: did you check for 500s? Things like that. If you find some drift, check these here; check our Bicep configuration. Right?
>> And in the end, let's go ahead and create a GitHub issue that we can handle ourselves later on; maybe engineers will have to go look at that. So that's exactly what we've done here: we've set up how we respond to those things. Now, the good thing is our SRE agent knows all about our resources. We've tied it to what's been deployed. It knows about the Azure resource groups; it actually knows where our code is, so it can scan all that. It has tons of knowledge to use to find out what's happened. So if I look at our incident management here, look at that. Crazily enough, we have a ton of things going on. Looks like it's triggered some 500s. Let's expand this page. This is kind of an audit of everything that's happened as it's been checking for what we're actually looking for in our runbook, checking our CPU.
>> It's kind of like an incident management log, basically doing the first line of investigation, but it's also interactive. You can ask it questions.
>> And it's not doing anything magical. We don't want magic happening here. It's actually logging all of this exactly where we want to go and look: what did it try to do, what did it find? It says, "look, we're finding out-of-memory exceptions," right? It's trying to do the things we asked it to do: look at our subscription, try an automated resolution by increasing our CPU. Whether it can or cannot do that, it'll continue to log this. And what we wanted it to do is respond as we told it to: can you please log an issue for us so we can go look at it? Well, cool, look, it found this, and now it's going to create that log on behalf of K, who happens to be our actual SRE person here, but it tags it with a nice little "SRE agent created" label, so we know this was in fact automated. So this is great. Now it's going to give us what it found.
>> What are some of the errors we've got going on here? It's nice and detailed, more than my agent might probably type, like "there's a problem." So we have some good proposed code fixes here, and some IaC drift, which is great. So what I'm going to do is go look at this issue, and some other issues we've got going on. And what might I want to do? What would you do?
>> I would assign this to Copilot.
>> Right. Now, we've got some custom agents also assigned here, and I've created an SRE follow-up agent.
>> Oh, awesome.
>> Right. What this means is I've told it, in my custom instructions here, to just address what's been filed: don't try to fix some CSS problem, which probably does exist, but just focus on what's been reported. And I'm going to go ahead and assign this off to Copilot. It's going to go and do its work, which is great; it's exactly what I wanted it to do, and we'll see if it can go find the problems. Now, I'm going to look at one that we did in the past when we were setting this all up, where it went ahead and created our pull request. So we'll go look at one of those; let's go ahead and look at this one here.
And you'll see it did find some issues here. Look, we've got this right here. This is the coolest thing. All right. So,
>> we didn't want to... we want to go forward.
>> Oops.
>> We've got a little zoom issue here. Sorry about that, guys.
>> Let's go back here. So, what we've got here is: it said we have our min and max replicas set to three and fifty, so we can scale. If you've used our Octopets app, sometimes when we click on it, it says it's having a hard time loading. We had some scaling issues. This was previously unset; it was only going to one, right? So it fixed that. And it also found some code fixes here for memory-allocation problems, which is really cool.
>> Now, the cooler part is we can go into the view session and actually see what Copilot was doing here. It's going to load the actual work. You'll see here that it made those changes in our configuration, and it's going to do that as a proposal. We can look at the actual details here; scroll to the top. Magical. It's using the Azure MCP server that we've got configured. It's also using Playwright, so it can do before-and-after snapshots to make sure that we're good. And it's also using the GitHub MCP server to back-channel the reports into our repository.
>> So, it's fixing the problem, it's addressing it in config as code, it's testing the solution using Playwright as well, and then giving you the full fix,
>> right? All in a single place, using all the same tools that we've been using, both front end and back end.
>> Super awesome, Shayne. Thank you.
>> Thanks.
>> All right.
So, the Azure SRE agent really automates incident management and resource optimization. It reduces pager alerts, it proactively diagnoses and mitigates the issues you might have in your code, it helps you automate deployments and rollbacks, and it continuously monitors resource health. It's an incredibly valuable tool to help you drive platform engineering across your entire team.
Now, in 30 minutes we've used natural-language specs and MCP to boost productivity at the start. We incorporated multiple coordinating AI agents to handle complex tasks within our apps. We monitored, we traced, we guarded those AI systems in the wild, and we leveraged AI to help manage our own app in production. This is really the rise of AI agents, and it's driving a significant shift in the developer mindset and workflow. Traditionally, software development focused on writing code and managing infrastructure. Now the emphasis is on aligning your human intent with these autonomous systems, and developers are transitioning from prompt engineering to spec-driven development, where you have clear, testable specifications that guide agent behavior. In this world, the most valuable developers are those who can articulate intent really effectively and enable the agents to execute those tasks with precision. As the agents take over all of these routine coding tasks that are mundane and tedious, the kind of work you don't enjoy doing anyway, you can shift your focus to higher-level problem solving, to strategy, to the fun aspects of your job. And that's what's really going to reshape the role of developers, making you all orchestrators of these intelligent systems instead of mere coders. What we just showed you is not the future. This is all real. This is today. This is how we code now. So check it all out. Go back, deploy your new agents, and get back to the joy of building. And if you want to learn more, here are a few more sessions that we'll have at Universe over the next couple of days. Thank you.
[Music]
In this demo-driven session from GitHub Universe 2025, see how VS Code and the AI Toolkit turn GitHub Copilot into your AI workbench, where chat, tools, and AI agents adapt to your flow. We'll use Spec Kit to go from spec to working code, then build multi-agent workflows with the open-source Agent Framework and deploy to Azure AI Foundry with observability, tracing, and safety built in. From first commit to production, you'll learn patterns to ship secure, scalable intelligent apps and multi-agent workflows.

Speakers: Amanda Silver, CVP, Apps & Agents + 1ES GM, Microsoft; Rong Lu, Principal Manager, Microsoft; Shayne Boyer, Principal Program Manager, Microsoft

Chapters:
00:00 Welcome: Building an intelligent app from scratch
02:27 Demo: Building an app with spec-driven development using Spec Kit
09:44 Adding intelligence: How to build and automate AI agents
10:44 Demo: Building and orchestrating agents with the AI Toolkit
20:09 Solving operations: Managing live site incidents with AI
22:00 Demo: The Azure SRE agent automates incident response
29:15 Conclusion: The shift to developers as orchestrators

Watch more videos from GitHub Universe 2025: https://www.youtube.com/watch?v=P6Va0_KILi4&list=PL0lo9MOBetEFKNlPHNouEmVeYeyoyGTXC