Here's
the harsh reality. Your AI agent is
completely useless for infrastructure
management. And you probably don't even
realize it yet. You probably tried
throwing ChatGPT or Claude at your
DevOps problems thinking AI will
magically solve your infrastructure
challenges. Maybe you got some generic
responses that looked helpful on the
surface, but when you actually tried to
implement those suggestions, you
discover the painful truth. The AI has
no clue about your environment, your
standards, and your constraints. Most
organizations are making the same
critical mistake. They're treating AI
like a search engine instead of building
it into their platform properly. They
ask vague questions, get generic
answers, and wonder why their AI
transformation isn't working. But here's
what changes everything. When you build
AI into your internal developer platform
the right way with proper context
management, organizational learning, and
intelligent workflows, you get something
completely different. You get an AI
system that actually understands your
infrastructure, follows your patterns,
enforces your policies, and delivers
solutions that work in your specific
environment. In this video, I'm going to
show you the five fundamental problems
that make most AI implementations
worthless and then walk you through
building the essential components that
solve every single one of them. By the
end, you will have a complete blueprint
for an AI powered IDP that actually
works. But before we dive into those
problems, here's something you need to
understand. Building an AI powered IDP
isn't a solo project. You will be pair
programming, reviewing code, and making
critical architectural decisions that
cannot afford miscommunication.
And that's where the sponsor of this
video comes in. Most of us prefer
purpose-built tools for whichever task
we might be performing, and pair
programming should be no exception. You
wouldn't use MS Word to write code, so
why use MS Teams or Zoom to collaborate
on it? Generic screen sharing tools are
fine for meetings, but they weren't
built for pair programming. UI elements
crowd out your IDE. Dialogue boxes and
nested menus disrupt your flow. They
burn CPU cycles you would otherwise use
for compilation and testing. Tuple is
purpose-built for developers. One click
to call. No persistent UI elements
blocking your code. Native on each
platform with a C++ engine that minimizes
CPU load. Share control by default with
intuitive annotations. So you don't have
to dictate code or do the annoying dance
of, hey, update that method. No, no, no
not that one. Line 37. No, no, no. Over
there. Try Tuple for free at
tuple.app/DOT
and use code DOT2025 for 90% off your
first two months. Big thanks to Tuple
for sponsoring this video. And now let's
get back to those five fundamental
problems.
Today we are exploring the essential
components you need to build when
integrating AI into internal developer
platforms. We are not just talking about
throwing ChatGPT at your infrastructure
problems and hoping for the best. We are
talking about creating a proper AI
powered system that actually understands
your organization, follows your
patterns, and enforces your policies.
So, let me start with a simple question.
What happens when you give an AI agent a
vague request without proper context or
guidance? Hm. Well, I will show you with
what seems like an obvious example that
actually demonstrates everything that's
wrong with how most people approach AI
in DevOps. Watch what happens when I
give the AI agent this seemingly
straightforward request. Look at this
response. On the surface, it seems
helpful, right? The AI gave me two
options for creating a PostgreSQL
database in AWS. But here's where it
gets interesting. This seemingly helpful
response actually demonstrates five
critical flaws that make most AI
implementations completely useless in
production environments. Let me break
down exactly what went wrong here.
First, notice what the AI did not do. It
did not ask me for more information
about my specific requirements, my
organizational standards, or my
environment constraints. There's no
workflow guiding this interaction
towards the right solution. Here's the
second major issue. The AI has no idea
which services are actually available in
my environment. Sure, it mentioned RDS
and EC2, but what if I'm running
everything on Kubernetes? What if I have
custom operators or Crossplane providers
already deployed? The AI should be using
the Kubernetes API to discover what is
available. But here's the catch. You
cannot do semantic search against the
Kubernetes API directly. If you want an
AI to intelligently find the right
resources for a given intent, you need
to convert those Kubernetes API
definitions and CRDs into embeddings and
store them in a vector database. I will
call those capabilities. By the way,
please watch that video over there if
you would like more info about vector
databases, since I will not cover them
here in this one. The third major issue:
this AI doesn't know anything about the
patterns my organization uses. It has no
clue about our naming conventions, our
preferred architectures, our deployment
strategies, or any of the tribal
knowledge that lives in our
documentation or wiki pages or code
repositories or Slack conversations or
anywhere else. If you want AI to be
truly useful in your organization, you
need to capture those patterns from
wherever they live. Convert them into
embeddings and store them in a vector
database. I will call those patterns.
The fourth issue: the AI is completely
oblivious to my company's policies. It
does not know that we require all
databases to run in specific regions for
compliance, that we mandate resource
limits on all containers or that we
prohibit the use of latest tags in
production. We need to handle policies
the same way as patterns. Capture them,
convert them to embeddings, and store
them in a vector database. But here's
the key difference. You also want to
convert those policies into enforceable
rules using Kubernetes validating
admission policies, Kyverno, or similar
tools as your last line of defense. I
will call those policies. And here's the
fifth issue. Context quickly becomes
garbage, especially when you're working
with massive amounts of data. If you
keep accumulating everything in the
conversation context, you'll end up with
a bloated mess that the AI cannot
effectively process. You need to manage
context properly by keeping only the
things that actually matter for the
specific task at hand. So what do we
actually need to solve those problems?
Three fundamental components: proper
context management, workflows, and
learning. Now, let me be clear about
what proper context management means.
It's not the constant accumulation of
everything that's ever been said or
done. That's a recipe for disaster.
Instead, it means starting with a fresh
context for every interaction, but
populating that context with all the
relevant data that specific task needs
and nothing more. Workflows should guide
both people and AI towards the right
solution instead of relying on
incomplete user intents and the AI's
unpredictable decision-making process. You
cannot just throw a vague request at an
AI and expect it to magically understand
what you really want. Then learning is
how you teach AI about your
organizational data, patterns, policies,
best practices, and everything else that
makes your environment unique. But
here's the catch. AI models cannot, and
I repeat, cannot learn in the
traditional sense. Everything you teach
it gets lost when the conversation ends.
AI is like a person with severe
short-term memory loss or like a
goldfish that forgets everything after a
few seconds. So teaching it everything
up front is a complete waste of time.
Instead, you should teach it only the
parts that are relevant to specific
situations based on the user's intent
the current workflow step, and other
contextual factors. Think of it as
temporary just in time learning. What
we're covering here is really the
combination of the subjects we explored
in previous videos. This is where we're
putting quite a few hard learned lessons
together into a cohesive system. Just to
be clear about the scope, this video is
focused on creation and initial setup.
We will cover updates and observability
in a different video. So, we'll explore
those concepts using the DevOps AI
toolkit project. Now, I'm not trying to
sell you on this specific project. Think
of it as a reference implementation, a
set of components that demonstrate what
you might need to build in your own
internal developer platform. All in all,
we will explore three types of learning
that are crucial for IDPs: capabilities,
patterns, and policies. We'll also dive
into context management and workflows.
And when we combine all those components
properly, we'll get a complete AI
powered internal developer platform that
actually works. So let's start with the
first piece of the puzzle.
So what exactly are capabilities?
Let me explain this concept because it's
absolutely fundamental to building a
system that actually understands your
infrastructure. So here's the thing. The
capabilities we need are already there.
The Kubernetes API acts as a single
unified control plane that can manage
resources not just inside the cluster
itself but also external resources in
AWS, Google Cloud, Azure, and pretty much
anywhere else you can think of. This is
crucial for two reasons. First, it gives
AI a single API to work with instead of
having to learn dozens of different
cloud provider APIs, tools, and
interfaces. Instead of the AI needing to
understand AWS CLI, Azure CLI, Google
Cloud CLI, Terraform, Pulumi, and who knows
what else, it just needs to understand
one thing, the Kubernetes API. Second
and this is equally important, by
controlling which API endpoints and
resource types are available in your
Kubernetes cluster, you're defining the
scope of what can and should be done.
You're not giving AI access to
everything under the sun. You're
curating a specific set of capabilities
that align with your organization's
standards and policies. But here's where
we hit the problem. The AI agent cannot
figure out which resource definitions
might match a user's intent. What's it
supposed to do? Go through every single
resource definition in your cluster
every time someone asks for something.
That would be insane.
There are potentially hundreds or
thousands of custom resources and there
is no semantic search capability in the
Kubernetes API. So here's the solution.
If you convert the relevant information
from the Kubernetes API into embeddings
and store them in a vector database
then the AI can perform semantic search
and actually find what it's looking for.
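Conceptually, that lookup can be sketched like this. It is a minimal, self-contained sketch: a real system would use a proper embedding model and a vector database such as Qdrant, and the capability names and descriptions here are illustrative assumptions, not the toolkit's actual data.

```python
# Minimal sketch of semantic capability search. `embed` is a stand-in
# bag-of-words vectorizer so the example runs without external services.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Capabilities": resource definitions converted to vectors and stored up front.
capabilities = {
    "sqls.devopstoolkit.live": "managed postgresql mysql database in aws azure gcp",
    "apps.devopstoolkit.live": "application deployment scaling ingress",
}
index = {name: embed(desc) for name, desc in capabilities.items()}

def search(intent: str) -> str:
    # Semantic search: rank stored capabilities by similarity to the intent.
    vec = embed(intent)
    return max(index, key=lambda name: cosine(vec, index[name]))

print(search("I need a postgresql database in aws"))
```

Instead of iterating over every resource definition in the cluster on every request, the heavy work (embedding the definitions) happens once at indexing time, and each query is just a similarity ranking.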
Instead of blindly iterating through
every resource definition, it can
intelligently search for resources that
match the intent. We'll dive deeper into
the semantic search mechanics later. For
now, let's take a look at some of the
data that's already in the database and
see how we can create embeddings and
push data into the system. Let me start
by listing the capabilities that are
already available in this system. As you
can see, we got 344 capabilities stored
in the database. Each one represents a
Kubernetes resource type with its
associated metadata, what it can do
which providers it works with, and a
description of its functionality. This
is exactly the kind of information an AI
needs to match user intents with the
right infrastructure component. Now, let
me show you a specific example by
looking at a database related
capability. Look at that. Perfect. This
shows exactly what AI gets when it
searches for database related
capabilities. It defines not just the
resource name but also the semantic
tags, supported providers, complexity
level, and a detailed description. This
is what enables the AI to make
intelligent recommendations instead of
just throwing random Kubernetes
resources at you. Now, let me show you
how we can add new capabilities to the
system. This scanning process is what
discovers all the custom resource
definitions in your cluster, analyzes
their schemas, and converts them into
embeddings and stores them in the vector
database. This is the foundation that makes
intelligent capability discovery
possible. Please watch that video again
if you would like more details about
scanning Kubernetes resources and
converting them to embeddings because I
already did it. I don't want to do it
again. So that's capabilities. Teaching
AI what infrastructure resources are
available but knowing what's available
is only part of the equation. The AI
also needs to understand how your
organization actually uses those
resources.
Now let's talk about patterns. Here's
something important to understand. AI is
already perfectly capable of using
Kubernetes, assembling solutions in AWS,
GCP, Azure, and handling other tasks
that are public knowledge. We do not
need to teach it those things. We don't.
What we need to teach are the things
that are specific to our company: the
patterns that represent how we do
things, our standards, our preferred
approaches, and our organizational
wisdom that isn't documented anywhere in
the public internet. AI already knows
how to assemble resources based on an
intent. That's public knowledge. But
patterns teach it how to assemble
resources according to your organization's
specific know-how. Maybe your company
always pairs databases with specific
monitoring setups or has a standard way
of handling ingress with particular
security configurations.
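An assembly pattern like that might be stored roughly as a record like the following. The field names and the naive keyword match are my assumptions for illustration, not the toolkit's actual schema; in the real system the match is done with semantic search against the vector database.

```python
# Illustrative shape of a stored organizational pattern.
pattern = {
    "description": "Databases are always paired with our standard monitoring",
    "triggers": ["database", "postgresql", "mysql"],
    "resources": ["sqls.devopstoolkit.live", "servicemonitors.monitoring.coreos.com"],
    "rationale": "Every database must ship metrics to the shared monitoring stack",
    "created_by": "platform-team",
}

def matches(intent: str, pattern: dict) -> bool:
    # Keyword overlap stands in for semantic search in this sketch.
    words = intent.lower().split()
    return any(trigger in words for trigger in pattern["triggers"])

print(matches("create a postgresql database", pattern))  # → True
```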
Those organizational assembly patterns
are what we need to capture. So where do
those organizational patterns live? They
can be scattered across existing code
repositories, documentation, wiki pages,
Slack conversations, or anywhere else you
store institutional knowledge. But
here's the problem. A lot of those
patterns aren't written down anywhere.
They exist in people's heads in the
collective experience of your team
members who know, and I quote, "how we do
things around here." So what's the AI
supposed to do with all those scattered
patterns? Should it go through every
single document, every Slack
conversation, and every code repository
every time someone asks for
something? That's not practical or
efficient. The solution is similar to
what we did with capabilities. We need
to identify the actual patterns first.
And let me be clear, not everything is a
pattern worth capturing. Then we create
embeddings from those patterns and store
them in a vector database. The logic is
exactly the same as with capabilities
but the sources of data are different.
Once the patterns are stored in a vector
database, AI agents can perform semantic
search to find the patterns that match
specific intents. Now, instead of
randomly guessing how to implement
something, the AI can follow your
organization's established patterns and
best practices. We will explore how to
use those patterns in AI workflows
later. For now, let's focus on how to
create and manage them. I will show you
how to capture patterns from people's
knowledge, from people's heads, the
stuff that isn't already there as written material.
The same logic applies when you're
extracting patterns from other sources
like documentation or code. So let me
start by showing you what patterns we
already have in the system. As you can
see, we currently have just two patterns
in the system. Each pattern has
triggers. Those are the keywords that
help the AI understand when to apply
this pattern. The patterns also track
which resources they recommend and who
created them. Now, let me show you how
to create a new pattern. This is where
we capture organizational knowledge and
turn it into something the AI can use. I
will specify AWS public services as the
capability. I will provide the same
answer to specify the infrastructure
types. I will keep all the suggested
triggers. I will specify an internet
gateway resource as the pattern
recommendation. I will ask the AI to
generate the rationale for me. I will
specify a team as the creator
and the pattern looks good. So I will
confirm it. And that's how you capture
organizational patterns from people's
knowledge. What we just saw was the
process for extracting patterns that
exist in people's heads. The tribal
knowledge and experience that isn't
written down anywhere. As I said before
the same process works for patterns
stored in documentation, Slack
conversations, or any other source. The
only difference is that you would need
an additional step at the beginning to
extract and identify the patterns from
those sources before you can define them
in the system. But patterns are just one
piece of the puzzle. The AI also needs
to understand what it's not allowed to
do.
Now let's talk about policies. While
patterns teach AI how to assemble
resources according to your
organizational knowh how policies are
about what values are allowed or
required in the fields of those
resources. For example, policies define
constraints like: all databases must
run in the us-east-1 region, container
images cannot use the latest tag, or all
pods must have resource limits defined.
Those are field level constraints that
ensure compliance and security. And
here's the key insight. Solutions like
Kyverno or OPA or Kubernetes validating
admission policies can enforce those
policies. They can, but they don't teach
AI or people how to do the right thing
from the start. Without policy learning
you end up with a trial and error
approach where you keep hitting
enforcement barriers until all the
checks finally pass. That's a brute-force
attack. That's inefficient. That's
frustrating. What we're building here
teaches the AI the policies up front so
it can create compliant resources from
the beginning instead of learning
through rejection. Now the process for
handling policies is mostly the same as
with patterns. You identify policies from
various sources, create embeddings and
store them in a database. But here's the
key difference. We can also convert
those policies into enforceable rules
using Kubernetes validating admission
policies, Kyverno, OPA, or
whichever policy implementation you are
using. This gives you both proactive
guidance for the AI and reactive
enforcement in the cluster. All of
that creates a powerful two-layer
system. The AI can use data in the
vector database to learn which policies
apply to a specific intent and create
compliant resources from the start.
while Kyverno and similar implementations
serve as the last line of defense, just
as they always have. Best of both worlds:
proactive compliance and enforcement
backup. We'll explore how to use those
policies in workflows later. For now
let's focus on how to create and manage
them. I'll demonstrate capturing
policies from people's knowledge, the
compliance requirements and constraints
that exist in their heads. And the same
approach works when extracting policies
from other sources like compliance
documents or existing policy
configurations. So let me start by
showing you what policies we already
have in the system. Look at it. Perfect.
These are great examples of policy
constraints. The first policy enforces
that Azure databases must run in the
us-east-1 region. And the second prevents the
use of latest tags in applications.
And now notice how each policy has
triggers, a rationale, and, importantly, a
deployed policy reference. That means
that those policies have been converted
into actual Kyverno enforcement rules.
Okay. So now let me show you how to
create a new policy. I'll create a
policy for AWS database region
compliance. I will specify AWS databases
as the target. I will keep all the
expanded triggers. I will ask the AI to
generate the rationale. I will specify a
team as the creator and I will choose
option two and specify specific
namespaces. Okay. So let me take a look at
the complete Kyverno policy that was
generated. Look at that. Look at that.
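A generated policy of this kind might look roughly like the following sketch. This is illustrative only: the actual output covers multiple services and database types, and the `SQL` kind and `spec.region` field are assumptions based on the demo, not the exact generated rules.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-aws-databases-us-east-1
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-database-region
      match:
        any:
          - resources:
              kinds:
                - SQL  # hypothetical database claim kind
      validate:
        message: "AWS databases must run in the us-east-1 region."
        pattern:
          spec:
            region: us-east-1
```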
The system generated the complete Kyverno
policy with rules for multiple services
and database types. Even if something
was wrong and we only got 80% of the
policy correct, that would still be
incredibly helpful. We could save it
update it ourselves or with another
agent and then apply it. For this demo
I will apply the policy to the cluster
directly. And that's the power of policy
learning. We captured organizational
compliance requirements from people's
knowledge, stored them in a searchable
format for AI guidance and automatically
generated enforceable Kyverno policies
for cluster level enforcement. Now the
AI knows to create databases in the
us-east-1 region from the start, and if
something slips through, Kyverno will
catch it. The same process works for
extracting policies from compliance
documents, existing policy
configurations, or any other source
where your organizational constraints
are documented. So, we covered
capabilities, patterns, and policies.
But there's still one crucial piece
missing, managing all this information
efficiently.
Here's the critical problem that can
make or break your AI-powered
infrastructure. Context management. What
exactly is context and why does it
matter so much? We are dealing with
massive amounts of data, hundreds of
Kubernetes resources, each with
potentially enormous schemas, plus all
our patterns, policies, user intents
and everything else. If you keep piling
all of this into your AI's context, it
quickly becomes garbage. And that
garbage gets compacted into even bigger
garbage until the whole system becomes
completely useless. This is a
fundamental problem with how most people
approach AI in infrastructure. They dump
everything into the context and wonder
why performance degrades, costs
skyrocket, and responses become
increasingly inaccurate. But wait until
you see the solution that eliminates
this problem entirely. And here it is.
Instead of building on top of previous
context, each interaction in this MCP
system starts with a completely fresh
context. The agent inside the MCP gets
exactly the relevant information it
needs for the specific task at hand. No
matter when that information was
originally fetched or created. No
accumulated garbage, no bloated context
windows, no degraded performance, just
clean, relevant data for each
interaction. That's what you need. And
here's a crucial optimization. Use code
not agents, to fetch information in
predictable situations. When you know
exactly what data you need and where to
get it, don't waste time and money
asking an AI to fetch it. Direct code
execution is faster, less expensive
more reliable, and completely
deterministic. Now, let me show you what
this looks like in practice. Here's an
actual prompt template used in the
system. This is the context for a single
step in the workflow. Think about it. We
might have accumulated thousands, tens
of thousands, or even hundreds of
thousands of tokens in previous
interactions, but all of that is gone,
wiped clean. Instead, this template has
placeholders that get replaced with only
the relevant data needed for this
specific step. Intent becomes the
enhanced user intent. Resources get
populated with the specific list of
Kubernetes resources and schemas that
might be involved in assembling a
solution and patterns get replaced with
the relevant patterns found during
semantic search in the vector database.
Not all patterns, just the ones that
matter for this particular request. The
prompt outputs clean JSON that can be
used along with other data in subsequent
workflow steps or anywhere else.
Whatever else the AI might have
generated gets discarded immediately. No
bloat, no accumulation, no context
pollution. This approach keeps the
system fast, cost-effective, and accurate
throughout even the most complex
multi-step workflows.
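The per-step templating described above can be sketched like this. The placeholder names and the helper function are illustrative assumptions, not the system's actual template.

```python
# Sketch of per-step context assembly: a fresh prompt is built from a template
# whose placeholders are filled with only the data this step needs.
STEP_TEMPLATE = """Intent: {intent}

Candidate resources (schemas):
{resources}

Relevant organizational patterns:
{patterns}

Respond with JSON only."""

def build_step_context(intent: str, resources: list[str], patterns: list[str]) -> str:
    # No accumulated history: every call starts from the bare template.
    return STEP_TEMPLATE.format(
        intent=intent,
        resources="\n".join(f"- {r}" for r in resources),
        patterns="\n".join(f"- {p}" for p in patterns),
    )

prompt = build_step_context(
    "Create a PostgreSQL database in AWS",
    ["sqls.devopstoolkit.live/v1beta1"],
    ["Pair databases with standard monitoring"],
)
print(prompt)
```

Because the template is re-filled on every step, token usage stays proportional to the current task rather than to the length of the whole conversation.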
Now let's talk about workflows. What
exactly are workflows in the context of
AI powered infrastructure management?
Workflows are structured,
semi-predictable sequences of steps
designed to accomplish something
complex. They break down big tasks into
manageable pieces, guide users through
decision points, and ensure all
necessary information is gathered before
taking action. In our example, a
workflow is a combination of fetching
information from various sources and
analyzing that information with AI. Each
step in the workflow can involve data
gathering, AI analysis, or both. We
might fetch information from users to
enhance their intent, pull relevant
patterns and policies from the vector
database, get current schemas from
Kubernetes clusters, gather operational
data, and source information from
anywhere else that's relevant. Then AI
analyzes all this collected data to make
intelligent decisions about the next
step. We have already seen workflows in
action when we were managing
capabilities, patterns and policies. But
this next workflow is more important
because it guides users towards the
right solution while leveraging all the
capabilities, patterns and policies we
create. Now, here's where everything
comes together. Watch what happens when
I make the same PostgreSQL request from the
beginning, but this time with all the
components working together. Did you
notice what just happened? Instead of
immediately suggesting a random PostgreSQL
solution, the workflow intelligently
recognized that my vague request needed
clarification. It asked targeted
questions to understand my specific
requirements, deployment preferences and
constraints. This is the workflow
guiding the conversation towards a
better outcome. The output from the MCP
server contains, among other things,
information about what should be the
next step in the workflow. And the next
step is which MCP tool it should call
next. So now I will provide the
clarification it requested. Look at it.
This is brilliant. The AI analyzed my
refined requirements and presented me
with multiple solutions. Notice how it
found organizational patterns and ranked
them higher. I can choose a golden path
in the form of the DevOps toolkit managed
PostgreSQL, or go with a custom solution.
The AI gives me options but it's up to
me to choose which one fits my needs.
Now I will select the top ranked
organizational pattern. Perfect. Notice
how the workflow is already enforcing
the policy we created earlier. It's
telling me that databases in AWS should
run in the us-east-1 region. That's our
policy in action guiding the user, me in
this case, toward compliant choices. Next,
I will specify basic configuration
options. Here's an interesting
observation. The question about which
Crossplane composition to use is an
example of a potentially missing policy.
We should probably create one that
instructs the system which compositions
to use for AWS, which one for Google and
so on and so forth. But that's a task
for another day. For now, let me ask the
AI to generate a
sample schema. Okay, now I
will skip the advanced configuration, and
I don't have additional requirements. And
this is the culmination of everything we
built. The agentic MCP has assembled the
solution by combining the workflow-guided
user interaction, capabilities
(available Kubernetes resources),
patterns (organizational best practices),
and policies (compliance requirements
like the us-east-1 region rule). It did
the right thing because it got all the
information it needed through the
structured workflow. Of course, it might
not have done the right thing if we had
missed providing sufficient patterns and
policies. The quality of AI's decisions
directly, and I repeat, directly depends on
the organizational knowledge we fed into
the system. And now I could save those
manifests for GitOps, review them
first, or do something else entirely
with the assembled solution. It's up to
me to choose whether to let the agentic
MCP deploy directly or handle the
deployment through my organization's
preferred process. For this demo, I'll
let it deploy directly.
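Under the hood, the tool-chaining we just followed can be sketched as a simple next-step map. The step and tool names here are hypothetical, not the MCP server's actual tools; the point is that each response tells the client which tool to call next.

```python
# Sketch of the workflow loop: each step names the next tool to call,
# so the client just follows the chain until a terminal step.
WORKFLOW = {
    "clarify_intent": {"next": "rank_solutions"},
    "rank_solutions": {"next": "configure_solution"},
    "configure_solution": {"next": "generate_manifests"},
    "generate_manifests": {"next": None},  # terminal step
}

def run_workflow(start: str) -> list[str]:
    # Follow the next-tool pointers until a terminal step is reached.
    steps, current = [], start
    while current is not None:
        steps.append(current)
        current = WORKFLOW[current]["next"]
    return steps

print(run_workflow("clarify_intent"))
```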
We built something that most
organizations only dream about, an AI
system that actually understands your
infrastructure and works within your
constraints. Here are the five essential
components that make this possible.
Capabilities teach your AI what
infrastructure resources are actually
available in your environment. No more
generic suggestions that don't match
your setup. Patterns encode your
organizational wisdom. The tribal
knowledge that transforms AI from giving
generic solutions to following your
specific standards. Policies ensure
compliance from the start instead of
learning through rejection. Your AI
creates compliant resources immediately.
Context management keeps your system
fast and accurate by starting each
interaction with a fresh context instead
of accumulating garbage. Workflows guide
intelligent conversations toward the
right solutions instead of relying on
vague requests and unpredictable
responses. And the result, AI that
deploys infrastructure correctly the
first time, follows your patterns
respects your policies, and actually
works in your organization. Most of your
work will be in creating patterns and
policies. The technical infrastructure
is straightforward, but capturing your
organizational knowledge and compliance
requirements? Ah, that's where you will
spend most of your time. As a side note,
if you choose to experiment with the
DevOps toolkit or build your own system
remember that this is an iterative
process: generate recommendations,
inspect them carefully, identify gaps in
your patterns and policies, and repeat. We saw a perfect
example of this earlier with the
composition selector. The AI did not
know which Crossplane compositions to
use for different cloud providers
because we haven't created that policy
yet, and it cannot infer that from the schema.
The system is only as good as the
organizational knowledge you feed into
it. But when you get it right, you will
have AI that truly understands how your
organization does infrastructure. Thank
you for watching. See you in the next
one. Cheers.
Discover why your AI agent is completely failing at infrastructure management and learn to build an AI-powered Internal Developer Platform that actually works. Most organizations are treating AI like a search engine, asking vague questions and getting generic answers that break in production. This video reveals the five critical components that transform useless AI into intelligent infrastructure automation. You'll learn to build capabilities discovery using Vector databases for semantic search across Kubernetes resources, capture organizational patterns from tribal knowledge and documentation, create enforceable policies that guide AI toward compliance, implement proper context management to avoid the bloated mess most systems become, and design intelligent workflows that guide users to the right solutions instead of relying on guesswork. Watch as we demonstrate the complete transformation from a generic AI response to a fully functional PostgreSQL deployment that follows organizational patterns, enforces compliance policies, and deploys correctly the first time. ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Sponsor: Tuple 🔗 https://tuple.app/DOT 👉 Promo code: DOT2025 ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ #AIInfrastructure #InternalDeveloperPlatform #KubernetesAI Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join ▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/internal-developer-platforms/why-your-infrastructure-ai-sucks-and-how-to-fix-it 🔗 DevOps AI Toolkit: https://github.com/vfarcic/dot-ai 🎬 Stop Blaming AI: Vector DBs + RAG = Game Changer: https://youtu.be/zqpJr1qZhTg 🎬 Why Kubernetes Discovery Sucks for AI (And How Vector DBs Fix It): https://youtu.be/MSNstHj4rmk ▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below). 
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/ ▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox ▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 AI for Infrastructure Challenges 01:42 Tuple (sponsor) 03:16 Why Your AI Agent Is Useless 09:52 Kubernetes API Discovery That Actually Works 13:41 Organizational Knowledge AI Can Actually Use 17:49 Stop Breaking Production With AI 22:17 The Context Window Disaster Nobody Talks About 25:16 Smart Conversations That Get Results 29:34 Your Complete AI-Powered IDP Blueprint