Ever wonder why Kubernetes has a command called `get all` that doesn't actually retrieve all your resources? Try it yourself and you will find that it conveniently forgets about Ingresses, PersistentVolumeClaims, and potentially many other resource types. Worse yet, even when you manually list everything in a namespace, if you ever manage to do that, you're still left staring at a pile of objects with no idea how they fit together. There's no built-in way to say, "Hey, these five resources form a complete system, a logical group," or to check whether that system is healthy. Turns out this is a real problem when navigating clusters and trying to understand what's actually running. So I created a custom resource definition that wraps related resources into logical groups with status, context, and relationships. In this video I will walk you through the problem, explore how Kubernetes ownership and owner references work, and demonstrate a better approach using CRDs.
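For context, when I say `kubectl get all` doesn't get all, there is a well-known workaround: ask the API server which resource types exist, then query each one. It's slow and verbose, which is part of the problem. A sketch (the namespace name is illustrative):

```shell
# `kubectl get all` only covers a hard-coded subset of core resource types.
# To really list everything in a namespace, enumerate every listable,
# namespaced API resource and query each one individually.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace a-team
```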
Here's a fundamental question that trips up many of us working with Kubernetes: what exactly is an application? Or a database, or any logical system running in your cluster? Is it just a collection of resources? If so, which ones belong together? And how do you know? Let's start by looking at what most people try first. Look at it. This does not show us an app at all. Looking at this output, are we seeing two apps? Three? There's Qdrant, which might or might not be related to the MCP service. There's silly-demo with its deployment and database, or maybe silly-demo is the database. It's impossible to tell from this output alone. What we are actually seeing here are individual resources that form one or more logical groups. It could be a single app, a database, an app with its database, or multiple independent systems. The `kubectl get all` command gives us no way, no way, to distinguish between those possibilities. And here's another problem: these aren't even all the resources in that namespace. The `kubectl get all` command is misleading, not to say ridiculous. It doesn't retrieve all resources, just some of Kubernetes' core resources. Not even all core resources. Kubernetes cannot retrieve all resources in a single command. It cannot do that. Yet we need to know which ones are actually involved. So, are those all the resources? I don't know. I just made a wild guess that there might be Ingresses and PersistentVolumeClaims. There could be, I don't know, more resource types that I'm not thinking of. So here are the fundamental questions. How do resources relate to each other? Which resources form a logical group? What's the status of a logical group? And what's the purpose of a group of resources? So let's take a closer look at one of those resources to see if we can find any clues. And here's what that looks like. With the Deployment, we might know it creates a ReplicaSet, which then creates pods. We know this from experience, but we cannot always rely on knowing what each resource does and which resources it creates and owns.
How about, for example, a CNPG Cluster from CloudNativePG? I know it exists in this namespace because I deployed it, but someone else looking at this cluster wouldn't know it's there. Even I might forget about it in a few weeks. So here's what we see. And then the question is: what is this Cluster resource, really? Is it just this one object related to PostgreSQL, or is there more? How can we know?
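One way to at least identify an unfamiliar resource type is to ask the cluster itself. The kind and API group below are the real ones from CloudNativePG:

```shell
# List the custom resource definitions installed in the cluster
kubectl get crds

# Ask the API server to describe an unfamiliar type and its fields
kubectl explain cluster --api-version postgresql.cnpg.io/v1
kubectl explain cluster.spec --api-version postgresql.cnpg.io/v1
```

That tells you what the type is, but still nothing about what the operator created on its behalf.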
I don't know. It might not be a resource you worked with before, and even if you have, chances are you don't know which resources it created, if any. Now, I happen to know it created a pod, among other things. So let's take a look at that. Now here's where things get interesting. The key is in the `ownerReferences` array. This tells us that some other resource owns this pod; specifically, a controller related to that resource created it. In this case, the owner is the CNPG Cluster resource named silly-demo-db. Now, let me explain how this works, because owner references are fundamental to understanding Kubernetes resource relationships.
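Here's roughly what such an entry looks like in the pod's metadata. The structure and field names are the standard Kubernetes ones; the resource names and UID are illustrative:

```yaml
# Excerpt from a pod created by the CNPG operator (names and uid illustrative)
metadata:
  name: silly-demo-db-1
  ownerReferences:
  - apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    name: silly-demo-db
    uid: 2a7c9f3e-1111-2222-3333-444455556666   # uid of the owning object
    controller: true            # this owner manages the resource
    blockOwnerDeletion: true    # owner cannot be deleted while this child exists
```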
So, here it goes. When you create a resource in Kubernetes, you are often not just creating that one object. Many resources are actually managed by controllers that watch for their corresponding custom resources and then create and manage other resources in response. The CNPG Cluster is a perfect example. When you create a CNPG Cluster object, the CNPG operator sees it and starts creating pods, services, secrets, and other resources needed to run PostgreSQL. The controller establishes ownership by adding an `ownerReferences` entry to each resource it creates. This creates a parent-child relationship: the CNPG Cluster is the parent, and the pod is the child. Now, there are a few important fields in the owner reference. The `controller` field set to true means this owner is responsible for managing the resource. The `blockOwnerDeletion` field set to true means you cannot delete the owner while this child resource still exists. This prevents you from accidentally destroying a CNPG Cluster while pods are still running. More importantly, owner references enable garbage collection. When you delete the CNPG Cluster resource, Kubernetes automatically deletes all the resources it owns. You don't have to manually clean up each pod, secret, service, or other resource. The ownership chain takes care of it all. This, in turn, creates hierarchies of resources. A Deployment owns ReplicaSets. Those ReplicaSets own pods. A CNPG Cluster owns pods, services, secrets, and potentially other resources. It's resources all the way down. And here's the problem, though. We discovered the owner by looking at the child resource, the pod. We can see ownership bottom-up, from child to parent. But we cannot see it top-down, from parent to children.
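The only way to simulate a top-down view yourself is to scan candidate children and filter on `ownerReferences` pointing back at the parent, and that only works if you already guessed which kinds to scan. A sketch with jq (the cluster name is assumed from this demo):

```shell
# Find pods whose ownerReferences point back at the CNPG Cluster "silly-demo-db".
# You would have to repeat this for every kind the operator might have created.
kubectl get pods --output json \
  | jq '.items[]
        | select(.metadata.ownerReferences[]?
                 | .kind == "Cluster" and .name == "silly-demo-db")
        | .metadata.name'
```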
When we looked at the CNPG Cluster resource earlier, there was no field listing all the resources it created. We have to know what to look for, then search for it, then check whether it has an owner reference pointing back. Luckily for us, there's kubectl tree, an amazing little tool that relies on owner references. It can show us the hierarchy of owned resources, but it helps only if we know what we're looking for and what constitutes a logical group, which we often don't. Now, let's try it with the CNPG Cluster we just discovered. Look at that. kubectl tree shows us everything the CNPG Cluster owns through the `ownerReferences` chain.
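For reference, kubectl tree installs as a krew plugin and takes the kind and name of the root resource. The resource names below are from this demo:

```shell
# Install the plugin (one time, requires krew)
kubectl krew install tree

# Show everything owned, directly or transitively, by the CNPG Cluster
kubectl tree cluster silly-demo-db
```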
There's the pod, multiple services with their EndpointSlices, secrets for certificates and authentication, RBAC resources, even a PodDisruptionBudget. And now we can see the complete hierarchy starting from this one Cluster resource. Now, that's great. That's absolutely great for this specific example, since the CNPG Cluster is the top-level resource. But what if that Cluster is part of an application? What if there's more to the system, as there often is? kubectl tree shows us the ownership hierarchy, but only and exclusively starting from the resource we specify. Now, let's try it with the Deployment. There we go. That still doesn't show us that there's a Service, an Ingress, a PersistentVolumeClaim, and probably a few other resources as well. The Deployment doesn't own those resources, so kubectl tree doesn't show them.
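The closest built-in workaround is querying by well-known labels, assuming the author applied them consistently across every resource, which is rare. The label value here is a guess for this demo:

```shell
# Only works if every resource carries the recommended labels
kubectl get deployments,services,ingresses,persistentvolumeclaims,clusters \
  --selector app.kubernetes.io/part-of=silly-demo
```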
So, let me summarize the problem we're facing. First, there is no easy way to answer questions like: what's this app? What does it consist of? We could consult Git if everything is organized well, but, let's be honest, that's often a mess. All I want is to look at my cluster and figure out what is what and how resources relate to each other. Second, we don't have a decent way to find out the status of a logical group. I cannot ask: what is the status of the silly-demo app? I would first need to figure out what it consists of, then check the Deployment, the Ingress, the Service, the CNPG Cluster, and so on and so forth. As we already established, we cannot do that easily, and that's messed up. Third, by looking at individual resources, it's hard to figure out what it's all about. What is the context? Why does this thing exist? What did the author want to accomplish? Now, to be fair, we can address some of those issues if the author added labels and annotations, but that often doesn't happen. And when it does, people tend to use different labels for conceptually the same stuff. All in all, unless I'm the author of this mess and I have a very, very good memory, I might easily get lost. The same can be said for AI. Both humans and AI need to know what the solution consists of, what the relations between resources are, what the context is, and what the intent was. Now, I ran into this exact problem when working on a project. I needed a way to define logical groups of resources with their relationships, context, and status. So I built a simple, one could even say silly, solution that helps with all of this. So let me show you what I mean.
Look at that. This time kubectl tree returned everything related to the app. There's a Cluster with a bunch of resources related to the PostgreSQL database. There's a Deployment with the app itself. There's an Ingress for public access, a PersistentVolumeClaim for storage, and a Service. Most of those own other resources, which you can see in the tree. The important part is that I can finally, finally, see all the resources related to a specific solution in one place. And here's what that Solution resource looks like. It is a custom resource called Solution that contains all the information we need. There's context and intent, which give us insight into what the author wanted to accomplish. There's the list of top-level resources the solution manages. Those might spawn other resources, and, as we already saw with kubectl tree, once we know what the top resources are, we can easily explore the full hierarchy. Finally, we can see the status of the complete solution. Now, this isn't some grand multi-generational project. It's a simple solution to a problem I had, and you might have as well. Install the controller, create a Solution custom resource, and you're done.
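Here's a sketch of what such a Solution resource could look like, based on the fields described above. The exact schema lives in the project repo, so treat the API group, version, and field names as assumptions:

```yaml
apiVersion: dot-ai.devopstoolkit.live/v1alpha1   # group/version assumed
kind: Solution
metadata:
  name: silly-demo
spec:
  context: Public-facing demo app backed by a PostgreSQL database
  intent: Serve the silly-demo API with persistent storage and public access
  resources:                       # top-level resources; each may own others
  - apiVersion: apps/v1
    kind: Deployment
    name: silly-demo
  - apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    name: silly-demo-db
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: silly-demo
```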
Check out the controller, use it, fork it, star it, open issues, request new features. I would love to hear if this helps solve your problems, too. Thank you for watching. See you in the next one. Cheers.
Ever wondered why `kubectl get all` doesn't actually get all your resources? It conveniently ignores Ingresses, PersistentVolumeClaims, and many other resource types. Even worse, when you do list everything in a namespace, you're left staring at a pile of disconnected objects with no way to understand how they relate to each other or form complete systems. This video dives into this fundamental Kubernetes problem and explores how ownerReferences and resource hierarchies actually work under the hood. To solve this challenge, I built a custom Solution CRD that wraps related resources into logical groups with clear context, intent, and aggregated status. Instead of manually piecing together which Deployments, Services, Ingresses, and databases belong to the same application, you can now define and query complete solutions as first-class citizens in your cluster. I'll walk you through the problem, demonstrate tools like kubectl-tree for exploring ownership hierarchies, and show you how this simple CRD approach finally answers questions like "What is this app?" and "Is my entire system healthy?" Check out the project at github.com/vfarcic/dot-ai-controller if you want to try it yourself. #Kubernetes #CRD #DevOps Consider joining the channel: https://www.youtube.com/c/devopstoolkit/join ▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬ ➡ Transcript and commands: https://devopstoolkit.live/kubernetes/stop-trusting-kubectl-get-all-heres-what-it-hides-from-you 🔗 DevOps AI Toolkit Controller: https://github.com/vfarcic/dot-ai-controller ▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬ If you are interested in sponsoring this channel, please visit https://devopstoolkit.live/sponsor for more information. Alternatively, feel free to contact me over Twitter or LinkedIn (see below). 
▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬ ➡ BlueSky: https://vfarcic.bsky.social ➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/ ▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬ 🎤 Podcast: https://www.devopsparadox.com/ 💬 Live streams: https://www.youtube.com/c/DevOpsParadox ▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬ 00:00 Kubernetes Resource Relations 01:13 What Is This Thing in Kubernetes? 04:50 Kubernetes ownerReferences and Garbage Collection 08:30 Solving Resource Grouping with CRDs