Hey everyone, welcome back. Michael
Forester here for KodeKloud. Today we
are diving into Kubernetes 1.34, the
latest release that actually dropped a
couple of months ago in August. And
trust me, it brings some really neat
stuff. Think of it like a fresh wind in the sails of your cluster, which will become clear in a second. So, we're
going to talk about the problems this
version solves, what's new, and how you
can start taking advantage of it, all in
simple, beginner friendly terms. So,
let's set sail.
So, quick release overview. Kubernetes 1.34 brings 58 enhancements in total: 23 features that have graduated to stable and are production ready, 22 that have entered beta, and 13 that are alpha, which is still experimental. The theme of this release is Of Wind & Will. It's a tribute to how the Kubernetes project keeps going even when conditions aren't perfect. The message: it's not always about the perfect wind. It's about the will of the community and the contributors who keep the ship moving forward. So, in this
release, we've got features you can
trust now that are stable, some you can
start testing that are in beta, and
others that you're going to want to
watch for the future that are in alpha.
So, here's some key features. We're
going to go take a look at the
documentation and break down some of the
standout changes.
So, if you look, Kubernetes 1.34 Of Wind & Will is right here. And here are the release notes. Notice this awesome little logo here, which by the way has bears on it. Not sure why. Number 34 right here. And it says, oh wow, Of Wind & Will, with little paws at the top. So anyway, cool logo. Very
different from Octarine in the last
release.
Notice here the message around Of Wind & Will. Okay, so let's talk about the first stable feature, the one that's production ready: dynamic resource allocation. So DRA is very different from dynamic pod resizing, which is something that we talked about in the Octarine release, in 1.33. Notice this is part of 1.34. This is stable and it is enabled by default.
So a little thing to know about DRA is that it basically allows you to request and share resources, especially multi-device, shareable resources attached to nodes: things like GPUs, hardware accelerators, that kind of thing. Modern clusters are often running workloads that rely on this specialized hardware. And so for things like GPUs, FPGAs (field-programmable gate arrays, used for all kinds of custom hardware acceleration), or even high-performance NICs, it's now possible to share them and dedicate them to pods. Before, Kubernetes handled them in a very basic way: you could only ask for a GPU and that was it. There was no way to really specify what type of GPU you needed. But DRA really changes that, and it's now stable, production ready. Originally you'd have limited GPU awareness: you would launch a container, say this one's a trainer for the latest ML model, and in its resources you would basically just ask for a GPU. That's it, a little tag, and it would get you a system that had a GPU.
With dynamic resource allocation, this shifts: you can define a device class, name it, and attach it to a driver that is specific to dynamic resource allocation. And then, much like with a persistent volume, you create a resource claim template that defines the parameters, like what slices of a device you want and what those slices look like. And this, by the way, is another resource claim template. So this one is just asking for a slice of a multi-instance GPU, and here they're asking for a complete GPU.
And so then, let's say you've got a heavy ML job and you have a container that is going to be training that latest ML model. You can reference a GPU claim, and it will actually go against a resource claim template. Notice up here we had the full A100 GPU, and we are matching that here. So this enables precise hardware requests, where we could claim, for example, a full GPU, or we could claim a 20 GB slice of a GPU. And that's what dynamic resource allocation enables. This is a stable feature, basically new in 1.34.
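As a rough sketch of the pattern just described, the claim template and the pod referencing it might look something like this. The driver name gpu.example.com, the object names, and the image are all made up for illustration, and the field layout follows my reading of the resource.k8s.io/v1 API, so check your driver's docs for the real thing:

```yaml
# Hypothetical DRA driver "gpu.example.com"; all names are illustrative.
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: gpu.example.com   # the device class the driver registers
---
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu    # claim is created from the template above
  containers:
  - name: trainer
    image: registry.example.com/ml-trainer:latest  # placeholder image
    resources:
      claims:
      - name: gpu                            # container consumes the pod's claim
```

The key idea is the indirection: the pod asks for a named claim, the claim template says what kind of device satisfies it, and the driver does the matching.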
And this is exactly what we need to get finer-grained shareability, but also control, over these FPGAs, GPUs, and dedicated NICs. Okay, let's talk about the second of the items released in 1.34: pod-
level resource request and limits. Now,
those of you who have been setting
resource requests and limits, which are
typically recommended defaults for any
Kubernetes production release,
a lot of this has been done at a container level. Before, instead of being able to set them for the entire pod, you had to set them for each container, which has obviously caused some overestimation and some inflation. So this is a more intuitive and straightforward way to manage resources for multi-container pods in
particular. So the problem is that every Kubernetes user with a multi-container pod has faced this: you've got three containers and you want to give the pod a total of four CPU cores and 8 GB of RAM, but you've got to do some janky math to get that to work. So if we look
at this in terms of
our YAML files, notice here that what we would have done before is set our resources up at the container level, and we would have done it for every single container. So notice that both of these requests are set at the container level, if you will. Everything's set up at that level.
With this change, what will happen is
that we can actually just set it at the
pod level. So, as long as they stay
within these constraints,
everything will be good. And notice that up here, in our previous one, we had half a CPU, half a CPU, a fraction of a CPU, and one full CPU. So, altogether, it comes to around 2.2 CPUs. Notice here, what's happening is that they're requesting basically two CPUs, and then we're giving them a max limit of three. Same with memory, same math.
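A sketch of that pod-level form could look like this. The numbers mirror the two-CPU request and three-CPU limit just mentioned; the memory values, container names, and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-app
spec:
  # Pod-level budget shared by all containers
  # (pod-level resources, beta and on by default in 1.34)
  resources:
    requests:
      cpu: "2"
      memory: 4Gi
    limits:
      cpu: "3"
      memory: 8Gi
  containers:
  - name: app        # no per-container resources needed,
    image: nginx     # as long as the pod total fits
  - name: sidecar
    image: busybox
```

The scheduler can then treat the pod as a single unit with one resource budget instead of summing per-container figures.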
So instead of now having to figure out
what the resources will be inside of
each container, inside of a set, you can
just set it for the entire pod. It's
going to make this whole thing a lot
more simple. And then the pod can just act as one unit, which is what it was intended to do to begin with, because it is the basic unit of deployment. That is pod-level resource management, a beta feature enabled by default in 1.34. All right, let's talk next about ordered namespace deletion. So, this is the third of the
features inside 1.34. This is stable
production ready. And so, here's a
security problem that might have been
lurking in your clusters. When you
delete a namespace, the resources inside
get deleted in a semi-random order. So
what this means is that your pods might
stick around for a few seconds after
their network policies are already gone.
During those few seconds, your pods are
running without the network security
policies that were protecting them. In
fact, this was serious enough that it
got assigned its own CVE number: CVE-2024-7598. So if an attacker had access to your cluster, as you can see it listed right here, they could potentially exploit this brief window where pods were running without their security policies. So Kubernetes
basically fixes this with ordered
namespace deletion. It's now graduated
to stable. So here's the CVE number that
we saw before. But basically when you
delete a namespace now Kubernetes
follows a smart security focused order.
First, it's going to remove the pods, so they stop running immediately. Second, it's going to remove the services, network policies, and other resources that surround the pods. And then finally, it's going to clean up the namespace itself. Now, this ensures that
there's never a moment where pods are running without their security policies, that storage isn't released while pods are still trying to use it, and that everything shuts down in a
logical predictable sequence. In short,
ordered namespace deletion makes your
cluster more secure by default, fixes a
known security vulnerability, and
ensures resources are cleaned up in the
right order. It's one of those
behind-the-scenes improvements that make
Kubernetes more reliable without you
having to do anything. So that's number
three in our list.
All right, we are now talking about
number four in our list, which is
basically KYAML, a dialect of YAML for Kubernetes. So, KYAML aims to be a safer, less ambiguous YAML subset, and it was designed for Kubernetes because, let's face it, every Kubernetes user has faced this YAML nightmare. YAML is super powerful, very readable. It's also fragile, and a tiny indentation mistake can completely change your configuration.
Let's take a look at a pod manifest. So,
for example,
here we've got a manifest. And what's
interesting enough is that there's
actually a mistake in here. If you look
at the spaces, there's just only one
space between name and there should be
two. Down here, you can see that these
are properly indented. So, if we look at
this one, it's missing something. And if
we look at these, well, these are
properly set up. Everyone's run into this, right? The containers field isn't properly indented under spec. So, Kubernetes doesn't even recognize it as part of the pod spec, and your deployment fails with a vague invalid configuration error. Right? So, oddly
enough, Kubernetes 1.34 is introducing into alpha this new experimental feature, a structured version of YAML. Here's the same manifest in KYAML form. Notice that now there are no indentation errors, no hidden traps: you use braces and brackets for structure instead of spaces, and it quotes all strings. So things like yes, no, or 11 a.m. won't suddenly turn into booleans or numbers. Right? Now, I do want to mention that this is alpha status. Notice how the values are quoted and highlighted, right? The yeses and nos. Notice the whole bracket set here, along with commas for separation.
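To give a feel for it, a simple pod in KYAML form might look roughly like this. This is a sketch of the alpha format as described (flow-style braces and brackets, trailing commas, quoted string values); the exact rendering kubectl produces may differ:

```yaml
# KYAML sketch: structure via braces/brackets, all strings quoted.
# Since every KYAML file is valid YAML, this parses with any YAML parser.
{
  apiVersion: "v1",
  kind: "Pod",
  metadata: { name: "nginx" },
  spec: {
    containers: [{
      name: "web",
      image: "nginx",
      ports: [{ containerPort: 80 }],
    }],
  },
}
```

Note there's no indentation to get wrong here: moving a line left or right doesn't change what it belongs to, because the braces decide that.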
But this is alpha status, not enabled by default. So you've got to set it to true (the KUBECTL_KYAML environment variable) in order for it to function. It's not recommended for production, so do it on your test clusters. And just know that all KYAML files are valid YAML, right? So if you enable it and then do a get pods, you can set kyaml as the output format so you can see it running, right?
In short, KYAML is an interesting experiment to see if they could make Kubernetes manifests cleaner and safer in
the future, but it's too early yet for
production use. So, watch it to mature
in upcoming releases. Okay, on to the
next one.
Okay. And last but not least on our list is kuberc. Basically, it's a configuration file that allows you to define preferences for kubectl (or kube-cuddle, as some say). You can set aliases and default options, so it basically gives you a preferences file. Notice that it's in your home directory under the .kube directory and it's called kuberc, but it follows a standard definition format, as if you were defining an object in Kubernetes. Let's go take a look at this, right? Because 1.34 graduates the kuberc feature to beta. So it's like your personal configuration file for kubectl that's enabled by default, and you can change the path if you want to. You just have to set the KUBERC environment variable, or you can use a --kuberc flag if you want to specify it. Let's go take a look here though. So it is going to follow the standard object definition format of Kubernetes.
And you can do things like add defaults: if you want, you can change the delete command so it's always interactive, or set server-side apply to true. Just know that these are actually official project recommendations. You can also do things like set aliases. So you could do kubectl gns, and what this will do is, if you run this command, it'll actually run all of this based on the defaults that you've set here.
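A kuberc along those lines might look like this. It's a sketch against the beta kubectl.config.k8s.io/v1beta1 schema as I understand it; the gns alias and the specific flag values are just examples, not taken from the video:

```yaml
apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
defaults:
- command: delete
  options:
  - name: interactive      # make "kubectl delete" prompt by default
    default: "true"
- command: apply
  options:
  - name: server-side      # default "kubectl apply" to server-side apply
    default: "true"
aliases:
- name: gns                # "kubectl gns" -> "kubectl get namespaces -o wide"
  command: get
  appendArgs:
  - namespaces
  options:
  - name: output
    default: wide
```

Saved as ~/.kube/kuberc, this file is picked up automatically once the feature is on; flags you pass explicitly on the command line still win over these defaults.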
So this basically gives you a preference file so that you can create anything that you need to customize your kubectl interaction. In short, kuberc basically lets you make kubectl truly yours. You get to set defaults, add shortcuts, and streamline your CLI workflows. It is beta and enabled by default in 1.34, so it is actually ready for you to start using today. So, I was
going to take you over to the KodeKloud
playground so that you could actually
see it in action before we close out
with other updates that may be worth
talking about.
Okay, so here we are at the public
playgrounds. Link is below, right? And
what we're going to do is we're actually
going to start a cluster of 1.34. And
this is free for anybody to start,
anybody to touch, anybody to test. So
you don't actually have to have a
membership in order to start the public
playground. So the idea is we're going
to start a 1.34 cluster and just let you
see what this environment looks like.
Here we are in our cluster, and I think what we want to do is grab our file. So we've got a full definition file, right? This whole thing contains our preferences. This is going to be our kuberc, kube plus rc, for run control, an old Unix term. Okay. So, first let's just make sure that everything is working and we're solid. So, it is 1.34. We can see it here. We want to check the .kube directory, because we might already be in the proper path, which it looks like we are. So, we're going to go into our .kube directory and we're just going to vim the kuberc file.
We're going to do a quick insert. I believe everything is properly formatted, but it is YAML, and we were just talking about that. And so if everything's properly formatted, what will happen, I'm going to clear this out, is that if I run my namespaces alias, it should list all of the namespaces in wide format by default. Now, why will it do that? If we look at the kuberc, you'll notice that I have wide format already set here. So this is what's going to happen by default when I run my alias for gns or gmp.
Now, just to show the difference, if I do a plain kubectl get pods with a dash A, by default it's not going to show as much information. But if I do my gmp command, which basically adds the -o wide, right? So now by default, it's just going to show me wide output every single time that I ask, so I can get, for example, the pod IPs, whatever I need to see. And so you can set all of your aliases here, including things like setting namespaces, getting contexts, switching namespaces, all of that, right here inside your kuberc file. Now, again, this is in our public playground that anybody can run in. I just wanted to show you that before we closed up.
So, hope you enjoyed seeing all five
features and let's wrap it up. Let's
talk about some major upgrades and how
to close it out. So, there you have it.
five key features that are worth
checking out in the next release. There
are a few other major updates that are
just worth mentioning. While we focused on the headline features, Kubernetes 1.34 includes several other important improvements. Things such as production-ready observability: both API server tracing and kubelet tracing have now matured to stable in 1.34. So you can now use OpenTelemetry to trace pod lifecycles end to end across all of your Kubernetes components. This is enabled by default in 1.34. It's
production ready. It's perfect for
debugging performance issues and
understanding exactly where delays are
happening in your cluster. And there are
about 20 more stable graduated features.
I'm just going to rattle them off. One,
Linux swap support. So stable support
for swap memory on nodes. Two, job pod
replacement policy. So better control
over when replacement pods are created.
Three, sleep actions for your life cycle
hooks. So you can now pause containers
during startup or shutdown. Four,
structured authentication. So you've got
better authentication management for the
API server. Five, streaming list
response. So you've got more efficient
handling of large resource lists and
many more improvements to scheduling,
storage, networking, and security. You
can check out the full list in the
release notes below. And that's it for
1.34. Oh wow. So now it's time for you
to get hands-on with Kubernetes 1.34 and
experience these features for yourself.
And the best part, you don't even need
to set up a cluster. As you saw in the
demo that we just did, we've got the
latest Kubernetes 1.34 environment that
is ready for you on KodeKloud's free
Kubernetes playground. It's a real
cluster where you can experiment, break
things safely, and basically see how features like DRA, pod-level resources, and kubectl preferences actually work. You've actually already seen it.
So, I want you to visit kodekloud.com/public-playgrounds.
Just head on over to our Kubernetes
labs, spin up your environment, and just
start exploring the new release right
away. No email asks, no installs, no setup, just pure click-and-go hands-on learning. So, go try it out and see what Of Wind & Will 1.34 really feels like in action. If you enjoyed this breakdown, make sure to hit like and
subscribe for more DevOps updates and
drop a comment telling me the feature
that you're most excited about. Until
next time, may the winds be favorable
and your clusters stay steady. See you
next time.
🔥 Practice Kubernetes 1.34 now: https://kode.wiki/48XnePK

Explore the major changes in Kubernetes 1.34 "Of Wind & Will"! In this video, we go beyond the release notes to inspect the architecture behind the new production-grade distributed tracing and what the new alpha feature, KYAML, means for your YAML workflows. Whether you are preparing for CKA/CKS or managing production clusters, this 18-minute breakdown covers the critical need-to-know technical details of the "Of Wind & Will" release.

🚀 Explore Our Top Courses & Special Offers: https://kode.wiki/3CzuOnc

⬇️ Key Topics Covered:
- Dynamic Resource Allocation (DRA) GA
- Production-Grade Distributed Tracing
- KYAML: The future of Kubernetes Manifests?
- In-Place Pod Resizing updates

⏰ Timestamps:
00:00 - Introduction to K8s 1.34
01:42 - Dynamic Resource Allocation (DRA)
04:56 - Pod-level resource requests and limits
07:18 - Ordered Namespace Deletion
09:14 - KYAML
11:33 - Kubectl user preference (kuberc)
16:06 - Future Updates & Conclusion

#Kubernetes #K8s #DevOps #CloudNative #KodeKloud #Kubernetes134