I run a five node Kubernetes cluster in
my basement. This line on your resume
will make you stand out from 99% of
DevOps job seekers. And in this video,
I'm going to show you how to build a
Kubernetes home lab from scratch. The
first thing we're going to talk about is
hardware. I don't recommend using
virtual machines. I recommend using bare
metal devices. And the reason is that
virtualization adds an extra layer of
abstraction to your learning that you
might not care about if your goal is to
learn Kubernetes, because Kubernetes is
the tool to focus on if you want to land
a DevOps job. So I recommend getting as
close to the metal as possible, and you
do that by running
bare metal devices. Also, there is
something about actually seeing the
device, like the one I have behind me here,
and running an application while
knowing that it's actually running on
that device rather than on an
abstraction layered on top of it. It's
something you have to experience for
yourself, but there's something really
satisfying about that. Now, you don't
need a huge rack. You don't need fancy
RGB lighting. You don't need these huge
setups that you see in other people's
YouTube videos. I am 5 years into my
career now, and this little rack is the
first rack that I've bought. So far,
I've only been using old refurbished
hardware. So, in this video, we're going
to be using two Raspberry Pis, but you
don't need to use Raspberry Pis. I'm
just doing that because I have them
lying around and I want to start using
them. But up until this point, I have
used old laptops or old refurbished
enterprise hardware for my home labbing.
You can quite literally get started
with an old laptop you have lying
around. So, don't take the hardware
requirements as something you have to
have. There's a recent trend happening
with mini home lab racks and that really
spoke to me because I live in a small
apartment and I'm also very sensitive to
noise. So I can't really have a huge
rack with enterprise-grade rack-mounted
servers in it, because they are noisy
and they take up a lot of space. So I
really like this idea of doing a mini
rack. I bought some hardware, and I
already had the Pis lying around, so it
was a good match, and that's why I went
for it. In this video, I'm going to be
using two Raspberry Pi 5s, one with 8 GB
and one with 4 GB of RAM, and I mounted
them in a GeeekPi 10-inch rack with eight
units. I also got the rack mount for the
Raspberry Pi 5, which holds two
Raspberry Pis. And I really like this
mount because it exposes two HDMI ports
that I can easily access from the
outside if I need to. And it just looks
really clean with the LED lights and
such. You can get the full parts list
and hardware specification by
downloading the full written home lab
guide from the description below. So,
now that we have covered hardware, let's
dig in. I'm using two SD cards: the
rack is holding two Pis, and each of
them has a 64 GB SD
card, and I just used the Raspberry Pi
Imager to flash those. I used
Raspberry Pi OS, the one that you
get with the graphical environment. And
in terms of settings, all I've done is
set a password and give them a public
SSH key. I haven't changed any other
settings. When you use the Imager, you
get some settings that you can apply
right away to the Pi. All I did
was set my user password and a public
SSH key. Nothing else. So these devices
are now on my network with the following
IP addresses: 192.168.10.39 and
192.168.10.60. And I
gave them an alias in my /etc/hosts
file. So what I can do now is ssh
mischa@pi1.
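Concretely, the aliases are just two lines in /etc/hosts on the workstation (the IPs here are the ones from my network; yours will differ):

```shell
# /etc/hosts on the workstation
192.168.10.39  pi1
192.168.10.60  pi2
```

With those in place, `ssh mischa@pi1` resolves without any DNS setup.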
Then this is going to prompt me for my
SSH key, and now I am SSH'd into it.
free -h shows that my pi1 has 7.9 GB of
RAM. And if I split this, let's do a
horizontal split: ssh mischa@pi2.
Yes.
And
now I'm SSH'd into the second Pi, and this
one has 4 GB of RAM, as we see. So when I
was preparing for this video, I was
actually mapping out all of the steps
and thinking of doing a pretty polished
video. But actually, I think there's
more value if you see my thinking
process as I'm going along with this
video, as I'm actually building it. And
I figured I I'm just going to do this
off the cuff. I'm going to make a bunch
of mistakes along the way. I going to
need to do to do some debugging. I think
that's actually much more valuable
rather than just giving you a list of
steps. Now, I have done this thing many
times before. I know what to do, but I
am still going to show you the process.
It has been a while since I've set up a
K3s cluster, so I figured I would just
approach it that way. So, we're going to
be setting up a Kubernetes cluster and
we're going to be using K3s for that.
Now, there are a couple of reasons why I
always recommend K3s to my students, at
least in the beginning. The first reason
is: don't start with kubeadm, because
it's too intimidating if you're a
beginner and it's just going to lead to
frustration, and you're not going to get
that "oh, I'm learning something"
feeling of early mastery. You're
not going to get that, and it's
really important to get some positive
reinforcement early in your learning
process. So don't start there. You can
move to that when you are a bit more
advanced. And the thing that I am
actually using in my home lab is Talos.
But again, I don't recommend going into
Talos Linux as a beginner because it's
much more valuable to actually have
access to your Linux OS. So, Talos
actually takes over your entire disk and
it only exposes an API. You can't even
SSH into Talos, which is why it's great:
it's locked down by design. But if
you use K3s, you still keep your Linux
operating system. I'm running Raspberry Pi OS
on these Pis now, and after my
Kubernetes cluster is set up, I can
still tinker with Linux. There are still
things I can do on the Pis besides
running a Kubernetes cluster. So that's
the reason why we choose K3s. Finally,
K3s is great because it's easy to set
up. It works out of the box and runs
as a single binary, which is
amazing. Now we can get started. I have
both my Pis set up, and all I did was
flash the image, and that's it.
Now I'm SSH'd into them. But the first
thing that we need to do is to set the
hostname, because as you see here, both
of my Pis have the same hostname, and
that is always a bad idea. So this is my
Pi 2; I'm going to change that
with sudo
hostnamectl
set-hostname pi2. If I now run hostname,
it says pi2. If I run hostname over here,
you see it still says raspberrypi.
So: sudo hostnamectl set-hostname pi1.
So now we have set our hostnames.
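Spelled out, the hostname change is one hostnamectl call per board:

```shell
# On the first Pi (use pi2 on the second board)
sudo hostnamectl set-hostname pi1
hostname   # verify the new name is in effect
```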
That's always a good practice. And
you see that my prompt still
says raspberrypi, but if I exit
this session and then SSH back into it,
we see that the hostname has now
changed in the prompt. So it was just a
stale prompt. So I've changed my
hostname, and this is all working well.
Now the next step is to just
search for K3s and open up
the K3s documentation.
And literally, this is how easy it is to
install K3s. We're just going to curl
this installation script,
and that is going to download the
binaries and set everything up for us.
And we errored
out. But it is already showing us why it
errored out. Here is the info line:
it says you may need to add cgroup_memory=1
and cgroup_enable=memory if you are
using a Raspberry Pi. So that's very
convenient. So let's check out what this
/boot/cmdline.txt is all about.
"Do not edit this file." Okay, so
apparently on the Pi it has moved to
/boot/firmware/cmdline.txt. So
let's open that and see what it's like.
Oh, and right away I see that it's
opened as read-only. So it would not
make sense to make any changes here,
because I will need to edit this
as root. So I'm just going to prepend sudo,
like that; that's going to rerun the
same command. And it wants me to add
cgroup_memory=1 and cgroup_enable=memory
to it. So let's just do that:
cgroup_memory=1 and cgroup_enable=memory,
like that.
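As an alternative to editing by hand, the flags can be appended with a one-liner; the important thing is that the file stays a single line (the path is /boot/firmware/cmdline.txt on current Raspberry Pi OS, /boot/cmdline.txt on older releases):

```shell
# Append the cgroup flags to the end of the one-line kernel command line,
# then reboot so they take effect
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
sudo reboot
```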
Let's also do the same on the other Pi.
And actually, let's just reboot this one.
But before I do, I still see this
"unable to resolve host pi1" warning, and I
think that might be,
yeah, this is probably the reason: in /etc/hosts,
this is still set to raspberrypi. So
let's set this to pi1. Like that.
Yeah. And now I will
reboot this one. And you should see it
in the back. Yeah, now the light has
gone off, so that's actually pretty cool
to see. So this one is now shut down. We
can do the same on the other one. So we're
going to open the cmdline.txt file, and
here I'm also going to add cgroup_memory=1
and cgroup_enable=memory, like this.
And I'm also going to
edit the /etc/hosts file
and change the hostname entry to
pi2,
like that. And let's reboot that as
well. So if I now go back to pi1 and run
hostname, the hostname is set up properly.
And now we can try running the script
again.
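For reference, the install on the server node is just the official one-liner, now that the cgroup fix is in place:

```shell
# On pi1: install K3s in server (control plane) mode via the official installer
curl -sfL https://get.k3s.io | sh -

# Verify. The generated kubeconfig (/etc/rancher/k3s/k3s.yaml) is
# root-only by default, hence the sudo:
sudo kubectl get nodes
```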
It's still in my history, so I'm just
rerunning the same command.
And now we see that our K3s server has started.
So let's see. I know that I can
just try kubectl get nodes. We see that
/etc/rancher/k3s/k3s.yaml is
only readable by root. So let's do
sudo kubectl get nodes. And here we go:
we see that we have a node called pi1,
which is a control plane. Awesome. So
now we can see how our second Pi is
going.
This one is working too. Our host name
is set up properly.
And then we can add this one as an agent
node in the same cluster. So now we have the
control plane running here, but we want
to configure this one as a worker node in
the same cluster.
And
let's see, in the documentation,
the quick start guide shows how
to install additional agent nodes and
add them to the cluster: run the script
with a couple of parameters. So then we
need to get the IP and we need to get
the token.
The value of the token is stored at
/var/lib/rancher/k3s/server/node-token on the server node.
So let's check that out: cat that file,
run as root. Here we go, here's our
token. And our IP address is going to be
the IP address of this node, the
control plane. Now, I can figure that out
by running ip a, and I know it by heart,
but here it is. The IP address is this.
So we need the IP address and the token.
Now we have everything we need. So we
can copy this command, like that. And I'm
just going to run set -o vi, because
then I can navigate here with Vim key
bindings.
And here, the location of the API server
of the control plane is going to be the
IP address of pi1, so 192.168.10.39.
And the standard port for the Kubernetes
API server is 6443.
And then the token
is going to be
the value of the node-token file,
like that.
Paste that in.
And now...
I lost it. Okay,
do it again. Actually,
sometimes I like to just run vi command.txt
and paste the command in there,
like that. So now I can
take this value,
192.168.10.39,
and the token
is going to be this.
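Put together, with the IP and token read off pi1, the join command has this shape (the token placeholder stands for the real value from the node-token file):

```shell
# On pi2: install K3s in agent mode, pointing at pi1's API server.
# K3S_TOKEN is the contents of /var/lib/rancher/k3s/server/node-token on pi1.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.10.39:6443 \
  K3S_TOKEN=<token-from-pi1> sh -
```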
Okay. So, this is the entire command.
If I then just take that...
Here we go, here's our full command:
curl the install script with the server URL
192.168.10.39, the control plane, on the port where the
API server is running, and with our token.
Like that. This should work.
Now it's downloading the binaries,
starting the K3s agent...
and it finished. So now, if I run the get
nodes command again, here we go: we see
that we have a two-node Kubernetes
cluster running right in that little
cabinet behind me, and we can get
going. That's how easy it is to get a
Kubernetes home lab going if you know
what you're doing. So if all of that
command line work went a little
bit fast for you, then I understand. And
this is also why I always recommend
learning Linux deeply first before
moving to containers and Kubernetes. If
you watch this and you were not really
able to follow what I was doing, that's
a clear indicator that you're not ready
for a Kubernetes home lab yet. In that
case, you should focus on Linux first.
And this is also what I teach in
KubeCraft, where we are landing DevOps
jobs every single week now, with Brad
landing an SRE role, Frederick landing a
role, Jakob, Peter, Leonard, who landed a
job in two weeks. And they do that
because they follow my system, where we
first go deep into learning Linux, with
Arch Linux, etc. That is an eight-hour
course. Then there is an eight-hour
Kubernetes fundamentals course, and then,
and only then, do we move on to the
Kubernetes home lab course. So now we
have Kubernetes running and that is
great. But to actually understand what
we're doing here, and to build a
secure home lab, that is a bit of a
bigger undertaking, which goes beyond
the scope of a short YouTube video like
the one I'm making now. If you feel that this
went a little bit fast for you, and you
want a slower approach with more
handholding, actually going deeply
into each step along the way, then
I recommend you apply down below for my
program, where you get direct mentorship
from me and access to my
courses, which take the same process but
spend about 16 hours explaining
everything that we have just done. So
apply if that sounds interesting to you.
So let's move on. We now have a
Kubernetes cluster running, so now we can
actually start running some applications
on there. And one application that I
love to run is linkding. Now, before I
start deploying this, I do not want to
just keep running sudo kubectl on the
Raspberry Pi. We're going to do
something a little bit more mature, and
we are going to set up an environment
on the local machine. I'm not going to
run kubectl from the actual servers;
that is not how you usually manage
Kubernetes clusters. No, these servers
are servers. The control plane exposes
an API, and that means I can talk to the
API from my local machine. So what I'm
going to do, first of all: I'm
a pretty advanced user here. If
you run which kubectl on my system,
you will see that I don't have kubectl
installed. I work completely from dev
containers, and I'm just going to quickly
set up a dev container for this project.
So I'm going to first run the
template command. This sets up a
template
called general, which contains a
devcontainer.json, etc. And I'm going to
rename general to pilab-vid,
like this. And then I'm going to cd
into that, and I'm going to run devpod
up:
devpod up . (with the dot).
Now what this is going to do is it's
going to create a dev container which is
a completely isolated environment
running on my local machine here. This
is going to automatically install all
sorts of packages that I have defined.
Let me just accept my SSH key here. It's
going to clone all of my dot files. It's
going to set up my complete dev
environment with all of the things that
I want to have in that environment. So
now it's set up. I run devpod ssh, and then
I can go into pilab-vid. Now I have a
full separate environment, completely
separate from my host system. And here I
can then install kubectl: mise use kubectl.
This is going to install kubectl, and I
have a completely tailored dev
environment with all of my Vim
configuration, everything loaded in
already. Now I have kubectl available,
but now we need to get the kubeconfig
file right, because if I run kubectl
get pods now, it's not going to find
anything, because we don't have any
kubeconfig.
So if I then go to one of my nodes,
and, what was it? cd /etc/rancher/
k3s,
sudo cat k3s.yaml. Yeah, here we go:
this is the kubeconfig file. So I'm
just literally going to take this
and copy my kubeconfig file.
And if I then go to my
home directory...
.kube...
wait,
wasn't it .kube? Or maybe I have to make
it first.
So I go to my home directory, I make a
directory called
.kube,
and in there
I'm going to create a file called
config.
And now it's actually going to install
LazyVim for me and set up everything
that I need in order to do my text
editing. And then I just paste in the
kubeconfig file. And there's one thing
that we need to change, namely the
location of the server, because on the
Raspberry Pi it is configured to point at
localhost. However, we know that our IP
address is actually 192.168.10.39.
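A scripted variant of that manual copy-paste, rewriting the server address in one go (this assumes the pi1 alias from /etc/hosts, and that K3s wrote its default kubeconfig to /etc/rancher/k3s/k3s.yaml pointing at 127.0.0.1):

```shell
# Pull the kubeconfig off the control plane and point it at pi1's LAN IP
mkdir -p ~/.kube
ssh mischa@pi1 'sudo cat /etc/rancher/k3s/k3s.yaml' \
  | sed 's|https://127.0.0.1:6443|https://192.168.10.39:6443|' \
  > ~/.kube/config
chmod 600 ~/.kube/config   # kubectl warns if this file is world-readable
```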
That is the location of the control
plane on my network. So now if I run k
get nodes
actually I have to go back into my
workspace here. Now I run k get nodes
and here we go. Now I'm able to talk to
my Kubernetes cluster from my isolated
dev container. So again, to show you how
this works: if I do which kubectl,
I see that in this environment I have
kubectl installed. This is running on my
Arch Linux system. But if I do which
kubectl
on the host, like this, I have no kubectl
installed there. That's because this is
running in a dev container. And now I
can create a tailored environment for my
Raspberry Pi home lab containing all of
the packages and code and everything
that I want to use for that home lab.
And this is very advanced, but it's also
part of the stuff that I teach in
KubeCraft. There's a full course on how
to actually install this and how to use
it. So if you want to learn that,
check out the link below. So now that we
have everything set up, I can talk to my
Kubernetes cluster from my local machine
with all of the tools that I want. So
now I'm going to actually run an
application. So the first thing that
we're going to do is we're going to run
linkding. So, if I look at
this YAML file here: we are
going to create a namespace called
linkding, and we're going to create a
deployment called linkding, with
one replica, and the image that we're
pulling in is sissbruecker/linkding:1.39,
and we're going to run this on port 9090.
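The manifest I'm describing looks roughly like this; the label names are my guesses at what's in the file, while the image and port follow linkding's own documentation (and the version gets bumped to 1.42.0 later in the video):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: linkding
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkding
  namespace: linkding
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linkding
  template:
    metadata:
      labels:
        app: linkding
    spec:
      containers:
        - name: linkding
          image: sissbruecker/linkding:1.39.0
          ports:
            - containerPort: 9090   # linkding's default listen port
```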
Now, this application
is called linkding, and it's a
bookmark manager. So, if you search for
linkding,
it's going to first show you,
yeah, here we go, this is the GitHub
repo. We're actually at version 1.42 now,
so maybe I should change that. But it's
an open source project, and it's a really
nice little example application to run.
So, if I
run this...
okay, kubectl apply -f on the YAML file...
it has created the deployment. So if I
do k get pods now, then we see that we
have linkding being created. But actually, I
realize now, because I'm doing k get pods,
it's actually doing that from the
default namespace. Actually, I need to set the
namespace here
to linkding as well. That's a bit of a
mistake, but hey, that's how it goes. Get
pods: it is still creating, so let's
give it a few seconds. So let's check
on our pods: k get pods, and here we go,
we see that our pod is now running. So
now I can actually do a port-forward to
see if we can actually log into our
linkding application. So I have the
example command here.
So I'm going to make a small adjustment
here: I'm actually going to run this on
port 9090 on my local machine.
So if I now run the
port-forward, and then
I have to
complete the pod name... so now I'm port-
forwarding to the pod. And if I now open
localhost:9090,
I should see my login page.
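The port-forward boils down to this; targeting the deployment rather than the pod saves completing the generated pod name:

```shell
# Forward local port 9090 to the linkding pod's 9090
kubectl -n linkding port-forward deploy/linkding 9090:9090
# then open http://localhost:9090 in a browser
```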
Here we go. Here's our linkding login
page. And this is actually running on
the machine behind me here.
Great. So now we have an interface, but
now we also need to be able to actually
log into it. And that means that we have
to run a command that is going to create
a superuser for us. So this is the
command,
and we're going to have to adjust it a
little bit. I now have the port-forward
command running in this dev container,
but I can
just SSH
into a new session, or open another session
to that container; I can just open
more sessions into it. And if I paste
this command and
replace the pod name with
the actual one... it is throwing an error.
But let's see if it actually worked. Let
me see. No, it does not. Okay, it
looks like we have to upgrade
linkding.
Okay, so it looks like we have to
upgrade to the latest version after all,
because it's throwing errors. So then
the first solution is just to upgrade to
the latest version. So
I'm just going to delete the
linkding deployment. Then we're going
to
check our YAML file again. And I already
updated the namespace here. But now we
saw that the latest version is
1.42.0.
I believe that was it.
Yes, 1.42.0.
So let's apply that.
Okay. So the deployment is running.
Okay: kubectl config set-context...
namespace...
is linkding...
ah, set-context with the default namespace
linkding, like that. So now we are doing
everything in the linkding namespace.
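The exact incantation, for reference:

```shell
# Make linkding the default namespace for the current kubectl context
kubectl config set-context --current --namespace=linkding
kubectl config view --minify | grep namespace   # verify the change
```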
Okay, get pods. Okay, linkding is
running now. So let's see if we can run
that createsuperuser command again.
Okay, now it is asking for my password.
So let's just do "password".
I want to create it anyway. So now my
superuser is created successfully.
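For reference, this is linkding's documented createsuperuser command, adapted here to kubectl exec; the username and email are placeholders for your own:

```shell
# Run linkding's Django management command inside the pod;
# it prompts interactively for the password
kubectl -n linkding exec -it deploy/linkding -- \
  python manage.py createsuperuser --username mischa --email mischa@example.com
```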
And let's go to localhost:9090 again,
which won't work, because our port-
forward is lost,
because I have created a new pod. So
let's try that again.
Now we are forwarding again.
And here we go. Here is our linkding.
And if I now enter mischa and then the password,
we're actually able to log into the
application. And it works. Now, this is
great. We have it working. There's just
one thing that we still need to do, and
that is to make sure that we can
actually log in without using port
forwarding. And for that, we're going to
create a service. Now, the great thing,
and also why I recommend using K3s, is
that it has something called ServiceLB
configured out of the box. So what
we're going to do is create a
svc.yaml file,
a svc.yaml YAML file
and this is going to be a service which
we are going to deploy in the LinkedIn
namespace.
This is a load balancer type service
which is going to target the app linked
which is defined in our deployment file
with port 9090 and the target port 9090.
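The svc.yaml just described, as a sketch; the selector label is assumed to match an app: linkding label on the deployment's pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: linkding
  namespace: linkding
spec:
  type: LoadBalancer     # satisfied by K3s's built-in ServiceLB
  selector:
    app: linkding
  ports:
    - port: 9090
      targetPort: 9090
```

ServiceLB satisfies the LoadBalancer by binding the port on the nodes themselves, which is why an external IP shows up without any cloud provider behind it.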
So if I apply this file...
oops, spelling mistake:
apiVersion not set.
That is something that got lost in the
copy-paste.
Okay, now the linkding service is
created. So if I now do k get svc,
we see that we have a LoadBalancer
service, and note: it has an external IP,
and this is very important. If you have
ever tried to create a Kubernetes
cluster before, you will know that
usually this is going to lead to
problems, and you won't get an external
IP. However, K3s does this out of the
box, and this is super neat. So now, if I
just enter the IP address
of the actual Raspberry Pi here,
with port 9090, I'm actually able to
log in on an IP address. And now I
could, for example, in my home lab
dashboard, I can actually now point this
to a service that's running on my
Kubernetes cluster behind me. And that
is one of the reasons why K3s is such a
great option: because you can actually
just create LoadBalancer services, so
that you can run applications on
specific ports on your Raspberry Pi. It
works great. Well, there you have it.
Now you have a Kubernetes home lab where
you can start building, experimenting,
and start learning the tools of the
trade of a DevOps engineer. This is just
the beginning. We haven't even touched
on exposing your applications to the
internet yet, or security, persistent
storage, etc. All of that is covered in the
Kubernetes home lab course that I use
for my students to land six-figure
jobs. And you see, it works. We are
landing jobs almost every single week.
And as Leonard here says, he did
this because he created a K3s home lab.
Another example is Danil, who landed a
DevOps job straight out of university;
talking about his home lab and all
the little projects that he built in
KubeCraft was the biggest part of his
interview process, and he
barely even got any other technical
questions. So this stuff really works. If
you just want to get going, get the full
system, plug and play, and get started
right away, including direct mentorship
from me, where you can ask me and the
community questions if you get stuck on
anything, click the link down below;
you can apply and see if we are a good
fit. All the best of luck with your new
Kubernetes home lab.
Full Parts List & Hardware Spec: https://kubecraft.click/homelab-1-yt
Apply to join KubeCraft & land your DevOps job: https://kubecraft.click/947ddc

The Kubernetes homelab mistake no one explains: why your setup keeps you stuck even when everything “works”

🎯 You’ll Discover:
• Why bare metal beats VMs for faster Kubernetes learning and real-world confidence
• The common Kubernetes-on-Raspberry-Pi pitfall engineers miss (and how to fix it)
• A practical end-to-end build: hostname hygiene, cgroup settings, K3s install, kubeconfig, Dev Containers, networking, and a live app
• The mistake most engineers make: managing clusters from the nodes instead of a proper local toolchain

💡 Member Wins:
“After 100+ applications, I landed a Senior SRE role. The edge came from my KubeCraft home lab.” – Eric, Senior SRE
“My salary 3x’d after joining KubeCraft and applying what I learned.” – Jonathan, DevOps Engineer
“Landed a Junior DevOps Engineer role moving from IT Support thanks to KubeCraft” – Maor, Junior DevOps Engineer

🚀 Want to learn faster and finally build a thriving DevOps career?
Join 700+ engineers inside KubeCraft → https://kubecraft.click/947ddc

🎁 Free DevOps Career Blueprint: Learn exactly how to get hired in DevOps → https://go.kubecraft.dev/blueprint-yt-1

📚 Chapters:
00:00 Why a Kubernetes homelab helps you stand out in job interviews
02:07 Hardware philosophy: bare metal vs VMs for real learning
04:10 Mini rack, Raspberry Pi parts, SD imaging, and SSH access
07:22 Hostname hygiene and enabling cgroup memory on Raspberry Pi
10:05 Install K3s control plane and verify with kubectl
12:18 Join the second node with token for a two-node cluster
14:40 Pro workflow: Dev Containers, kubeconfig, and talking to the API
18:33 Deploying Linkding: YAML, port-forward, and superuser setup
22:05 Exposing apps with K3s ServiceLB and LoadBalancer Services
24:36 Next steps: Linux first, KubeCraft, security and storage

Keep building,
Mischa van den Burg – Senior DevOps Engineer, Microsoft MVP, 55K subs, 700+ private students
Founder of KubeCraft, the #1 DevOps Career Accelerator & Community
★★★★★

#kuberneteshomelab #k3s #raspberrypi

Alternative Titles (for the algo):
Build a Kubernetes Homelab with Raspberry Pi (Step-by-Step Tutorial)
How to Build a Kubernetes Cluster at Home with K3s and Raspberry Pi
Raspberry Pi Kubernetes Cluster Tutorial (K3s Home Lab Setup)
My Raspberry Pi Kubernetes Homelab Tour (Complete Beginner Guide)
Build a Kubernetes Homelab That Lands You a DevOps Job
How I Built a 2-Node K3s Cluster on Raspberry Pi (Full Setup Guide)
Kubernetes on Raspberry Pi: Complete Homelab Setup in 2025
Build a Mini Kubernetes Rack for Your Apartment (Quiet & Practical)
The Ultimate K3s Raspberry Pi Cluster Tutorial for DevOps Engineers
From Zero to Kubernetes: How to Build a DevOps Homelab That Gets You Hired