So yeah, we will see GitOps and how we can combine it with KubeVirt, and go ahead from there.
So today we are going to explore a really exciting transformation: how we can manage traditional VMs using GitOps principles. Yes, I'm Neel Shah, I work as a developer advocate at Middleware. Middleware is a full-stack observability platform, and apart from that I build a lot of DevOps communities. I also run a CNCF chapter here in my local region and have given talks at 10-plus conferences, including KubeCon.
So the vision: what if VMs could be managed like containers?
So how will we do that? We will discuss how the different flows work. Imagine a system like this: you define a VM in YAML, you put it in Git, and Kubernetes automatically provisions it. How easy is that? This workflow provides seamless automation where everything is managed by pipelines, not people. We have declarative configurations, so there is no drift and no guesswork, because a lot of the time we just hope it will work; here there is no guesswork. Apart from that, it is Git-driven, so every change is traceable; if something wrong has gone into production, we can also revert it. And there is zero manual intervention, so the infrastructure just syncs itself.
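To make that concrete, a minimal KubeVirt VirtualMachine definition could look something like this; the name, namespace, sizing, and disk image here are just illustrative assumptions, not values from the talk:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-vm            # illustrative name
  namespace: dev             # illustrative namespace
spec:
  running: true              # start the VM as soon as the manifest is applied
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04   # example container disk image

Committing a file like this to Git is essentially the whole provisioning step; the controllers do the rest.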
The vision is: what if we could manage all the VMs through GitOps processes? That is the GitOps promise, and before that let's
see an example. Say you need to deploy 50 Ubuntu VMs for a QA environment. Instead of manually creating them in vCenter, you define a manifest in Git, commit it directly, and Argo CD takes care of it. And whenever you need to update the OS image of all the VMs...
>> Neel, I'm so sorry,
your presentation keeps flipping. Do you
mind reloading it for us?
>> Uh, sorry, what happened?
>> Your presentation is continuing to flip.
Um so I don't think our attendees can
really see it. Are you able to reload
your presentation?
>> Yeah, sure, sure. So coming to the GitOps promise for KubeVirt. GitOps brings structure to VM operations, so let's break it down into four steps. First, infrastructure as code. How do we define that? We keep every VM specification, like CPU, memory, and storage, in Git. You can define, for example, a VirtualMachine resource, say Ubuntu 22.04 with a given number of cores, and you get version control and peer review essentially for free. So your infrastructure is properly versioned.
Next comes automated operations, which are taken care of by Argo CD. Argo CD watches your Git repo and pushes out all the changes; it can also scale up CPUs and so on, because Argo CD reconciles everything automatically in Kubernetes. Next is continuous reconciliation. If someone logs into a node and tweaks something manually, Argo CD detects that it is out of sync and reverts it. Apart from that, it also gives you safe rollbacks: if you want to undo a bad config, you just revert a commit and it restores the previous state. It's GitOps doing all the work for you, so now your VMs are also part of the automation loop.
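As a rough sketch of how Argo CD is typically pointed at such a repo, an Application resource with automated sync could look like the following; the repo URL, path, and names are placeholders I'm assuming, not details from the talk:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vm-fleet                   # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/vm-manifests.git   # placeholder repo
    targetRevision: main
    path: vms/dev                  # folder holding the VirtualMachine YAML files
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true                  # delete resources that were removed from Git
      selfHeal: true               # revert manual changes made directly on the cluster
    syncOptions:
      - CreateNamespace=true

The selfHeal flag is what makes the "someone tweaked a node manually, Argo CD reverts it" behavior possible.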
Next, let's look at the complete lifecycle, because here with KubeVirt we consider the whole VM. The first step is provisioning: spinning up a new VM. For example, a team commits vm.yaml for development and it is provisioned almost instantly. Then it configures: you declare the desired network, the desired disks, all of that in your file, and everything is applied from there. Next comes update: if you change the image or any specification in the VM's YAML file, that change is rolled out as well. You can also scale VMs horizontally and even vertically. All of this can be done, and there is also the option to live-migrate VMs across different hosts. So consider a system where you want to live-migrate VMs to another node for maintenance or scaling; that is possible with Kubernetes, KubeVirt plus Argo CD.
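For the live-migration part, KubeVirt exposes a dedicated resource; a minimal sketch, assuming a running VM instance named ubuntu-vm, would be something like:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-ubuntu-vm          # illustrative name
  namespace: dev
spec:
  vmiName: ubuntu-vm               # the running VM instance to move to another node

Applying this asks KubeVirt to move the workload to another schedulable node without shutting the guest down.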
So in a case where your QA team needs more compute power, instead of calling the infra team they simply update the configuration in Git. Next comes the real-world implementation.
So version control is there: it stores all your GitOps manifests and related files, and every configuration change already has a history. Argo CD watches your repo for updates and triggers the automation: if a new commit changes a VM's spec, it syncs automatically to the cluster. It also provides a visual dashboard showing which resources are in sync and which are out of sync.
There are also health checks and auto remediation. Say a VM fails: the automation is creating a new VM and that creation fails. What will happen? It will trigger an alert, and you already have that alert set up. You can also integrate Prometheus and Grafana so you always get alerts in Slack or Teams or anywhere.
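One possible way to wire up such alerts on the GitOps side, separate from the Prometheus and Grafana route mentioned above, is Argo CD Notifications. Assuming the notifications controller is installed, a rough Slack configuration might look like this; the token reference and message are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token            # resolved from the argocd-notifications-secret
  template.app-sync-failed: |
    message: |
      Sync of {{.app.metadata.name}} failed; check the Argo CD dashboard.
  trigger.on-sync-failed: |
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]

An Application then subscribes with an annotation such as notifications.argoproj.io/subscribe.on-sync-failed.slack: vm-alerts (channel name assumed here).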
Now, moving on to multi-environment consistency. One of the most valuable benefits is environment consistency, because everyone works in different environments: development, staging, production. Teams often face the "it worked in my staging environment but failed in production" issue. With GitOps, the same manifest can be reused across environments, because it already has everything you need, and you can apply that manifest to different environments using overlays or Kustomize. In development we use a small VM spec for faster iteration; in staging we add extra performance tuning and validation checks; in production we enable high availability and security policies. For example, a banking team might run the same VM image across environments but with different storage classes, SSD for production and HDD for dev, and everything still flows from the same Git repo.
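As a rough sketch of the overlay idea, assuming a Kustomize layout with a shared base and one directory per environment, a production overlay might bump the VM resources like this; the paths and names are illustrative:

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                     # the shared VirtualMachine manifests
patches:
  - target:
      kind: VirtualMachine
      name: ubuntu-vm
    patch: |-
      - op: replace
        path: /spec/template/spec/domain/cpu/cores
        value: 4

A storage-class difference like SSD versus HDD would be handled the same way, by patching the relevant disk or DataVolume field per environment.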
Next comes configuration drift, because whenever a lot of changes happen we need to detect them and keep things maintained. Configuration drift is one of the silent killers of reliability, because in traditional setups engineers tweak VMs manually: maybe update a config, or add a disk, and forget to document it properly. Six months later you have 100 snowflake servers that no one can reproduce. GitOps solves that. It detects drift: Argo CD shows the out-of-sync state visually, alerting teams, and it sends alerts to Slack or email. It also remediates automatically: auto-sync restores the defined state, so if you enable auto-sync it will discard any unsaved manual changes and return to the current desired state. And you get audited changes: every modification is tracked in Git history. So, for example, if someone SSHes into a VM and changes a kernel parameter manually, Argo CD detects it within seconds and resets it to the version that is defined in Git. Everything that is defined in Git will always stay that way.
The next major thing is safe rollback to a known good state. When something goes wrong, rollback should be instant, not a fire drill. We should not spend much time on it; it should be a very smooth process. Consider a scenario: you push a new change that causes the application to crash. Here's what happens next. Your monitoring detects the degraded application. You identify the last stable commit, run git revert, and push. Argo CD syncs the environment back to the stable configuration automatically. Within minutes your VMs are back to normal. No need for snapshots, no need for manual restores, just Git doing the magic. So you can say we are not taking a lot of backups; we don't rely on lots of backups, because Git is already traceable, so we just roll back to where it was working.
Next comes modernizing infrastructure. For modernizing infrastructure there are two or three different things we take care of. One is legacy workloads: nowadays you can lift and shift existing virtual machines, like a legacy Java app, without rewriting a single line of code. By integrating KubeVirt with Argo CD we have the leverage to do that, transforming the legacy application into a modernized one.
Next comes the hybrid platform: you can run both containers and VMs in the same Kubernetes cluster, orchestrated by the same control plane. Next is unified DevOps: CI/CD pipelines, GitOps workflows, and the Argo CD dashboard can cover everything, with no separate systems for VMs. For example, we were dealing with a fintech client, running the gateway on VMs and the rest in containers. With KubeVirt, in the same GitOps workflow you can now manage, monitor, version, and automatically track and reconcile everything, because GitOps gives you a single place where your containers and your VMs are trackable and scalable, and you can manage them at a single point.
So coming to some of the key takeaways and how we should think about them in production. To wrap things up: treat VMs as code, defining everything declaratively, which is good for auditability and collaboration. Next, automate everything: let Argo CD handle the VM lifecycle, like provisioning, updating, scaling, and migrating VMs for different applications. Next, eliminate drift: continuous reconciliation ensures the real state always matches the desired one. GitOps brings cloud-native practices into traditional operational infrastructure. If you have legacy workloads, you can move toward a modernized application, handling everything through KubeVirt and Argo CD on Kubernetes.
Uh, yeah, that's it, just giving a brief. Closing out: GitOps for VMs is not just a modernization tactic, it's a mindset shift, because it brings the same power that transformed Kubernetes operations into the world of virtual machines. If you're running KubeVirt or exploring GitOps workflows, start small, maybe automate part of your VM fleet, and gradually expand. I would love to continue the conversation; if you have questions you can ask in the chat, and later you can also connect with me on LinkedIn, it's Neel Shah. You can also share in the comments section. Yeah, thank you for this, and I have like one or two minutes for Q&A if anyone has questions, but I will also share the presentation. If anyone has a question, you can ask here.
yeah
GitOps is the next big thing that most people are feeling nowadays. In the chat section, to be clear, the VMs were just an example, just to talk about scaling and checking on VMs.
There is also a question in the Q&A.
I'll just read it out. How hard are data
volumes in the GitOps flow? They were
kind of in the way last time I tried.
>> Uh, pardon please. Where, uh, can you tell me where the question is?
>> Uh, it's in the Q&A function. How hard
are data volumes in the GitOps flow?
Uh-huh. So data volumes are mostly external things: how you attach the different data volumes to a VM. The GitOps workflow mainly helps you automate everything, infrastructure and VMs, and it is not very hard, because you declare all of it in the manifest files that live in Git. So it is pretty easy, and sometimes what I see is people use external storage, for example buckets, and people sometimes use that as well.
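As a hedged sketch of how a data volume can be declared in Git next to the VM, assuming the Containerized Data Importer (CDI) is installed, something like this could be committed and referenced from the VM's volumes list; the name, size, storage class, and image URL are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-root-dv             # illustrative name
  namespace: dev
spec:
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
    storageClassName: standard     # assumed storage class
  source:
    http:
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

The VM then mounts it with a dataVolume volume entry pointing at ubuntu-root-dv, so the disk stays part of the same declarative flow.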
Hope that helps. Next, on continuous VM replication: nowadays, if you look around, a lot of live container migrations are happening, and similarly live VM migration is also there. A lot of tools help with that, and it is a similar kind of process: just as containers are migrated when you move from one cloud to another, VMs can also be migrated.
>> Um, that brings us to the end of our time here, Neel. Thank you very much for that talk. That was great.
>> Um, if anyone has any questions, the Q&A function is only partially working, unfortunately. So we've got the virtualization chat in Slack in the Kubernetes workspace, and you can find Neel on, I believe, the CNCF workspace.
>> Um
>> Yeah, I can also share my LinkedIn; anyone who wants to can also connect with me directly. Thank you.
>> Perfect. Thank you very much, Neel.
From VMs to GitOps: Managing KubeVirt Workloads Declaratively with ArgoCD - Neel Shah

What if managing virtual machines could be as seamless or automated as deploying a containerized application? In this talk, we shall discuss how to bring the power of GitOps to KubeVirt by managing VM workloads in a declarative fashion using tools such as ArgoCD or Flux. This talk will show how virtual machines can be treated as code with changes tracked in Git. And with ArgoCD, we can now automate all possible VM life-cycle operations such as provisioning, configuration, update, scaling, and live migration. The session will showcase real-world examples of versioning the VM manifests, enforcing consistency across different environments, and safely rolling back to a known good state. Whether you are modernizing legacy infrastructure or building a hybrid platform that mixes VMs and containers, this session will show you how GitOps principles can make the life of operation teams easier, fight drift, and bring VM management into the modern DevOps era.