In the world of Kubernetes, we
have such a large variety of tools,
especially when it comes to CI/CD. There
are many different tools to choose from,
all with different pros and cons. When
it comes to setting up software delivery
pipelines in Kubernetes and multicloud,
we think of CI/CD. Think of all the
components you need to build software,
test and package container images, run
tests, security scans, then deploy to
dev, staging and production. So the
process generally starts when developers
commit changes to source control. Then
we need some popular tools to build,
test and package. These popular tools
are GitHub Actions, Bitbucket Pipelines,
TeamCity, Jenkins, and GitLab CI. When
we think of CI/CD, we think about these
tools to build pipelines. But we often
forget that these tools are mostly great
for building CI pipelines. CI and CD are
two different things. Continuous
integration is a software development
practice where software developers push
their code changes to source control
which triggers builds and tests. Once
that is done, the deployment can happen
which introduces more tooling. Now the
challenge here is that many of these
popular tools are really good for CI but
not so great when it comes to continuous
deployment. Many of these tools either
require third-party or custom workflows,
custom plugins or additional tooling
which lacks a lot of features and this
causes engineers to resort to writing
custom scripts. So this is where we end
up writing custom bash or custom
PowerShell or custom infrastructure as
code to deploy to things like AWS, Azure
or any cloud provider or even trying to
integrate with things like kubectl or
Helm for Kubernetes deployments. Many of
these tools focus on solving just one
problem. They might be great for
Kubernetes, but not for anything else.
And that's a challenge when you're
running in multicloud environments. This
is where Octopus comes in. Octopus
specializes in continuous delivery and
is a centralized platform that allows
you to deploy to Kubernetes, multicloud,
on-prem, and pretty much anywhere.
Instead of having all these custom
scripts, plugins, or more tooling to
integrate CI with CD and to deploy to
multicloud environments, Octopus
provides a central platform that
integrates perfectly with your CI
solution and takes care of everything
when it comes to continuous deployment.
Today's video is sponsored by Octopus
Deploy. It's a tool that I've used for
many years and I'm super excited to see
how it deals with modern cloud-native
platforms like Kubernetes. Today we
skill up our DevOps portfolio by taking
it for a spin. We have a lot to cover,
so without further ado, let's go.
Now, from the Octopus Deploy website, we
can jump straight into the
documentation. The documentation has a
great getting started overview page
where they cover what Octopus is, but
more importantly, all of the different
terminology such as projects,
environments, releases, deployment
processes, variables, infrastructure,
lifecycles, runbooks, and more. What
we'll be doing is diving straight into the
installation section. And here we'll
need to learn about all the important
Octopus components. Now, the Octopus
server can run absolutely anywhere.
Because it can run in a container, it's
highly portable, which makes it very
easy to run. It requires a SQL database
for storing important data as well as a
volume to persist important files. So,
you could use a volume mount and you can
also run the SQL database in a
container. So, you have all this
flexibility to self-host. You can run on
a VM, you can run on a container
instance in the cloud, you can run on
Kubernetes, but there's more. Now,
Octopus has a fully functional free
tier. So, you can actually use it with a
ton of features and it's free forever.
And if you don't want to host all of
this yourself, there can be a lot of
benefit in having a look at the Octopus
cloud solution where you can also have a
fully functional free tier without
having the complexities of hosting all
of these components yourself. So
everything can run on Octopus Cloud and
they basically take care of hosting all
of these components for you. The
installation page has many guides on how
you can run Octopus. So you could run
the Octopus server outside of a
container, for example, running it as a
Windows service on a Windows VM or for
the most portable solution is to run it
in a Linux container. So we will go
ahead and take a look at that. But
firstly, it's important to know that
Octopus has what's called a master key.
When you spin up an Octopus server,
Octopus uses a master key to encrypt
important sensitive data. So if you ever
needed to make a backup and restore from
that backup in the future, you would use
the master key to decrypt the data. Now
to run Octopus in a Linux container, you
can simply use this one-liner, a docker
run command. You can also create a
docker compose file and run it
declaratively or you could run it in
something like Kubernetes. So to run the
Octopus components, I've left all my
settings in the source code. I'll put a
link down below so you can follow along.
Firstly, what we'll do is set up that
master key. I'll generate this using
OpenSSL. Then I'll set my Octopus free
license key that I get from the website.
I'll set up my initial admin username
and password as well as email. Once I've
set these environment variables, I can
go ahead and run my Octopus database. So
for this, I'll use Microsoft SQL Server.
And there's a Linux container available
for that as well. I'll run that using a
simple docker run command, and I'll
mount all my SQL data under a data
directory locally and I'll put it in an
MSSQL folder. I hop over to the terminal
and I run this command and that will
start my database server in a container.
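To make that concrete, here's a sketch of what this setup looks like. The password, network name, and local data path are placeholders of my own, not required values; the image tag and environment variable names are the ones documented for the Microsoft SQL Server container:

```shell
# Generate a random master key the way the video does with OpenSSL.
export OCTOPUS_MASTER_KEY="$(openssl rand -base64 16)"
export SA_PASSWORD='MyS3cretPassw0rd!'    # hypothetical SQL 'sa' password

# A shared network so the Octopus server can reach the database by name.
docker network create octopus-net

# SQL Server in a Linux container, with its data mounted locally.
docker run -d --name octopus-db \
  --network octopus-net \
  -e ACCEPT_EULA=Y \
  -e MSSQL_SA_PASSWORD="$SA_PASSWORD" \
  -p 1433:1433 \
  -v "$(pwd)/data/mssql:/var/opt/mssql" \
  mcr.microsoft.com/mssql/server:2022-latest
```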
Now all I need to do is point my Octopus
server to that container. Now we can go
ahead and use docker run to run our
Octopus server. I run the container in
background mode. I give it a name
octopus. I run it on a network. This is
the same network where my SQL container
runs. So they'll be able to connect to
each other. Then I expose a few ports.
These container ports are well
documented on the Octopus website. Port
8080 is for the API and the web portal.
8443 is for gRPC clients. 443 is for TLS
or SSL for the API and HTTP portal. And
10943
is the port for polling Tentacles. Now,
Octopus talks to your environment using
what's called agents. In Octopus lingo,
they're called Tentacles.
Now, Octopus can either talk to the
agent or the agent can talk back to
Octopus. So, there are different
networking configuration options we'll
talk about in this video. This means
Octopus can match your network security
setup. Then, we have our admin username
and password. We have our Octopus free
license. We're passing in the master
key, the connection to the database that
we've just created, and a bunch of
important file mounts that Octopus
needs, and then we pick the container
image we want to run. Now, I can go
ahead and paste this to the terminal,
and my Octopus server will be starting
up. One of Octopus Deploy's big strengths
is its highly mature dashboard. In this
day and age where everything is stitched
together with YAML and held together
with a bunch of open-source components,
for some, especially for larger
organizations, it feels highly
fragmented and decentralized and this
can be a challenge. Now, the Octopus
dashboard centralizes it all and you can
immediately go and deploy your first
application by creating what's called a
project. You give it a name and you
specify where you want to deploy to. To
do that, we'll take a look at the
meaning of some of the terminology. So,
what a project is, what an environment
is, and how to create a deployment
process, what are deployment targets,
and what are releases, and how to deploy
them. The sidebar has everything you
need, starting with projects. This
allows us to define our deployment
processes and group them. An important
section is infrastructure which covers
everything we can go ahead and deploy to
and other important management
components we'll take a look at. So
before we go ahead and create a new
project, let's go ahead and take a look
at connecting our infrastructure by
creating what's called an environment.
Now environments in Octopus are pretty
much self-explanatory. It's a way for us
to group infrastructure we want to
deploy to. So I have a dev Kubernetes
platform and I have a production
Kubernetes platform and I'll want to go
and set this up under environments. Now
you can easily go ahead and set this up
by going to the infrastructure menu and
clicking on environments and then we can
go ahead and add new environment. We can
call our environment development. Go
ahead and save that. With our
development environment created, we can
go ahead and add what's called a
deployment target. Now a deployment
target in Octopus could represent
anything that we could deploy to. This
could be a physical server, a virtual
server, it could be a Kubernetes
cluster, it could be a Lambda function,
it could be onrem or in the cloud. It
could even be something like a SQL
server or database. So for our
development environment, let's go ahead
and add a deployment target. And here
you can see a target could be something
like a Kubernetes cluster, Linux
machines, Windows machines, Mac. It
could be cloud resources like Microsoft
Azure, AWS, or even offline deployments.
One important thing to take note of, if
I select the Linux target, you'll see a
listening tentacle as well as a polling
tentacle. Octopus generally has two
types of connectivity with targets to
support your network setup. So, two
options: either Octopus can talk
directly to the agent on your server,
or the agent can poll the Octopus
server. You can set this up to match
your security posture. So I'm going to
go ahead and connect my Kubernetes
cluster. And the recommended way is to
use the Kubernetes agent. Here I can
give it a name. I can just call it K8s
dev. I select my environment I created
development. And here I can add tags. I
can create a tag called K8s. I can
create one called Linux. I can create
one called US East like so. Tags are a
very smart way to group certain
infrastructure together so that we can
filter them based on certain criteria.
So let's say I want to roll out a
deployment across US East. I can select
all the deployment targets that match
the tag US East regardless of what that
type of target is. Let's say I'm
targeting updates across Linux virtual
servers. It doesn't matter whether that
server is part of a Kubernetes cluster
or not. I can then roll out a deployment
process that targets the Linux tag. Or I
want to update targets where the
operating system is Windows. Or I'd like
to run a SQL script across all SQL
targets. This way I can filter my
deployment process using tags. So to
visualize this, the Octopus deployment
server can perform deployments to many
different targets. And these could be
Kubernetes clusters, they could be
Windows servers, they could be Linux
servers or databases or anything. On my
Kubernetes cluster, I could have tags
such as K8s that identifies that target
as a Kubernetes cluster, but it may be
part of the US East region. So I may
want to filter based on that. It also
has the Linux tag. So if I wanted to
perform updates across all of my
servers, I could target Linux. I could
filter clusters based on regions. So I
could have like at the top here, US East
and at the bottom I could have US West.
I could filter out based on the
operating system. So I could filter for
Windows or Linux. I could also have a
Windows agent that has a SQL database
and filter based on that. Tags allow us
to avoid hard-coding our environment into
our deployment process. So once I've set
up my tags, I can go ahead and click
next. Then it'll prompt me to install
the NFS CSI driver to my cluster. This
is a basic Helm chart that will deploy a
storage requirement to our cluster. This
creates a shared storage for Octopus to
use so that we don't have to set up
volumes within our cluster. So I can
just go ahead and copy this command and,
pointing to my development environment,
I can go ahead and run that command.
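The console generates the exact command for you; in shape it looks something like this sketch, using the upstream csi-driver-nfs chart (the repo URL and namespace here are from that project's docs, and the kubectl context name is hypothetical):

```shell
# Install the NFS CSI driver chart that Octopus's shared storage relies on.
helm repo add csi-driver-nfs \
  https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system \
  --kube-context my-dev-cluster    # hypothetical context for the dev cluster
```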
That'll install that dependency in my
Kubernetes cluster. Now with that
installed, I can go ahead and click
next. And this will provide us with the
helm install command to go ahead and
install the Kubernetes agent to our dev
environment cluster. You can switch this
to either bash or powershell depending
on your environment. And go ahead and
copy this command. As you can see, this
runs the Kubernetes agent image. It sets
a couple of important things like the
tags we mentioned earlier, which
environment this one belongs to, IP
addresses of the Octopus server,
authentication details, and basically
the Octopus server will now wait on this
page and wait to establish a connection.
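The copied command is shaped roughly like the sketch below. The chart location and value names are approximations from the Octopus documentation, and the real command embeds a generated bearer token, so always paste the exact one the portal gives you:

```shell
# Approximate shape of the generated agent install (placeholders in <>).
helm upgrade --install --atomic \
  --create-namespace --namespace octopus-agent-k8s-dev \
  k8s-dev \
  oci://registry-1.docker.io/octopusdeploy/kubernetes-agent \
  --set agent.acceptEula="Y" \
  --set agent.targetName="k8s-dev" \
  --set agent.serverUrl="http://<octopus-server>:8080/" \
  --set agent.bearerToken="<generated-token>" \
  --set agent.targetEnvironments="{Development}"
```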
So, if we jump to our terminal and we go
ahead and paste that command, that'll go
ahead and install our deployment target
to our cluster. With that running in the
background, Octopus will automatically
detect that the target has connected.
And we should see all of these checks
turn green. We can see it's established
a connection and now it's performing a
health check. And if we give it a
minute, we can see our deployment target
is now healthy. I can then go ahead and
close this page. And I can repeat this
for my production environment. Add an
environment, call it production. Go
ahead and add our deployment target. Add
a Kubernetes polling agent. Give it a
name and add it to my production
environment. Go ahead and provide my
tags. Copy the same CSI driver helm
command. Paste that to the terminal.
Making sure I'm pointing to my
production cluster. And with that done,
I can click on next. This time I can
take the helm upgrade command for my
production cluster. Go ahead and paste
that to the terminal. We'll wait for
that connection to be established. And
after some time, our production
deployment target is healthy. And now
over on the environments page, we can
see the breakdown of our development and
production environments with our
deployment targets and their statuses.
Next up on the left hand side of the
menu here, we have machine policies
under environments. This page allows us
to create machine policies that define
various policies for dealing with
deployment targets like health check
intervals, machine connectivity,
updates, and cleanup. And then on the
left here, we also have machine proxies.
Octopus has support for the use of
proxies for two use cases. The first one
is you can use a proxy server for
Octopus to allow it to communicate with a
deployment target. You can also use a
proxy for a deployment target and an
Octopus server to make requests to
external services. So you have control
of all communication. And with
infrastructure, you also have the
ability to define workers. Workers are
machines that can execute tasks that
don't need to run on the Octopus server.
So think of things like publishing to
Azure websites, deploying AWS CloudFormation
templates, deploying to AWS
Elastic Beanstalk, Amazon S3 uploads,
backing up databases, performing database
schema migrations, or configuring load
balancers. You generally don't want to
run these types of tasks on your Octopus
server. So you can use things like
workers. Now that we have control of our
infrastructure, we can go ahead and set
up a deployment process. We define our
deployment process in what's called a
project. A project contains a process
with a list of steps. It contains
settings such as variables, secret
injection, everything you need to define
a full deployment process. So under the
projects tab on the left, you can go
ahead and click add project. And here we
give our project a name. I have a
service called service A. So I'm going
to set up a project for a deployment
pipeline for service A. Now, I could
select what I want to deploy to, but I'm
just going to pick other because I want
to set up a custom project and take a
look at the features. So, I click create
project and this will take us to our
project landing page. The first thing to
note under the project is the process
editor. And here we can add steps for
our deployments. To give you an idea
what steps are, you can see a bunch of
featured steps over here. You could add
a step to go ahead and deploy a
Helm chart, or deploy some Kubernetes YAML.
You can run a script such as a
PowerShell or bash script. Run a script
against Azure cloud or AWS. You can
deploy to IIS, so Windows web services.
Or you could deploy any type of package.
When you add these, they basically
appear as steps in your process. And you
can have a number of steps. I'd like to
think of these steps as pre-built
modules. And as a platform engineering
team, you can even build your own and
standardize them across your company. So
you can provide these steps as building
blocks for your company's deployment
pipelines. Think of a platform
engineering team that builds Terraform
modules that other teams can reuse or
develops capabilities as a service to
other teams. Octopus provides step
templates as this capability. So you can
build your own steps just like the ones
we've seen here. So let's go ahead and
build a Kubernetes deployment process
and create templates that teams can
reuse and set up building blocks for
future pipelines.
So to create these reusable building
blocks on the top left we have the
deploy menu and at the bottom of this
menu we have script modules. If we go
ahead and click on that, script modules
allow us to basically write building
blocks for steps. So think of scripts we
want to reuse like PowerShell or bash
scripts. We could create these as
modules and then plug them into steps.
To show this in action, I'm going to go
ahead and create a script module. And
something that we'll always be doing in
Kubernetes is creating a namespace. So I'm
going to create a new script module
called create namespace. And this script
module will create a namespace if it
doesn't exist. And if it already exists,
it will do nothing. So in my source
guide that I've linked in the
description, you'll find the script
modules that I've documented. So we're
going to create one called create
namespace. And this is what it's going
to look like. It's just a bash script.
So it's a function that can be reused.
It takes in a namespace. And basically
what it does is run kubectl get
namespace. And if there isn't any, it
will go ahead and create the namespace
and echo that. Else it will just echo
that the namespace already exists. So, I
can take this bash function and on the
Octopus UI, under the body section of the
script module, I can toggle this to make
it bash. And you can see it gives you an
example of how to write a script module.
I'm going to replace this with my own
function. Then on the top right, I can
just go ahead and save that. Now I have
a script module that can be reused. And
this is great for platform engineers.
This prevents us from reinventing the
wheel. Now we can go back to our project
and set up a step template and use our
script module. So what I'm going to do
is under the process just say create
process. Then we want to add a step. The
step that's going to work for us is the
run a script step. So I'm going to go
ahead and click add on that. And this is
the process editor. You'll see all the
steps that you're adding will appear on
the left hand side. And whichever one
you select, you can go ahead and
customize it on the right side. So
you'll see things like giving it a name.
I'm going to rename this as create a
namespace. My script source is an inline
script. I'm going to toggle this to bash
and I'm just going to call the script
module I've defined earlier by sourcing
it. And then I can say create namespace
and let's create a product namespace.
You can also reference things like
packages. It's important to know that
these script sources can be inline. They
can also refer to a git repository.
We'll take a look at this in a bit, as
well as a package. We can add
references to packages over here as
well. And execution location, where do
we want to run this? In our case, we
want to run it on a deployment target.
So, it's important to start filtering on
which tags we want to target. So, it's
important here that we run on K8s and
use our Kubernetes tag as we don't want
this to run on a non-Kubernetes
deployment. And we'll leave all the
other settings as is. So, then we go
ahead and click save at the top. And
that is our first step done.
Now instead of hard coding values like
the product name space in our step,
Octopus allows us to set things such as
variables. Variables are very useful
because they allow us to supply dynamic
values and change values without
having to change our deployment process.
So on the project page on the left hand
side you'll see project variables. Now,
it's important to know that you can
scope variables to a project, but you
can also scope them globally. So, you
could define a variable called namespace
globally that all deployment pipelines
reuse and inherit. But you can also
override this with a project level
variable. So here I'm going to go ahead
and create a new variable. I'm going to
call my variable namespace and I'm
going to give it a value called product.
Notice that we can change the type of
the variable here as well. So you can
set things like cloud accounts,
sensitive values, certificates, and so
forth. We can also define a scope for
the variable. We'll take a look at that
in a bit. So just ignore that for now.
And go ahead and save. Now to consume
this variable, we need to update our
step. So let's click on the step. Click
on the inline source code. And I've
documented this in my guide. How to
refer to variables. Basically, you'll
just use this syntax: you execute a
function called get_octopusvariable
with the name of the variable. So we'll
go ahead and copy this bit. Go back to
the inline source code. Remove this
product variable. So we're no longer
hard-coding it. And we execute that
function instead. Then head over to save
and we should be good. Now these script
modules are by default not inherited by
all projects. We have to include them.
So if you just go to the process page
and look at the right hand side, you'll
see that no script modules have been
included yet. So click on include and
select the module that you'd like to
include into this project and click
save. This allows development teams or
the teams in charge of the deployment
pipelines to explicitly include modules
that they'd like to use.
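So the inline step body ends up as a small fragment like this. It isn't standalone: create_namespace comes from the included script module, and get_octopusvariable is the helper Octopus exposes to bash steps for reading variables:

```shell
# Read the project-level "namespace" variable instead of hard-coding it,
# then call the function provided by the included script module.
NAMESPACE="$(get_octopusvariable "namespace")"
create_namespace "$NAMESPACE"
```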
Now, just like we can turn scripts into
reusable script modules, we can also
take a step that we built and turn that
into a gold standard. This is where step
templates come in. So, you may want to
create a step template where you can use
a pre-built step to perform a common
task like deploying a config map to
Kubernetes. In the body of the step
template, we can point it to a Git
repository or an inline script or a
package. I set this over to bash and I
paste my inline script. Here I am
taking a namespace, a config map name, and
a JSON body as inputs, as I'm requiring
my teams to use JSON for configuration.
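A sketch of that template body as a function. In the real step the three inputs come from the template parameters (via get_octopusvariable); here they're plain arguments so the shape is easy to see, and the config.json key name is my own choice:

```shell
# Config-map step template: validate inputs, then render the ConfigMap
# with a client-side dry run and pipe it into "kubectl apply" so the
# step can be run repeatedly without erroring on an existing object.
deploy_configmap() {
  local name="$1" namespace="$2" body="$3"
  if [ -z "$name" ] || [ -z "$namespace" ] || [ -z "$body" ]; then
    echo "deploy_configmap: name, namespace, and JSON body are required" >&2
    return 1
  fi
  kubectl create configmap "$name" \
    --namespace "$namespace" \
    --from-literal=config.json="$body" \
    --dry-run=client -o yaml | kubectl apply -f -
}
```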
I validate my inputs and then I run
kubectl create configmap. I pipe that
out to kubectl apply. With the script
body in place, you can go ahead and
define parameters. And here I'd like to
add a parameter called config name. I
can give it a label and some help text.
I can change the control type just to a
single line text box. You can change
this to a secret like a sensitive value
or a multi-line text box as well. And
you can provide a default value. Go
ahead and click add. And that allows
developers to pass in the name of their
config map. And I'm going to do the same
for namespace. Add a namespace as well
as a config body. But for the config
body, I'm going to make it a multi-line
text box and click on add. So now I've
got my generic inputs that developers
can pass to the step template. Once I'm
happy with my step template, I go ahead
and save it. Let's go ahead and add
another step template also to run a
script. And this time I'm going to
create one to deploy a secret to
Kubernetes, which is also a common task
for developers. I head over to
parameters. I click on add. And I put
the same kind of parameters here. That's
the name of the secret. Another one for
secret name space. And they're all
single lines of text. And then I'm going
to add the secret body. And then under
steps, I'm going to add inline script.
Switch it over to bash. And that's going
to be my bash script. So I'm going to
take in the inputs we defined at the
top. Validate the inputs. Convert the
body to Base64, because that's what
Kubernetes expects. Then do a kubectl
create secret, output that as YAML, and
pipe it to kubectl apply so the command
is idempotent. Then I go ahead and save
my step template. So now I've got two
gold standard step templates that my
platform team manages and can be reused
by my development teams to build their
pipelines. Now just as a side note, the
enterprise version of Octopus supports
an entire Platform Hub. This means
platform engineers can take these
modules to a whole new level. It focuses
more on things like policies and process
templates, which focus on controls around
versioning, compliance, and governance for
enterprises. So now that we have the
building blocks, let's go ahead and use
these step templates in our process. So
under service A under process, we can go
ahead and add a step. Now, under the
list of featured steps, you'll find the
library step template section which
contains the custom templates we've just
set up. So, I can go ahead and add a
config map and this will be similar to
the step we added before. We can
override the name. We can tell where the
execution location will be. We can set
up tags. So, we're going to remember to
select Kubernetes here. And here we can
provide inputs. So, I can pass in a name
of my config. I'm just going to call it
example config. I can refer to variables
we've created earlier by clicking the
insert a variable button and I can
select namespace. So this will
automatically bind to the project level
variable that I've defined. I can then
go ahead and set the config body and
then set this condition to run for any
environment. Then I go ahead and save
that. Now we have our second step in our
process. Let's go ahead and add our
secret step as well. Head down to the
library step templates. Select the
secret step. So we can again override
the name or keep it the same. Define
where the execution location will be.
This will be on the deployment target
and we'll target K8s, the Kubernetes
cluster tag. We'll call our secret example
secret. Map it to the namespace
variable. Now for the secret body, what
we can do is map this to a variable. So
I'm just going to go ahead and hit save
on that. Then head over to project
variables. And then I'm going to create
a new variable called secret. And I want
to show you how you can inject sensitive
values but also use the scoping feature.
So for this value I'm going to enter my
development secret. Notice that my API
key says dev secret 123. And then I'm
going to click on scope. And what you
can do is scope by tags, targets,
processes, and even environments. So here
I'm going to scope this to the
development environment. That means that
this secret value is set specifically
for when I deploy to the development
environment. I can add another value.
This time I can add the production
secret and I can change the scope of
that one to map to the production
environment. So you can see how useful
variable scoping is. You can set
multiple variables, scope them based on
different criteria. So it becomes really
dynamic. Then I go ahead and save this.
I can now go back to my process and go
back to my secret deployment step and I
can change the secret body and this time
I can map it to the secret variable and
go ahead and save that. So Octopus can
handle sensitive variables as well but
it's important to know that Octopus also
integrates with things like Vault. So
you could use something like Azure Key
Vault, and it's always best practice to
store sensitive data in something like a
vault outside of Octopus. So hopefully
now you get the idea of the flexibility
of this and how you can build reusable
deployment modules and quickly build out
a deployment process. Next up I can show
you how to deploy Kubernetes manifests
to the cluster. So what we're going to
do is add another step and this time
under featured we're going to pick
Kubernetes YAML. So go ahead and add
that step. Here we can also change the
step name. I'm going to keep it as is.
Target tags we're going to set it to
K8s. YAML source. We can set it to a git
repository. You can also deploy inline
YAML or a package. So here you provide a
repository URL. I'm going to go ahead
and paste my GitHub repo in there. And
Octopus provides all the supported
mechanisms of authentication. So we can
set up Git credentials here. We can say
add Git credentials. We can give our
credential a name. I'm just going to
call it GitHub. You can give it a
description. And then you can go ahead
and enter credentials such as your
GitHub username and things like a
personal access token. You can also set
up only allowed repositories. Then I'm
going to go back to the top. Go ahead
and save that. I'm going to point this
to a feature branch. So you can use
branching strategies here as well. And
then we specify file paths. Where does
Octopus need to look for Kubernetes
YAML? So here you could use variables, or
you could use wildcard pattern
syntax to apply all the YAML in a
specific folder.
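For example, the file paths field might contain entries like these (paths are illustrative; #{...} is Octopus's variable-binding syntax):

```
k8s/*.yaml
manifests/#{Octopus.Environment.Name}/ingress.yaml
```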
I've set up some paths where I have a
deployment, a service and an ingress.
There are two important features I
want to point out here. The first one is
this Kubernetes object status check.
I'll show you this once we do a
deployment, but basically Octopus can
look at Kubernetes objects to make sure
they're running successfully. A lot of
times with traditional deployment
pipelines, after a deployment is
triggered, you have to go to your
Kubernetes cluster to make sure the pods
are running and healthy and the
deployment tool generally doesn't know
whether it was successful or not. Like
what has happened to the pod after
deployment. Most tools like Jenkins or
GitHub Actions will report an okay
because the YAML's been applied, but
they don't check whether the pod is
actually running and healthy. So, we'll
take a look at this in a bit. So, we're
going to leave that enabled. And then
the next one is structured configuration
variables. I'm going to turn this on.
This is a very smart feature which
allows Octopus to define
variables and use them inside JSON, YAML,
XML, and properties configuration files. So
think of YAML files like an
ingress. And let's say the
ingress has a domain name, but the
domain name is different between dev and
prod. Or maybe your replica count is
different between dev and prod. You may
want to inject values from variables
into the YAML. And you can do this with
structured configuration variables.
Octopus allows us to perform variable
replacement. So basically inside JSON,
we can use this syntax: we create an
Octopus variable named something like
app:port. And if you have JSON or YAML that
looks like this, you can have the
variable value like this and it'll
inject it. So you can see the output on
the right hand side. This allows you to
override the port here with a custom
port in the output. The same thing
applies to YAML. So I'm going to go
ahead and leave that checked. Then I'm
going to go ahead and select the name
space. Just use my variable mapping. And
that's all good to go. I'm going to go
ahead and press save to show you the
structured configuration option. What
I'm going to do is head over to my
project variables again. And in here,
I'm going to create a special variable
whose name specifies the host I want to
override in my ingress. So if we take a
look at an ingress YAML, I have an
ingress named example-ingress, with
spec, rules under spec, and one host
under rules. That host is a DNS or
domain name for my local testing. I want
to override this host with a development
or production environment domain. So in
Octopus I create a variable in that
style, give it a value for my dev
environment, and scope it to
development. Then I can add another
value, a .com address, scope that one to
production, and go ahead and save that.
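As a hedged sketch of how that maps to the manifest (the names are hypothetical; structured configuration variable names use colon-separated paths, with zero-based indexes for list entries):

```yaml
# ingress.yaml with a local-testing host (hypothetical names).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local   # replaced per environment at deploy time
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```

The Octopus variable would be named spec:rules:0:host, with one value such as example.dev.internal scoped to the Development environment and another such as example.com scoped to Production.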
So you can see all the flexibility with
variables and scoping that allows us to
customize our deployment steps. So now
we're ready to deploy.
Now to perform an actual deployment
using Octopus, we have to create what's
called a release. A release is a
snapshot of a deployment process, the
steps, the variables, and the packages
we want to deploy. And we give a release
a version number. Now, because the
release is a snapshot, it means that we
can go ahead and make changes to our
deployment process without impacting a
current deployment. This is because all
of the variables, steps, process, and
everything has been snapshotted. So,
it's a self-contained isolated release.
So, to create a release, it's very
simple. On the project dashboard or
process, you'll see a create release
button. There's also a releases page
where you can see all the releases. So,
I'm going to go ahead and click create a
release. You can see a release gets a
version and I'm going to go ahead and
save it. Under here, you can see all the
different settings of the snapshot. Now
that we have a release, we can go ahead
and click deploy. Now, notice when I
click on deploy 2, I can only select
development. This is because Octopus has
another feature called lifecycles, which
control how deployments or releases flow
through our environments. In this case
the default lifecycle states that we
have to go to development before
production. We cannot skip the
lifecycle, and our project is tied to
it. You can also create custom
lifecycles for things like hotfixes, and
you can map a lifecycle to something
like a specific type of package or a git
branch, so that only the hotfix branch
can bypass the development environment
and go straight to production. So that's
an example of a lifecycle. So that being
said, we can go ahead and select the
development environment to deploy to and
I can click on deploy. Now the cool
thing about the deployment is that it'll
show each of the steps. Each of our
steps will be listed and we can click
into them to see the logs. So the UI of
Octopus gives us a great indicator of
what's happening behind the scenes
during a deployment. So as an example,
here we can see our namespace has been
created, and the target it was created
on. I can click into it and see the
logs, including the output of my script
module: the namespace was created
successfully. The same thing
for the config map, the secret and my
deployment to Kubernetes. And there we
go. Our deployment is complete and the
life cycle now allows us to go to
production. Now, generally, you would
then have to go to your Kubernetes
cluster to look at the pods and see if
everything is okay. With Octopus, I
don't have to do this. As someone who
might be non-technical, like a QA
person, I can come onto my project
dashboard and tick the live status view.
And you should see this
kick off. So, now Octopus will actually
go ahead and check our development and
production environments. We can go ahead
and click on this, and Octopus will
check all of the objects that have been
deployed. We have a deployment, an
ingress, and a service. Now, because I
don't have an ingress controller here
yet, the ingress will show a progressing
status. But
let's take a look at our deployment. We
can expand that. We can see the replica
sets as well as the pods. And we can see
that everything is healthy. So, this
gives you a good indication of what's
happening in your environment post
deployment. So, you don't have to run to
your monitoring or other tools to
check it. So now that we've checked our
deployment and it's all good to go, we
can now go ahead and promote this
release to production. Go ahead and
deploy that. So you can see how
convenient and easy it is to perform
deployments and promote releases to
different environments. Now, one other
thing I wanted to show you is that
Octopus can also perform operational
tasks. Because Octopus has agents in our
infrastructure, we can run tasks like
restarting a web server, recycling an
IIS process, clearing a CDN cache, any
kind of mundane chore. And we do this
using what's called runbooks. Runbooks
allow you to automate these
operational tasks. So think about the
scenario I mentioned earlier. After a
deployment, a QA person wants to check
whether the deployment was successful
and whether the actual
infrastructure is still okay. And they
can do that using that Kubernetes live
object status. But we can do a lot with
runbooks. We can do things like describe
a Kubernetes deployment, describe pods,
describe services, get pod logs, get
pods, scale pods up and down. You can
use your imagination here. This allows
you to make non-technical folks even
more productive without giving them
direct access to the production
environments, which is very useful for
platform engineering. So now you can see
why Octopus is so great for platform
engineering. Not only does it take care
of deployments but it makes your
platform engineering team as well as
other teams that rely on them way more
productive because you can provide
platform capabilities as a service. Now,
hopefully you enjoyed this video.
Remember that I've left a link in the
description with all the steps I
followed today so you can follow along.
And be sure to check out the link down
below to the fully functional Octopus
free tier, as all the features we've
taken a look at today are in the free
tier forever. So, if you like the video,
be sure to like, subscribe, hit the
bell, and if you want to support the
channel even further, check out the join
button down below to become a YouTube
member. And as always, thanks for
watching and until next time,
peace.
Check out the FREE TIER: https://oc.to/free-tier-devops-guy
My DevOps Roadmap: https://marceldempers.dev
Patreon: https://patreon.com/marceldempers
Become a member to support the channel further: https://marceldempers.dev/join
Check out "That DevOps Community" too: https://marceldempers.dev/community

Source Code (follow along):
https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/automation/cicd/octopus-deploy

Follow me on socials!
Instagram | https://www.instagram.com/marceldempers
X | https://x.com/marceldempers
GitHub | https://github.com/marcel-dempers
LinkedIn | https://www.linkedin.com/in/marceldempers

Music (all licensed under a Creative Commons Attribution licence, https://creativecommons.org/licenses/by/3.0/):
Fox Beat 2 - Jeff Kalee - Pillow Talk - Royalty Free Vlog Music | https://soundcloud.com/foxbeatmusic2/jeff-kalee-pillow-talk-royalty-free-vlog-music-buyfree
Reckoner - lofi hip hop chill beats for study~game~sleep | https://soundcloud.com/reckonero/reckoner-lofi-hip-hop-chill-beats-for-studygamesleep
souKo - Parallel | https://soundcloud.com/soukomusic/parallel

Timestamps:
00:00 Intro to CICD
02:06 What is Octopus
03:08 Getting started
04:38 Installation
07:42 The Dashboard
08:49 Infrastructure & Environments
09:24 Deployment Targets
10:39 Tags
12:35 Install to Kubernetes Agent
14:50 Machine Policies
15:07 Machine Proxies
16:02 Projects
17:19 Script Modules
21:12 Variables
23:16 Step Templates
25:41 Platform Hub
26:18 Build a Deployment Process
29:13 Kubernetes deployment step
33:33 Releases
35:22 Perform a deployment
36:03 Kubernetes Live Status
37:13 Automation Runbooks
38:20 Outro