Hello everyone, welcome back to my YouTube channel. In today's video we are going to discuss how to set up a real-world, production-grade DevSecOps CI/CD pipeline on Azure DevOps. You have probably seen this: your build is green, your deployment is fast, but when your application code reaches the production environment, it breaks. Why? Because you have not followed correct DevSecOps practices. DevSecOps doesn't just mean running security tools; ideally, your DevSecOps pipeline should break the build as soon as it detects any vulnerability or any bad code, before it reaches any environment. That's what we are going to discuss in today's video.
We are going to follow this particular architecture, and I'm going to explain each of the stages. The very first stage is static analysis. Then we have the dependency scan. Third, we have tests and coverage. Then container build, scan, and push to Azure Container Registry, and finally the deployment stages. We are going to cover all the stages one by one, with a live hands-on demonstration as well.
I have already set up a pipeline, so I'm going to explain what each stage is doing.
The very first stage is static analysis. Second is the dependency scan. Third is unit tests and coverage. Fourth, we build the container and scan the Docker image, and then we push the Docker image to Azure Container Registry. Before pushing, we first scan the Docker image for any kind of vulnerabilities; only after that do we push it to Azure Container Registry. Then we scan our infrastructure code, which in this case means the Kubernetes manifest files: your YAML files, your Helm charts, whatever you have. You can also scan Terraform code the same way, but I'm not using Terraform here. Finally we have a development environment, a QA environment, and a production environment. So let's go and understand each stage.
So let me first edit this pipeline.
One thing to note: I'm going to use a Python application for this demonstration. Your static analysis tool, your dependency scanning tool, and so on will all change based on the language; for Node.js they could be different, for Python they could be different. This video is specific to a Python project, so I'm using tools accordingly.
If I go to my very first stage, which is SAST, what does that mean? It stands for static application security testing. In this stage I'm going to scan my application source code, so if there are any vulnerabilities or any bad code in the source code, my pipeline fails right at the start.
Let me explain all the tasks we have in this particular stage. For static analysis we are going to use Bandit. It is open source, and it is good enough to use in a real environment as well. So Bandit is our static analysis tool. The very first task here sets up a Python environment with version 3.11.
Then we upgrade pip, because pip is what we use whenever we need to install packages. After that we create a reports directory, because whatever vulnerabilities or bad code we find in this stage, we want to publish as a result in our artifacts and also display in the pipeline summary.
Here you can see the command we are going to run: bandit -r on the app directory. In my Python repo, let me open it, I have only a single folder containing the application source code. You can point it at a different root directory if you need to, but for me it's a very simple application with only one file, main.py. That file is what we are going to scan, which is why I have specified the app directory here, and the output will be in JSON format.
And here I'm appending || true, because I want to publish this output to my pipeline summary and as an artifact even when Bandit finds issues. If I remove the || true, the step will stop right there and the report will never be published to the pipeline summary or the artifacts. With || true, the pipeline can still fail later at the quality gate, but it continues long enough to publish the results.
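For reference, here is a minimal sketch of what these Bandit steps could look like in Azure Pipelines YAML. The paths, display names, and the reports location are assumptions, not the exact pipeline from the video:

```yaml
# Hypothetical sketch of the SAST stage steps; paths and names are assumptions.
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
    displayName: 'Set up Python 3.11'

  - script: |
      python -m pip install --upgrade pip
      pip install bandit
      mkdir -p "$(Pipeline.Workspace)/reports"
      # "|| true" keeps this step green so the report still gets published;
      # the real pass/fail decision happens later at the quality gate.
      bandit -r app -f json -o "$(Pipeline.Workspace)/reports/bandit.json" || true
    displayName: 'Run Bandit SAST scan'
```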
Then we have this next step. What does it do? This is the step where we publish our report to the summary tab; in other words, we publish the result into the pipeline summary.
This is the shell script I'm going to use, along with a built-in Azure DevOps task, and here we enforce the build quality gate. If it finds any high-severity issues in my application source code, the pipeline is going to fail; if there are no high-severity issues, it passes, because as you can see I have defined .issue_severity == "HIGH" here. If there is no high-severity finding in the source code the pipeline continues; if it finds one, the pipeline fails.
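As a rough sketch, that gate can be a jq query over the Bandit JSON. The issue_severity field comes from Bandit's JSON output; the threshold logic and message are assumptions modeled on what the video describes:

```yaml
# Hypothetical quality-gate step; the jq path follows Bandit's JSON schema.
- script: |
    HIGH_COUNT=$(jq '[.results[] | select(.issue_severity == "HIGH")] | length' \
      "$(Pipeline.Workspace)/reports/bandit.json")
    echo "High severity issues found: $HIGH_COUNT"
    if [ "$HIGH_COUNT" -gt 0 ]; then
      echo "##vso[task.logissue type=error]Security gate failed: high severity issue found"
      exit 1
    fi
  displayName: 'Enforce SAST quality gate'
```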
And this is the artifact we are going to publish: the raw report is published as a pipeline artifact as well. So let's do one thing and run this first stage, so it will be clearer whether it fails or not.
Let me run the pipeline. Okay, sorry, let me cancel it, because I have not selected a particular stage and it is going to run all the stages. Okay, it is cancelled now. Let me go back, minimize this a little, run the pipeline again, and this time select only my first stage, static analysis, and run it.
This time my pipeline is going to succeed, because I don't have any high-severity issues in my application code. We do have one medium-severity issue in the source code; later we will modify the script so it also gates on medium severity, and we will see the pipeline fail. So let's wait for all these steps to complete.
Okay.
If I go to the Extensions tab, you can see the static application security testing results from Bandit: two issues, one low and one medium, because I use the pickle module in my application source code, and it has a medium-severity finding. But if I edit my pipeline again and go to the build gate, you can see I only specified high. If I also specify medium here, then my pipeline is definitely going to fail. So let me add it, validate, and save. Now let me run my pipeline again; this will make it clear whether the quality gates we have implemented are actually working or not.
And if I go to main.py, you can see I import the pickle module here, and down here I use it; that is the module with the medium-level vulnerability.
You can see it now: security gate failed, one high severity issue found. My application actually has one medium-severity issue, but we changed the script so it now looks for both medium and high severities, and since we already had a medium finding, it printed the "high severity issue found" message. We can change that message based on our requirements.
Also, if I go to the Extensions tab again, you can see this same message. We got it earlier too, but back then the pipeline didn't fail; this time it failed because we modified the script.
So that is the very first stage: before building your application, you have to check for any bad code or vulnerabilities in the application source code itself, because most of the time we use third-party code or third-party libraries, and those libraries or code blocks may carry vulnerabilities. We have to detect that first. Okay, now let me edit my pipeline again.
That was the first stage. One more thing I forgot to mention: publishing this kind of output is not something you get by default. For that we use a small Python script that converts Bandit's JSON output into SARIF, the Static Analysis Results Interchange Format. It's an awkward name and I always forget it, but it's the format you have to use: whether you are on GitHub or Azure DevOps, scan summaries are displayed in this format. If I go to the scripts folder, here is the Bandit-to-SARIF script. We convert to this format because we want to visualize the results in our pipeline summary, and for that we need to convert the Bandit JSON output into SARIF. Okay.
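In Azure DevOps, that scans view is typically populated by the SARIF SAST Scans Tab extension, which reads .sarif files from an artifact named CodeAnalysisLogs. A sketch of what the conversion and publish steps might look like (the script path is an assumption):

```yaml
# Hypothetical conversion-and-publish steps; the script path is an assumption.
- script: |
    python scripts/bandit_to_sarif.py \
      "$(Pipeline.Workspace)/reports/bandit.json" \
      "$(Pipeline.Workspace)/reports/bandit.sarif"
  displayName: 'Convert Bandit JSON to SARIF'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Pipeline.Workspace)/reports'
    # The SARIF scans-tab extension looks for an artifact with this exact name.
    ArtifactName: 'CodeAnalysisLogs'
  displayName: 'Publish SARIF report for the scans tab'
```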
Now let me go back to my pipeline and edit it again. That was the first stage, and you can see I repeat this publishing part for every stage. If you want, you can instead create multiple jobs in a single stage and publish all the summaries in a single pipeline; I already have that pipeline as well. Let me show you: if I go to the pipeline.yml file, you can see it can all be done in one stage. Keeping separate stages is still good practice; it increases the execution time, but you get much clearer visibility. If you want a faster deployment, you can definitely merge these three stages into a single stage and publish the summary like this. I will share this application source code on GitHub and mention the link in the description box as well. Okay, now let's move to the second stage.
Let's verify the published artifact as well. Okay, it's not published; that's fine.
The second stage is the dependency scan. What are we doing here? For a Python application we generally use third-party packages, which we list in the requirements.txt file. In this stage we are going to check all the packages mentioned in that dependencies file, and if any of those packages has vulnerabilities, the pipeline is going to detect those issues.
That's what we perform as part of the dependency scan stage, and for that we are going to use the pip-audit command. pip-audit is a security tool from the Python ecosystem that detects known vulnerabilities in your libraries and packages.
You can see we are going to scan our requirements.txt file here, and generate and publish the pipeline summary. One more thing: you see jq here. jq is a JSON processor, used to query, transform, and reformat JSON. And here is the dependency security policy: fail on high, critical, and unknown. If we find any high, critical, or unknown severity, the pipeline is going to fail.
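A minimal sketch of such a pip-audit step, simplified to fail on any finding (the video's gate distinguishes high, critical, and unknown severities); the jq query assumes pip-audit's JSON layout with a dependencies array:

```yaml
# Hypothetical dependency-scan step; gate simplified to "any finding fails".
- script: |
    pip install pip-audit
    pip-audit -r requirements.txt -f json \
      -o "$(Pipeline.Workspace)/reports/pip-audit.json" || true

    # Count reported vulnerabilities across all dependencies.
    VULN_COUNT=$(jq '[.dependencies[].vulns[]?] | length' \
      "$(Pipeline.Workspace)/reports/pip-audit.json")
    echo "Vulnerable findings: $VULN_COUNT"
    if [ "$VULN_COUNT" -gt 0 ]; then
      echo "##vso[task.logissue type=error]Dependency security gate failed"
      exit 1
    fi
  displayName: 'pip-audit dependency scan and gate'
```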
Now let's run this particular stage as well. Actually, before that, let me remove the medium severity from the first gate, otherwise my pipeline will fail there again.
Whether you're working on Azure DevOps, Jenkins, or any other CI tool, you should always understand how shell scripting works. You can use the built-in tasks, but a task by itself is often not sufficient; in our case we need to publish all the reports so we can visualize them in a dashboard on the UI. That's why we are using shell scripting everywhere here.
Okay, this is completed, and you can see it published the artifacts: we have the code analysis logs and the dependency scan results as well. So let me go to the Extensions tab.
Let me reload it. No vulnerable dependencies found. So what will we do to verify this stage? Let me go to my repo, open the requirements.txt file, and uncomment these two packages.
Now let me run the pipeline again. So in the first stage we analyzed our application source code; in the second stage we are analyzing all the third-party libraries and packages our application uses. That's how you implement DevSecOps practices from the very beginning.
Okay, you can see the pipeline failed because it found vulnerabilities. (The "input must be a string" message in the logs is fine.) If I go to the Extensions tab, you can see dependency vulnerabilities found, and these are the two packages where it found them; the severity type is unknown, which is why my pipeline failed.
So we have verified it: before uncommenting those two packages, our pipeline ran successfully, and as soon as I uncommented the packages with vulnerabilities, the pipeline failed. Now let me go to the repo again and comment them back out; I had added them only to verify that our pipeline actually works.
Okay, now let's move to our third stage. When we run the next stage, this one will pass. Let me collapse the SAST and dependency stages.
The third stage is test and coverage. For that we are using pytest, the unit testing tool, and pytest-cov, the code coverage tool. pytest validates the unit test cases for your application, that is, how your application behaves, and pytest-cov measures how much of your source code is covered by the test cases you have written.
So this is the stage, with the display name Test, and it depends on the second stage: only once that stage completes will this one be triggered. Again you can see we set up Python and pip, and we install all the required dependencies. We need two things here: first, everything in the requirements.txt file, and second, pytest-cov.
That's needed because this task runs on a Microsoft-hosted virtual machine on Azure, and as you know, each stage spins up a new virtual machine. If you have a self-hosted machine, you should definitely have all these requirements and dependencies pre-installed, but this is just a demonstration and I want to show you how each stage works; that's why we created multiple stages.
And again we are using a shell script here, and we are going to publish the code coverage under the Tests tab, which will appear in our pipeline.
Now let me run it. Okay, one important thing here: we have defined an 80% coverage threshold. If our coverage is below 80%, the pipeline is going to fail; if it's 80% or more, it will succeed. And we are going to publish our test results as well.
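A sketch of how that stage's core steps could look; the tests/ path, report locations, and the app module name are assumptions, while the 80% threshold follows the video:

```yaml
# Hypothetical test-and-coverage steps; paths and module names are assumptions.
- script: |
    pip install -r requirements.txt pytest pytest-cov
    # --cov-fail-under=80 makes pytest exit non-zero below 80% coverage.
    pytest tests/ \
      --junitxml="$(Pipeline.Workspace)/reports/junit.xml" \
      --cov=app \
      --cov-report=xml:"$(Pipeline.Workspace)/reports/coverage.xml" \
      --cov-fail-under=80
  displayName: 'Run pytest with coverage gate'

- task: PublishTestResults@2
  inputs:
    testResultsFiles: '$(Pipeline.Workspace)/reports/junit.xml'
  displayName: 'Publish test results'

- task: PublishCodeCoverageResults@2
  inputs:
    summaryFileLocation: '$(Pipeline.Workspace)/reports/coverage.xml'
  displayName: 'Publish code coverage'
```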
Let's execute this particular stage as well. This is the simple command we are going to use here. You can see we now have multiple stages to choose from, so let me select stages 1, 2, and 3 and run.
If you need more clarity on how each stage works, I have created a docs folder with a document for each stage. If you want to look at the test and coverage stage and all the tasks it runs, you can follow those documents as well; they will definitely help you understand the scripts and the commands we use in each stage. For the SAST stage we have this documentation, and it explains line by line what each command is doing.
Let me go back to my pipeline. This is completed; earlier it failed, but now it should succeed because we have commented out those two vulnerable packages. You can see it is completed. Now let's verify the third stage, unit test and coverage.
You can see that in every stage we are installing dependencies. One thing you can do is merge these three stages into a single stage and run all the tasks there, so you don't install those dependencies every time. And you can see this stage has also completed successfully. Now let's verify.
The results are published under the Tests tab. In total we have three tests and the pass rate is 100%: three passed, because we have only three test cases. If you want to look at those test cases, go to the repo, under tests, where we have a file called test_main.py. Those are our three test cases.
Okay, now let's move to our fourth stage. Let me collapse this one.
The fourth stage is container build and local scan. Why are we doing this? Before pushing our Docker image to Azure Container Registry, we want to build it locally and scan it locally, because the Azure Container Registry is a shared, global registry. If our vulnerable image reaches it, other projects using that registry might pull that image. So it is always good practice to scan your image before you push it to a container registry. If vulnerabilities are found, you correct them, and only once you are sure the image is clean and has no vulnerabilities do you push the Docker image to the registry. That's what we perform as part of this particular stage.
Here we are using three things: Docker to build our container image, Trivy for the vulnerability check, and jq, the JSON processor, to pass the vulnerability results into our pipeline summary.
The very first thing, we build our Docker image, and one important detail here is addPipelineData: false. Why? Because we don't want to include any CI-related metadata in our Docker image; we want to keep it clean. That's why we have set addPipelineData to false.
Next we run Trivy as a container to scan our image, and we want a JSON report as output, which we feed into a summary step like in the previous stages. For that we created this reports directory under $(Pipeline.Workspace).
And this is the command we are going to use, with the trivy:latest image. Those backslashes are just line continuations: if I put this entire command on a single line it would be very long and hard to read, so we use the slashes to break it up. This part runs the Trivy container and scans our image locally.
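A minimal sketch of that Trivy-in-Docker invocation; the image name, tag variable, and mount paths are assumptions:

```yaml
# Hypothetical Trivy scan step; image name and tag variable are assumptions.
- script: |
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$(Pipeline.Workspace)/reports:/reports" \
      aquasec/trivy:latest image \
      --format json --output /reports/trivy.json \
      myapp:$(Build.BuildId)
  displayName: 'Scan Docker image locally with Trivy'
```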
Next, we check how many vulnerabilities it found. The result lands in this reports directory; we count the vulnerabilities there and publish the top 10 in our pipeline summary. If the vulnerability count is zero, you get the message "no vulnerabilities found in your Docker image"; if it finds any, it publishes them.
So basically we publish the top 10 vulnerabilities in a summary table, and this is the jq query we use to format the report data and publish it in the Azure DevOps summary, under the Extensions tab.
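A rough sketch of that summarization and the final gate, assuming Trivy's JSON schema (Results[].Vulnerabilities[]) and a markdown-style table for the summary:

```yaml
# Hypothetical summary-and-gate step; jq paths follow Trivy's report schema.
- script: |
    REPORT="$(Pipeline.Workspace)/reports/trivy.json"
    COUNT=$(jq '[.Results[]?.Vulnerabilities[]?] | length' "$REPORT")
    echo "Total vulnerabilities: $COUNT"
    # Emit the top 10 findings as markdown table rows for the summary.
    jq -r '[.Results[]?.Vulnerabilities[]?] | .[:10][]
           | "| \(.VulnerabilityID) | \(.PkgName) | \(.Severity) |"' "$REPORT"
    # Final gate: fail if any HIGH or CRITICAL finding exists.
    HIGH=$(jq '[.Results[]?.Vulnerabilities[]?
                | select(.Severity == "HIGH" or .Severity == "CRITICAL")] | length' \
      "$REPORT")
    [ "$HIGH" -eq 0 ] || exit 1
  displayName: 'Summarize Trivy results and enforce gate'
```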
Let me run this stage as well; this Docker image definitely has vulnerabilities. It is going to take at least two minutes, so I will pause the video and resume once we reach this fourth stage.
Okay, the fourth stage has started. You can see we build our image locally, then run the Trivy scan to detect the vulnerabilities in our Docker image.
Okay, it completed successfully, but we do have vulnerabilities in our Docker image. If I go to the Extensions tab, you can see the container security scan found 62 vulnerabilities in total, but it lists only the top 10, because that's what we configured: 1, 2, 3... 10, exactly the count we set in the pipeline. Let me edit the pipeline.
Here you can see we have defined a count of 10. If you want to display 20, 30, or however many vulnerabilities, you can change that number here. And if it finds any high or critical severity, the pipeline is going to fail: the final gate fails if any high or critical issue exists, otherwise it succeeds. The warnings we got are the low-level vulnerabilities in our Docker image, so that's fine. The next stage is very simple: we just push our image to the container registry.
There is nothing much to explain here; it's a simple built-in task that we are going to use.
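The built-in task here is typically Docker@2 pointed at an ACR service connection; a sketch with assumed connection and repository names:

```yaml
# Hypothetical push step; service connection and repository are assumptions.
- task: Docker@2
  inputs:
    command: 'push'
    containerRegistry: 'my-acr-service-connection'
    repository: 'myapp'
    tags: '$(Build.BuildId)'
  displayName: 'Push image to Azure Container Registry'
```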
I'll explain the next stage as well, and then we will execute stages five and six together.
The next stage is the IaC security scan. Here we are going to use Checkov, an infrastructure-as-code security scanning tool, to scan our Kubernetes manifest files. Checkov can analyze your Terraform code, your YAML files, your Helm charts; all your infrastructure-related code can be scanned with Checkov.
The very first thing, we install Checkov on the build agent, and this is the script we are going to use. In case it finds any issues, we still want the pipeline to keep running, which is why we again append || true: we want to publish those issues in the pipeline summary, so we don't want to stop the pipeline the moment the infrastructure code scan finds something.
Again we convert to the report format used before, because we want to see the summary, and you can see these check IDs. This is the important part: the gates. We are using two Checkov checks here, CKV_K8S_1 and CKV_K8S_2, as configured in the script. The first one checks whether your container is running as root. If your container runs as root, it is a high security risk, because if an unauthorized user or a hacker gets access to your application, they can get inside your container as well. We don't want to run our application as the root user, so that's what we check. The second one tests whether we are missing any security context settings or have any privilege issues. There are many more checks you can use as part of this scan, but I have included only these two. And this is the command we are going to use: we run these two checks against our k8s folder, because that folder holds our manifest files. Let me go to my repository.
We have only two files here, deployment.yaml and service.yaml. Those two manifest files are what we scan in this particular stage, and we will get a result. We also publish the manifest files as a pipeline artifact, so we can use them in our deployment stages.
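A sketch of what those Checkov steps might look like; the check IDs mirror the two gates mentioned above, and the artifact name is an assumption:

```yaml
# Hypothetical Checkov scan; check IDs and artifact name are assumptions.
- script: |
    pip install checkov
    # "|| true" lets the report publish; the summary step surfaces the issues.
    checkov --directory k8s \
      --check CKV_K8S_1,CKV_K8S_2 \
      --output sarif \
      --output-file-path "$(Pipeline.Workspace)/reports" || true
  displayName: 'Checkov scan of Kubernetes manifests'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'k8s'
    ArtifactName: 'manifests'
  displayName: 'Publish manifests for the deployment stages'
```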
Now let's do one thing: let's run the whole CI part first, and then we'll move to the deployment part. So we are now running all six stages. I'm going to pause the video and resume once we reach the manifest scan stage.
Okay, our manifest scan stage has started. First it installs Checkov on the build agent, then it scans our manifest files, then it publishes the summary in the pipeline summary under the Extensions tab. Okay, this is completed.
Now if I go to the Extensions tab... actually, not there; this one we see under the Tests tab. Here you can see 84 passed for the Checkov scan stage, and you can review all the issues in your manifest files, like "image tag not latest or blank". All of these you can fix in your manifest files.
"The default namespace should not be used": these are the basic checks we have to satisfy in our manifest files to make our deployment manifests production ready. Generally you would use Helm charts and so on.
Okay, now let's do one thing. Let me go to my repo and verify the deployment.yaml file against the reported issues. First: "image tag should be fixed, not latest or blank". If I go here, you can see I don't have any image tag at all, and that's why it is complaining; it should be neither blank nor latest, it should be a build ID. But we are going to set that as part of our deployment stage, so it's fine.
Next issue: "the default namespace should not be used". Let's check the namespace. We are actually not setting any namespace, so by default everything lands in the default namespace, and that's why it complains. Third: "readiness probe should be configured". If you look at the manifest file, we only have a liveness probe, no readiness probe, so ideally we should update this deployment.yaml to add a readiness probe as well; that's what it is complaining about. So with the help of Checkov you can scan your manifest files and fix these issues.
This is the deployment.yaml file we are going to use. In terms of security, you can see runAsNonRoot is true, because we are not running our application pod as the root user, which is not recommended; ideally you should run your application pod as a non-root user. Another important thing is the privileges: we give only read-only permissions, so even if somebody hacks our application, they can't do much, because it has read-only access. All these things you have to consider in your manifest files, and also the resource settings: you should not allocate all the resources; you have to specify them based on your user traffic and how much load you expect for however many pods you run in your Kubernetes cluster. And in the service.yaml file we simply expose the application, because we want to access it via a public IP address. That's why we use type LoadBalancer: we will browse the application on port 80, and this other one is the application port on which it runs internally.
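A condensed sketch of those two manifests with the security settings just described; the app name, image, replica count, and ports are assumptions:

```yaml
# Hypothetical condensed manifests; names, image, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      securityContext:
        runAsNonRoot: true                    # never run the pod as root
      containers:
        - name: myapp
          image: myacr.azurecr.io/myapp:TAG   # tag is patched in at deploy time
          ports:
            - containerPort: 8000
          securityContext:
            readOnlyRootFilesystem: true      # read-only even if compromised
            allowPrivilegeEscalation: false
          livenessProbe:
            httpGet: { path: /, port: 8000 }
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 250m, memory: 256Mi }
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer          # exposes a public IP
  selector:
    app: myapp
  ports:
    - port: 80                # browse the app on port 80
      targetPort: 8000        # internal application port
```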
Let's look at the Dockerfile as well. We have a multi-stage Dockerfile here, using a lightweight Python base image. In the first stage we install all the dependencies; in the second stage, the runtime stage, we keep only the application code actually required to run the application, and we don't copy any unnecessary files, so the image stays lightweight. You can also see that we create a new user called appuser and switch to it. Even if somebody compromises the application, they won't get root, because we are not running the application as the root user; we run it as appuser. We also give that user the necessary permissions on the application directory where our application code lives. This is a best practice you should always follow: run your application as a non-root user.
Now let's do one thing. Instead of running the entire pipeline, I will simply run the deploy-dev stage, because we have already executed the earlier stages and already pushed our Docker image. Let's see if it works; otherwise, we will run the entire pipeline again.
Okay, the Kubernetes manifest step failed. Why? Because when we ran only this deployment job, it started on a new virtual machine and couldn't find the manifest artifact. So we have to run a new pipeline with all the stages. But before that, let me show you the deployment stages as well.
We have three environments, development, QA, and prod, but a single AKS cluster. So what I did is create three namespaces: a dev namespace for the dev environment, a qa namespace for QA, and a prod namespace for prod. We treat each namespace as a separate environment. In the real world too, organizations that are not very large generally use namespaces for logical segregation, while for production they definitely use a separate AKS cluster. But this is just a demonstration, so we are assuming we have three AKS clusters, segregated by namespaces.
For this I have created an AKS service connection, which you can create from the project settings. Let me show you... not here, sorry. Okay, I have already created the AKS service connection. If you want, you can create a new service connection from here: just go to Kubernetes, click Next, and select your subscription. We already have one, so I'm not going to create a new one.
Another thing: we are using runOnce as the deployment strategy, but you can always use blue-green, canary, or rolling instead.
This is the service connection we are going to use, these are the namespaces, and this is the location of the manifest files we publish: you can see the artifact name for the manifest files, and the target path is k8s. That's where we publish the manifest files so we can use them in these stages. All these variables I have created under Library: if I go to Library, I have a variable group called devsecops-nonprod, and all the variables are declared there.
Another thing: under Environments you can see development, production, and testing. Development will not ask for any approvals; for production and testing I have set up approvals, so only once I approve will the deployment progress for those environments. (Approvals on a test environment are not typical, but for UAT and prod you should definitely always have approvals.) This is the proper Azure DevOps way; most companies now use Argo CD for this part, and I have already created videos on Argo, which you can definitely find on my YouTube channel.
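A sketch of what one of those deployment stages could look like, assuming the KubernetesManifest task, a runOnce strategy, the environment names above, and the manifests artifact:

```yaml
# Hypothetical dev deployment stage; connection, paths, and names are assumptions.
- stage: DeployDev
  jobs:
    - deployment: deploy_dev
      environment: 'development'      # approvals attach to the environment
      strategy:
        runOnce:
          deploy:
            steps:
              - task: KubernetesManifest@1
                inputs:
                  action: 'deploy'
                  kubernetesServiceConnection: 'aks-service-connection'
                  namespace: 'dev'
                  manifests: '$(Pipeline.Workspace)/manifests/*.yaml'
                  # Patches in the image tag that Checkov flagged as missing.
                  containers: 'myacr.azurecr.io/myapp:$(Build.BuildId)'
```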
Okay, now let's run this pipeline and see what happens. Let me pause the video, because it is going to take some time to reach the dev deployment stage.
Okay, now our deployment stage has started for the dev environment. Here we also print the public IP address used to access our application. Let me go back to this step, and if I scroll down, you can see the public IP address. Let me browse it.
Okay, it's working fine. You can see this is the simple application I have used. Now if I go back to my QA stage, you can see it is asking for approval. As soon as I review the changes and approve, it will start the deployment on the QA stage as well.
Generally, for the dev and QA stages we don't use approvals, but for UAT and production it is always recommended to have at least one or two approvers: only once they verify the changes will they approve your deployment to the production environment. Also, in large organizations or large projects we generally use a separate repository for the manifest files. If you are following GitOps, you definitely need a separate repository for your deployment manifests, because if you are using Argo CD or Flux CD or any such tool, you connect that GitOps repository to the tool. So you definitely need a separate repository for your manifest files.
So it has created the deployment and service in the qa namespace. Let's wait until we get a public IP address, and then we'll try to browse the application; we should see the same result we saw in the dev environment. It is taking quite a lot of time, so let's do one thing: let's connect to our AKS cluster and see what is happening.
Let me connect using the Azure CLI and download the cluster credentials. Okay, here we can see the pod status is Running. Okay, it's completed now. And this is the IP address for our QA environment.
Here we have .124 and here we have .122, which means the application is working fine in both environments, dev and QA. Similarly for prod: once we approve that stage, it is going to deploy the same application, so I'm not going to do that. And that's how we set up a real-world DevSecOps CI/CD pipeline on Azure DevOps.
If you want to merge stages one, two, and three into a single stage, you can do that. I have already created a pipeline.yml file for it, so you can create your pipeline from that file; it has a single stage for all your security gates.
I hope it's clear. In case you have any doubts, you can definitely ping me in the comment box, and if any part of this video is not clear, please reach out in the comments; I will try to reply as soon as possible.
And one more thing I forgot to show you: the code coverage part. Let's verify that as well.
If I go to Code Coverage, here you can see, not the test cases, but the code coverage for these two files: what is covered and what is not. You can see 88% coverage; for this file, 24 lines covered and 3 uncovered, and overall 87 total lines with 88% line coverage. This is the code coverage we collected along with the unit tests.
The tests we already verified. Why is it showing 93 here? Because we published our infrastructure code scan results to this Tests tab as well. If you want, you can publish them under the Extensions tab instead; right now it only shows a message there about reviewing the Kubernetes manifest files. Similarly, we have published the container security results there. So I hope everything is fine. I will publish this application source code along with the manifest files in a GitHub repo and mention the repo link in the description box.
That's all for today. See you in the next video. Thank you so much.
In this video, we build a REAL, production-ready DevSecOps pipeline exactly the way it's done in enterprise environments — with security, scalability, performance, and governance in mind. This is NOT a tutorial for beginners. This is for engineers who want to understand how real companies do DevSecOps in production.

🚀 What You'll Learn in This Video
✅ How a real CI pipeline differs from demo pipelines
✅ How to integrate security tools the right way (not just add them)
✅ Production-grade SAST, SCA, Container & IaC scanning
✅ How to fail pipelines based on severity thresholds
✅ Artifact promotion strategy (CI vs CD separation)
✅ Secure secrets handling (no hardcoding 🚫)
✅ Performance, parallelism & cost optimization
✅ How to design pipelines for scale, audit & compliance

GitHub Link - https://github.com/shubhamagrawal17/Tutorial/tree/main/devsecops-azure-project

🧠 Tools & Technologies Used
⚡ Azure DevOps (YAML Pipelines)
⚡ Docker
⚡ Kubernetes
⚡ Trivy (Container Security)
⚡ Checkov (IaC Security)
⚡ Bandit (Python SAST)
⚡ Dependency & License Scanning
⚡ Artifact Repositories
⚡ Secure Variable Groups & Secrets

❌ What This Video Is NOT
🚫 Not a hello-world pipeline
🚫 Not a fake "DevSecOps" demo
🚫 Not theory-only
🚫 Not skipping hard parts
This is how pipelines actually look in production.

🎯 Who Should Watch This?
👨‍💻 DevOps Engineers
🔐 DevSecOps Engineers
☁️ Cloud Engineers
🏢 Enterprise Platform Teams
🎓 Anyone preparing for real interviews or real projects

👍 If You Find Value
✔️ Like the video
✔️ Subscribe for real-world DevOps & DevSecOps content
✔️ Comment if you want CD pipeline, GitOps, or Kubernetes security next

#devops #azure #devsecops #kubernetes #technology #learning #productionready #production #aks #cicd #cicdpipeline #security