Hi everyone. In this video, we are going to be doing a complete observability project on Kubernetes using AI, OpenTelemetry, and Honeycomb. Observability is a hot topic lately. Every other week there's a major app outage caused by slow requests, API failures, or random 500s, and that's when the entire internet freaks out on Twitter. Why? Because modern systems are too complex to debug manually. If you have ever debugged a distributed system, you know the pain: there are too many dashboards, too many metrics, too many logs, and too little time. And that's exactly why I am making this video. In this project, we will be building a full observability environment using Kubernetes, OpenTelemetry, Honeycomb, and the Honeycomb MCP server. Then I will show you how you can literally ask your AI assistant "why is my service slow?" and it will pull telemetry data from Honeycomb and give it to you. This is the future of debugging. Let's get into it.
All right. So I'm here on my computer screen, and before I type any command, let me tell you everything that we are going to do in this observability project. We start by creating a local Kubernetes cluster using Minikube. Then, on this Minikube cluster, we are going to deploy the OpenTelemetry demo application. Next, we also need a Honeycomb account, so if you don't have an account, go create one; the link is in the description. Once we have our Honeycomb account created, we will route all the telemetry data (all the metrics, logs, and traces from this OpenTelemetry application) to the Honeycomb account. After that, we are going to integrate the Honeycomb MCP server in VS Code, so instead of going through the logs and dashboards inside the Honeycomb account, we can simply ask the MCP server. For that, we need to first authenticate using OAuth, so you need to confirm the username and password of your Honeycomb account. And finally, we will debug our system using natural language. So let's get started.
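As a quick reference, the first step (the cluster) can be sketched like this. The memory value of 12288 MB is my reading of the resources mentioned in this video, so treat it as an assumption and adjust to what your machine can spare:

```shell
# Start a local Kubernetes cluster with enough resources for the
# OpenTelemetry demo. 12288 MB (~12 GB) memory is an assumption;
# adjust --cpus/--memory/--disk-size for your machine.
minikube start --cpus=6 --memory=12288 --disk-size=40g

# Verify the cluster: control plane and host should show "Running".
minikube status
kubectl get pods   # "No resources found" on a fresh cluster
```

These commands need a local hypervisor or container runtime for Minikube, so run them on your own machine rather than copying blindly.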
All right. So the first step is to create a Kubernetes cluster on my local machine using Minikube. Since this OpenTelemetry demo application has a lot of microservices in it (for example the accounting service, ad service, cart service, checkout, email, frontend, and many more), we need to make sure that our Kubernetes cluster has enough resources. So I'm going to run minikube start and also provide CPUs equal to 6, memory equal to 12288, and disk size 40g. Now when I run this command, it will start creating a Kubernetes cluster. While this is being created, let's go and create our Honeycomb account. So go to honeycomb.io. Honeycomb is a very popular modern observability platform used to debug distributed applications, and it is used by companies like Duolingo, Intercom, Slack, Vanguard, and a lot more. I already have my account. If you want to create an account, click on the Start for Free button, or check out the link in the video description.
When you click on this, enter your details, and this is the page you will see after your account is created. Now on this side you can see environments. Right now I have a single environment named "new". I'm going to create a new environment for this particular demo, so click on the Create Environment button. Let's name this otel-demo. If you want, you can also describe your environment; I'm going to leave this empty. And let's click on Create Environment. This is the page you will see when you have an environment and your account ready. We are going to be using the API key from here, so let's keep this tab open and come back to it in some time. Now here we have
a Minikube cluster created. You can also confirm by running the minikube status command, and you should see the same output as me: control plane running, host running, everything configured. To run this application, we also need kubectl installed. If you don't have kubectl installed, go and get it. If I do kubectl get pods, it will show "No resources found", since this is a new cluster. Before we go ahead, to make things easier for you and to make sure that you complete this project, I have a Notion document created with all the commands and every step to follow to complete this project with me and showcase it on LinkedIn. So if you want me to share that document with you, comment "AI observability" in the comment section. Now that we have our cluster created, we are ready to deploy the OpenTelemetry demo application, and we are deploying it on Kubernetes, so I'm going to select that setting here. We are deploying this application using Helm, so I'm going to run helm repo add open-telemetry with the chart repository URL to add this chart. Since we want all the telemetry data (logs, metrics, traces) in Honeycomb, we are not going to use the default install command directly; we will instead use a Honeycomb configuration. If you go to the OpenTelemetry demo application on GitHub, you can see this demo application can be used with different observability platforms (you can see Grafana, etc.). Here we are going to use Honeycomb. So this is the repository. If you want to use Honeycomb and send all the telemetry data to Honeycomb, you need to provide your own configuration, and for that I have a file already created, which is values.yaml. You can see this is the configuration data, and here I also need to pass my API key. I'm using my own API key, and I will show you how things work here. But if you want this configuration, it's already present in the Notion document, and if you want me to share that Notion document as a PDF, comment "AI observability" below.
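For reference, the relevant part of that values file looks roughly like this. This is a sketch, not the full file: it assumes the demo's collector is given an OTLP exporter pointed at Honeycomb's documented endpoint, with your API key in the x-honeycomb-team header. YOUR_API_KEY is a placeholder:

```yaml
# Sketch of a Honeycomb-oriented values.yaml for the OpenTelemetry demo.
# Paste the API key from your Honeycomb environment's settings page.
opentelemetry-collector:
  config:
    exporters:
      otlp:
        endpoint: "api.honeycomb.io:443"
        headers:
          "x-honeycomb-team": "YOUR_API_KEY"
    service:
      pipelines:
        traces:
          exporters: [otlp]
        metrics:
          exporters: [otlp]
        logs:
          exporters: [otlp]
```

The exact keys can differ between chart versions, so cross-check against the demo's documented Honeycomb configuration before using it.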
I'm going to open this project in my VS Code. So this is my VS Code, and this is the configuration here. You need to change the API key to your own API key from the Honeycomb account. So I'm going to go to the Honeycomb account, and in the Honeycomb account you can see the API key here. Let's copy this API key. So now I have my own API key. After you have this configuration, we are going to run the helm install command to install the OpenTelemetry application using this configuration. So the command is helm upgrade --install, then the name of the application, this is the chart path, and I'm saying use my file, which is values.yaml. Let's press enter. All right, we don't have a namespace; obviously we need to first create the namespace in which we are going to deploy, so I'll say kubectl create ns otel-demo. Now the namespace is created. Let's rerun the command, and this time it will go ahead and install the OpenTelemetry application. You should get the same output as me, where you can see the otel-demo release, which means it's running, and this application is available on localhost:8080. It also has a Jaeger endpoint at this particular path.
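Put together, the deploy steps just described look roughly like this. The release name my-otel-demo is my own placeholder; the repo URL and chart name are the standard OpenTelemetry Helm chart locations:

```shell
# Add the OpenTelemetry Helm chart repository.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Create the namespace the demo will run in.
kubectl create ns otel-demo

# Install (or upgrade) the demo using the Honeycomb values file.
helm upgrade --install my-otel-demo open-telemetry/opentelemetry-demo \
  --namespace otel-demo \
  --values values.yaml
```

Using helm upgrade --install (instead of plain helm install) makes the command safe to rerun, which is exactly what we do here after creating the missing namespace.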
Jaeger is a very popular tool for tracing. Then you have Grafana dashboards at localhost:8080/grafana. Then there's the load generator, feature flags, etc. Right now, as you can see, the application is still in ContainerCreating state; the pods are still being created. If you want to access this application, you need to run this command: kubectl --namespace otel-demo port-forward, and then the application will be available on localhost:8080. So if I run this command right now, I will get an error saying the pod is still in Pending state, so we need to wait 3 to 4 minutes before the pods come up. And if you go to your Honeycomb account, right now there's no data here, because the application is not running. Honeycomb can be used with Node.js applications, Python applications, Java, .NET, Ruby, and so on. We are running this on Kubernetes, and that's why we are using a values.yaml file; you can also use environment variables if you want the telemetry data transported to Honeycomb. For now, let's wait till all the pods come up. Okay, finally all the pods are now running, and you can also see the application is up on localhost:8080. This is the OpenTelemetry demo application, which is an astronomy shop.
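The port-forward used to expose the app can be sketched like this. The service name frontend-proxy and port 8080 are taken from the OpenTelemetry demo's usual setup and may differ by chart version, so check kubectl get svc first:

```shell
# List the demo's services to find the frontend entry point.
kubectl -n otel-demo get svc

# Forward local port 8080 to the demo's frontend proxy, then browse
# http://localhost:8080 (shop), /jaeger/ui (traces), /grafana (dashboards).
kubectl -n otel-demo port-forward svc/frontend-proxy 8080:8080
```

The port-forward only works once the pod backing the service is Running, which is why it fails while the pods are still Pending.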
You can go ahead and play with it. Along with this, you can also access Jaeger, the tracing tool, at /jaeger/ui. Since we are routing all the telemetry data from here to Honeycomb, you can see Honeycomb now also has some data: we have traces coming in, incoming spans, and a lot more. Spend some time in this Honeycomb dashboard and go through all the traces. We have different services here; this one is for load generation. You can see traces here, and you can also see logs coming in. And if you want to query, you can go to the Query tab and check out the different slow requests, what the errors are, and a lot more. Now, for someone who does not understand how this actually works, or how to check traces, logs, and so on, we can use Honeycomb MCP. Using Honeycomb MCP, you can simply ask questions like "show me all the slow requests" or "tell me where the latency is", rather than wasting your time in this dashboard. Think about it: you normally debug by opening up Honeycomb, then clicking on Queries, selecting a dataset, filtering by service, filtering by error type, and so on. Rather than this, we can simply use Honeycomb MCP. To use it, click on the account section here, go to My Account, and here you can see MCP, or Model Context Protocol. I already have it set up; I'm going to redo it so I can show you from scratch. So copy this server URL and go to your VS Code. You can do this either in VS Code or in Claude, Cursor, anything. I'm going to click on this, run the command to add a server, choose HTTP, and paste the URL that you just copied. Click enter. You can give this a name, so I'm going to name it honeycomb and press enter. Let's make it global. And this is my MCP server configuration.
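For reference, the entry VS Code writes into its MCP configuration file looks roughly like this. This is a sketch from memory: the schema can vary across VS Code versions, and PASTE_SERVER_URL_HERE is a placeholder for the server URL copied from your Honeycomb account:

```json
{
  "servers": {
    "honeycomb": {
      "type": "http",
      "url": "PASTE_SERVER_URL_HERE"
    }
  }
}
```

Making the server "global" stores this entry in your user-level configuration instead of the workspace, so it is available in every project.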
Once you add your MCP server, it will ask you to authenticate. So click on Allow, click on the Open button here, and this will open up the browser. Here I'm going to select my team and then select permissions: click on read and write, and then authorize it. This is how you add your Honeycomb MCP server. Click on Open Visual Studio Code again. Now I can go ahead and ask questions. I'm going to choose the Agent option, and you can see Tools. In the tools section, if I scroll down, I can find Honeycomb here. These are all the different tools that the Honeycomb MCP server provides. It can create a new flexible board with a fixed grid, it can submit feedback about Honeycomb's MCP server, it can find relevant columns across datasets, find relevant queries, get datasets, and a lot more. Let me show you an example. In this Honeycomb account, you can see we have different services like load-generator, demo, payment, quote, etc. We also get the logs and traces for every service here. If you are an expert, you will obviously understand what's going on here by going through the graphs. But instead of going through the graphs and wasting our time, we could use the query here: click on Query and you can also see slow requests, what your errors are, latency distribution, and a lot more. But here the output is again in the form of graphs. To make things easier, we can use the Honeycomb MCP server. Let me show you an example. I'm going to ask it to list all the datasets in my Honeycomb project. And now the Honeycomb MCP server is working; it will give us the list of all the datasets present inside the Honeycomb server.
So you can see the output here. It gave us all the datasets present inside the Honeycomb project, and these are all the different environments. That's amazing. Let's try to run a query now. I'm going to ask it to run a query to find the 10 slowest endpoints in the last 30 minutes. If you were to do this in the Honeycomb dashboard, you would have to go through the query and then find the 10 slowest ones. But here you can simply ask your LLM a question, and it will give you the output right here. This makes observability so easy. You can see the 10 slowest endpoints are these 10 here; we have /api/checkout, which takes around 83.77 milliseconds, and so on. Can you tell me which service has handled the most requests in the last hour? It says the service that handled the most requests is the frontend service, with about 12,161 requests. That's crazy. Let's try some more examples. Let's ask which is the slowest endpoint in the last 15 minutes. The slowest endpoint in the last 15 minutes is /api/checkout again. You can also go deeper: for example, let's retrieve the trace for the single slowest request in the last 30 minutes and show the full trace details, like trace ID, spans, timestamps, etc. This is amazing. It gave me the complete information about the trace ID of the single slowest request, and it also gave me all this information like span IDs and timestamps. If I were to do this in the console, it would take me so much time. But along with this information, it also gave me a link where I can go and check the trace details in Honeycomb. Honeycomb MCP gives you the result as a natural language answer along with a link to the dashboard, so it makes observability very, very easy. Make sure you try out Honeycomb MCP for yourself. To do that, you need to create an account, and the link is in the description. Also, if you want a step-by-step document for this project, comment "AI observability" below, and I will see you in the next video. Bye.
Observability project | Honeycomb MCP server | AIOps | DevOps monitoring project

In this DevOps project, we use the Honeycomb MCP Server to debug the OpenTelemetry (OTel) Demo Microservices App running on Minikube. You'll learn how to set up Honeycomb, deploy the OTel demo using Helm, and use the MCP server inside your IDE to ask real-time observability questions. This is a hands-on AIOps + Observability project, perfect for your DevOps resume!

Try out the Honeycomb MCP server: https://fandf.co/49F6sqy

What is honeycomb.io? Honeycomb is a modern observability platform designed for debugging complex distributed systems using high-cardinality data, BubbleUp, and fast querying.

What is OpenTelemetry? OpenTelemetry is an open-source observability framework that provides APIs, SDKs, and tools for collecting traces, metrics, and logs from applications and sending them to backends like Honeycomb.

Timestamps
Intro 0:00
What we do in this DevOps project 00:58
Minikube setup 01:50
Honeycomb account and environment setup 02:20
OTel app deployment using Helm 03:52
Honeycomb MCP server setup and demo 07:49

Connect with me
LinkedIn: https://in.linkedin.com/in/nasiullha-chaudhari
Twitter: https://x.com/Nasi_007
Instagram: https://www.instagram.com/nasee.remote

The CloudChamp YouTube channel helps DevOps engineers learn and master new technologies, with tutorials on various DevOps and AI tools like Docker, Kubernetes, Terraform, CI/CD, and more. Subscribe to CloudChamp for more Cloud and DevOps tutorials and projects.

Also check out my other videos
MCP explained: https://youtu.be/Xs9AwE2lyHg
Observability explained: https://youtube.com/shorts/6aTqpoaXjlQ
Monitoring and logging for DevOps: https://youtu.be/nD6JfA9nGOg
What to know before learning Kubernetes: https://youtu.be/bjrlo9LwMJY
DevOps Roadmap 2025: https://youtu.be/P77GhVfvuKQ
Automate Terraform deployment with GitLab CICD: https://youtu.be/oqOzM_WBqZc
Docker explained with Docker Scout and docker init features: https://youtu.be/rqEcheJgquA
Master Linux for DevOps: https://youtu.be/lCq4mYQL0WY
DevOps projects for resume: https://youtube.com/playlist?list=PLOa-edppsqFnW0pQrnYf_2rEUI8mgfwuX&si=jhEvLPYg7DdnTRJb

Model Context Protocol (MCP) is an open standard driven by Anthropic that provides a uniform interface for large language model applications to connect and interact with external data sources and tools. The Honeycomb MCP Server lets you query your observability data directly from your IDE using natural language. You can ask:
• "Which services are slow?"
• "Show me errors in the last 5 minutes."
• "Which endpoint has the most latency?"
This brings AIOps-style debugging directly to your development workflow.

Try out the Honeycomb MCP server: https://fandf.co/49F6sqy
For more DevOps tutorials and projects, subscribe to CloudChamp.
Join this channel to get access to perks: https://www.youtube.com/channel/UCbg9O0JF3rVKev6wpI5_u5g/join