The OpenTelemetry Operator for Kubernetes helps us automate the
creation of OpenTelemetry Collectors
and manage them in a Kubernetes cluster.
The OpenTelemetry Operator also allows
us to automatically inject zero-code
instrumentation into pods for different
programming languages. This lets us
collect things like metrics, logs,
and traces from containers running in
our cluster with zero code changes. For this to
work, we need a few Helm charts. So we
install the OpenTelemetry Operator
chart, and we also deploy the cert-manager
chart, which is a dependency. To do that,
we say helm install cert-manager followed
by helm install opentelemetry-operator.
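
As a rough sketch, the installation could look something like the commands below; the repository URLs, release names, and namespaces are assumptions, so check the cert-manager and OpenTelemetry Helm chart documentation for the exact values:

# Add the chart repositories (assumed URLs)
helm repo add jetstack https://charts.jetstack.io
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Install cert-manager first, since the operator's webhooks depend on it
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Then install the OpenTelemetry Operator itself
helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
  --namespace opentelemetry-operator-system --create-namespace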
Then we create a collector. Now, the
collector is a custom resource that's introduced
by the OpenTelemetry Operator. So you
say kind: OpenTelemetryCollector, you
give it a name, and you can do things
like set memory limits and requests,
node selectors, and labels. But the
important part is the config section.
The config section allows us to specify
receivers, processors, and
exporters, and then put it all together
using a service pipeline. So here you
can see we enable an OTLP receiver. This
is where our traces and metrics will
come in. We then specify some basic
processors: a memory limiter, to stop
our OpenTelemetry Collector from
running out of memory, and a batch
processor, which allows us to batch
things up before exporting them. Then we
have some exporters. We first export
our traces to Tempo, a tracing
data store running in our cluster, and we
define an exporter for Prometheus, since
we're running a Prometheus data store
inside of our cluster. Then we put it
all together using a service pipeline.
We have a traces pipeline and a
metrics pipeline. In both of these
pipelines we use our receivers and our
processors, and we tell the collector which
exporters to enable: the Tempo exporter
and the Prometheus remote write
exporter. We simply kubectl apply this
to our cluster, and then clients can send
metrics and traces to us.
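
A minimal sketch of such a collector resource is shown below; the resource name, the Tempo endpoint, and the Prometheus remote write URL are assumptions and will differ per cluster:

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: trace-collector           # assumed name
spec:
  mode: deployment
  resources:
    requests:
      memory: 200Mi
    limits:
      memory: 400Mi
  config:
    receivers:
      otlp:                       # traces and metrics come in over OTLP
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      memory_limiter:             # protect the collector from running out of memory
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15
      batch: {}                   # batch telemetry before exporting
    exporters:
      otlp/tempo:
        endpoint: tempo:4317      # assumed Tempo service endpoint
        tls:
          insecure: true
      prometheusremotewrite:
        endpoint: http://prometheus-server/api/v1/write   # assumed remote write URL
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp/tempo]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [prometheusremotewrite]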
In this example, I have a kind: Instrumentation resource.
I'm using the zero-code instrumentation
for OpenTelemetry. The most important
setting is the exporter endpoint, where
I point to the trace collector in
my cluster that I've just created. Then
there are specific settings for .NET,
and some for Go, but other
programming languages are supported as
well. I kubectl apply this into my
cluster, and that means all my .NET and my Go
applications will be auto-instrumented.
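
A minimal sketch of that Instrumentation resource might look like this; the resource name and the collector service address are assumptions (the operator typically exposes the collector through a service named after the OpenTelemetryCollector resource):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-instrumentation      # assumed name
spec:
  exporter:
    # assumed service address of the collector created above
    endpoint: http://trace-collector-collector:4318
  propagators:
    - tracecontext
    - baggage
  dotnet:
    env:
      # assumed endpoint for .NET (OTLP over HTTP)
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://trace-collector-collector:4318
  go:
    env:
      # assumed endpoint for Go (OTLP over gRPC)
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://trace-collector-collector:4317

Injection is then switched on per workload with pod annotations such as instrumentation.opentelemetry.io/inject-dotnet: "true" or instrumentation.opentelemetry.io/inject-go: "true" (Go additionally needs an annotation naming the target executable).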
That means in Grafana I can hook up
these data sources, so I can see my
Prometheus and my Tempo database: Tempo
holding my traces, Prometheus holding my
metrics. I can deep-dive into individual
traces. Over here I can also explore my
Prometheus data source, and I can see I
have full HTTP visibility.
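
For reference, a sketch of provisioning those two Grafana data sources could look like this; the service URLs are assumptions for an in-cluster Tempo and Prometheus:

apiVersion: 1
datasources:
  - name: Prometheus              # metrics written via remote write
    type: prometheus
    access: proxy
    url: http://prometheus-server
  - name: Tempo                   # traces exported by the collector
    type: tempo
    access: proxy
    url: http://tempo:3100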
Now, if you want to see a full video on the OpenTelemetry
Operator, check the link down
below.