>> Okay, we're live. Yet another Ask Us Anything: me, Viktor, and Scott in the other corner. Ask your questions, otherwise we go home and stop the stream. So, start typing questions. We're going to answer them if we know. If we don't, we're going to say that we have no idea and move to the next one. First come, first served. Now, while waiting for questions, until people start typing something: Scott, what's on your mind? Are you impressed with Mo? Is that the
>> Yeah, that's the new thing that I'm looking at. No. [laughter] I've been trying to catch up with all the announcements from re:Invent this week in Vegas. Beyond whether the announcements are good or not, I will say it is quite impressive that usually within half an hour of a session, the recording is up on YouTube. There are thousands of recordings already, which is quite amazing.
>> Yeah. And you watch them all?
>> I watch a lot of them at 2x speed, or listen to them at 2x speed, and if anything interests me, I bookmark it to actually listen to later at normal speed.
>> You know, to me the most interesting announcements are those that result in sponsors of AWS re:Invent going out of business, [laughter] because everybody waits for the keynote thinking: okay, what will AWS introduce, and am I out of business now?
>> Exactly.
>> Yeah.
>> Uh okay, we got a question.
>> So, let's see. uh this more like a
statement than a question. I'm working
on a plane uh ID policies assistant
where it will compare policies between
environments
not a question statement. So very cool.
>> Ah, here's the question.
>> Okay, I have a question: what is the best way to compare two policies using AI, to give the best suggestion to developers? Like, two policies, to give the best suggestion. Do you understand the question, Scott?
>> If the idea is that you have a Kyverno policy that differs between, let's say, dev and prod, I would say (a) it probably shouldn't. One should be in block mode and the other in audit mode, right? I wouldn't try to have different policies. I think having the same policies in dev is better, simply because you just don't necessarily block on everything there. But what would be the "best suggestion to developers" in that case?
>> That's the thing that confuses me.
>> Maybe: would this fail in the next environment? At which point I would say you don't use AI; you use the dry-run capabilities of the Kyverno CLI to run against a resource and see. This is one of those things where I think we sometimes overthink where AI is going to be beneficial for us. I think the best thing to do is to have a mechanism. Now, it may be an MCP, but that MCP in the back end should be calling the Kyverno CLI against the policies; it should be pulling in the two files and doing actual code logic: pull the two files, diff, respond back. There should be no agentic element in the back end. Maybe an agent calls your tool, which is basically a script, but nothing beyond that.
>> The logic I tend to use is: use automation for repeatable things, and use AI for surprises, or the creative part. And in the context of policies, assuming that we guessed right, if the point is, hey, first run policies to see whether they pass, second, if they don't pass, fix it, then running the policies would be what you, Scott, mentioned: dry run, or apply to a cluster that already has those policies. But then, if it fails, you can ask AI how to fix it, right? Because it's not necessarily the case that every time something fails a policy the fix is exactly the same. It depends on the context of what you're doing, the resources, and so on.
>> Use AI for non-repeatable things. That's my gist.
>> Right.
>> Yeah, exactly.
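For reference, the dry-run flow described above can be scripted with the Kyverno CLI alone, no AI in the loop; a minimal sketch, where all file paths are hypothetical:

```sh
# Dry-run a policy against a resource and see what would pass or fail
kyverno apply ./policies/require-labels.yaml --resource ./manifests/deployment.yaml

# "Compare two policies" as plain code logic: run both policy sets
# against the same resource and diff the results
kyverno apply ./policies/dev/require-labels.yaml --resource ./manifests/deployment.yaml > dev.txt
kyverno apply ./policies/prod/require-labels.yaml --resource ./manifests/deployment.yaml > prod.txt
diff dev.txt prod.txt
```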
>> Okay. From Joseph: "I have a quick one. Do you see any benefits in using different NGINX ingress controllers to stream files, like multipart uploads? Is it okay to use the same instances for streams and API calls?" I never tried streaming, or even doing something that large, through NGINX. So I have no idea.
>> Yeah. I mean, again, your issue is going to be throughput, right, in the end, on all of these things, which is why for a lot of things like that people are using a NodePort, a Service of type LoadBalancer, or an ALB, which is external and gives you a different load balancer. There are use cases. It really isn't a question of streaming and API calls together. It's a question of how much throughput you need to have, and how much throughput that single ingress is going to use. Now, you mention the NGINX ingress controller; whether you're talking about Ingress NGINX or NGINX Ingress is a big difference. If you're using Ingress NGINX, get the hell off of it, because in March that's dead. And if you're on NGINX Ingress, I'm sorry. That one's not dead, but I'm sorry; I would say get off of it as well, and move to Gateway API. When you move to Gateway API, this is going to be one of the benefits you get, because it makes it very easy to spin up multiple gateways in case you do need that additional throughput, and to dynamically attach routes to them and all of that. So my recommendation would be: get the hell off of Ingress NGINX and onto something that's more performant, something that's better, something that is more modern.
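A minimal sketch of what that looks like in Gateway API; the gatewayClassName and all names here are assumptions and depend on the implementation you run:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: streaming-gateway           # a second Gateway dedicated to high-throughput traffic
spec:
  gatewayClassName: my-gateway-class  # hypothetical; set by your Gateway API implementation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: uploads
spec:
  parentRefs:
    - name: streaming-gateway       # attach the route to whichever Gateway should carry it
  rules:
    - backendRefs:
        - name: upload-service      # hypothetical backend Service
          port: 8080
```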
>> Okay, let's see the next one. This is, I guess, more of a statement than a question: "We have a process in dev where, if you want to promote new or updated policies into higher environments... We have 20 teams, and each team has a different PlainID namespace." I'm not sure whether you want to comment on that or just leave it as a statement, Scott.
>> Yeah. No, and it's related to the previous ones about the policies. Again, it's what we were saying: a lot of this seems scriptable, and if it's scriptable, this is a script. Now, whether you decide that that script is exposed through an MCP or not, that's up to you and what interface you want to give your developers. I don't think it needs to be. I think having that as a GitHub Action that runs in every repo and pushes feedback back on the PR, before you make an update or whatever it is, is the right place to do that. But either way, don't be using AI for this. AI may be the way you get to the script, not what your logic does. You don't use AI for this logic.
>> I mean, you don't use AI for anything that is repeatable, period.
>> Exactly.
>> You can use AI to write that repeatable thing. That's it.
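A sketch of the GitHub Action idea mentioned above, running the Kyverno CLI on every PR; the action version and repository paths are assumptions:

```yaml
# .github/workflows/policy-check.yaml (hypothetical paths)
name: policy-check
on: pull_request
jobs:
  kyverno:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the Kyverno CLI
        uses: kyverno/action-install-cli@v0.2.0   # version is an assumption
      - name: Dry-run the policies against the manifests in this repo
        run: kyverno apply ./policies/ --resource ./manifests/deployment.yaml
```

The job fails the PR check when a policy would block the change, which is the "pushes feedback back on the PR" part, with no AI anywhere in the loop.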
>> "If you're using Flux and planning to incorporate Crossplane, how would you handle or approach fleet management for 40-plus clusters?" I'm not sure I get it. I can imagine an infinite number of variations of this. Do you get it? Is it Crossplane on different clusters, or Crossplane managing stuff in different clusters?
>> So, in general, my approach on all of these things is very simple. You want an actual fleet management tool. Can you build one with Flux plus Crossplane to solve that? Yeah. Do you want to? Probably not. You probably want a fleet manager that handles a lot of these things, and then have Flux and Crossplane working in tandem with a tool that is actually built for fleet management, if you have the money. If you don't have the money, then you build what I like to call the poor man's fleet management: Cluster API, Crossplane, Flux CD.
>> Wait, wait, wait. If you have the money, then go to Upbound, give them money. [laughter] Mention my name, I might get a raise, who knows. [laughter]
>> Exactly. No, so I think that you actually want a fleet manager, which, as I've said multiple times, my favorite is Spectro Cloud, but it could be Rancher, whatever it is. You want an actual fleet manager here. D2iQ, which is now NKP, right? Whatever it is, Giant Swarm, I don't care. You want a fleet manager that actually handles this. Hopefully that fleet manager supports Crossplane to provision it. If not, it really comes down to what you mean. If you put back a comment on what you mean: are you talking about Crossplane to provision a cluster? Are you talking about Crossplane to manage resources within those clusters? Then we can give a better answer. But kudos on using Crossplane, and kudos on using Flux. Both are the right choices relative to the alternatives.
>> Okay, I just got my alarm ringing, because I always forget to mention that we have a sponsor, Octopus. I even have a screen here. There we go. Octopus is a sponsor of this stream. Go to their enterprise support for Argo, whatever you need for Argo. They're maintainers. They know what they're doing. Go there, check them out. Tell them we sent you. And that's it. I feel we've repeated it so many times that I'm getting bored repeating it. Anyway, they know what they're doing. They have folks behind the project, maintainers of the project. If you need help, if you need support, go to Octopus and contact them, and you will be happy. They will be happy. Everybody's going to be happy. And that's the worst possible pitch one can ever make. There we go.
Okay, let's go back. Where were we? "I'm planning to implement Backstage for fixing issues like software inventory, docs, and governance. Any tips?" This is your domain, Scott. If I start, I will go down the wrong path, like: hey, you don't fix issues like software inventory, docs, and governance from Backstage.
>> You decorate it. You display the fixed version through Backstage. That's my gist.
>> So, there's a few ways of kind of fixing things, and what I mean by "kind of fixing things" is that I strongly recommend looking into Tech Insights and the ability to add retrievers that allow you to do scorecards on things. Number one, I believe in the idea of auto-ingesting resources, not requiring there to be a catalog-info.yaml in every repo. I built the Kubernetes Ingestor plugin, I built one for VCF Automation, there's one for Azure resources, and there are about 30 of them for AWS, right? Different ways to ingest resources into the catalog automatically from the runtime. That, I think, is the best approach. And then have tags, for example in AWS or Azure, be how you map to specific values within the catalog entity; same thing with Kubernetes annotations, to map things into the entity itself. Now, that's going to give you a very bare-bones manifest, because it's not going to have all of the rich data that you would want. That is where you build Tech Insights scorecards that let teams see, both on a component level and a team level, where they stand, and you gamify it against all other teams: hey, your component is in the lowest 10% of applications in terms of metadata coverage within its entity. You don't have your docs in the system? There's a scaffolder action, for example, if you use Confluence, that can take Confluence content, convert it to Markdown, and push it up with MkDocs into the Git repo, so it can then be managed as docs-as-code instead of living in the slowest-loading document system that exists on planet Earth, from Atlassian, who had to buy a browser because their products are so slow. In the end, there are multiple ways of doing this. I think it's: auto-ingest, number one, so you have everything in bare bones, and then gamify the enrichment so people add their metadata into the platform. That's the way to show the benefit. The other thing, and this is very important: you have to figure out what the biggest pain point is right now in the company that isn't any of those things, and make it so that you have a solution to it in Backstage that only works if the person is up to a certain standard on the scorecards. Like, only if they add this data do they all of a sudden get this huge value. You never want to force someone down a golden path. What you want to do is make it worthwhile for them to put in the effort to get onto the golden path. So don't just try to solve those issues; figure out what the actual pain point for the end user is, and build a solution around that, one that them standardizing these things, through tooling you offer in Backstage or outside, will enable. That would be my general approach.
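As a rough illustration of the mapping just described, a bare-bones auto-ingested entity enriched through annotations and tags might look like this; the names are hypothetical, and only the backstage.io keys shown are standard:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api                            # hypothetical component
  annotations:
    backstage.io/kubernetes-id: payments-api    # maps runtime Kubernetes objects to this entity
  tags:
    - python                                    # tags from the cloud provider or repo can land here
spec:
  type: service
  lifecycle: production
  owner: team-payments                          # hypothetical group in the catalog
```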
>> I can never add anything to what you're saying about Backstage; you just know it much more than me. Okay, so, more from Joseph: "My setup is Ingress NGINX, the one that is being discontinued, behind an LB, and my worry is buffering with multipart uploads. I worry it affects my other use cases, API calls, and so on." I feel that's the answer you already gave, Scott. Go for Gateway API, because it's so easy to split it into multiple... how do they call it? My brain froze.
>> Multiple gateways, and then you have...
>> Multiple gateways, exactly. You basically create a GatewayClass, then you can create multiple Gateways, and when you create your HTTP, TLS, TCP, gRPC, or UDP routes, you attach them to a gateway. Again, I believe there is zero issue with streaming and API calls going on the same ingress, if the total throughput you need fits within the boundaries of that specific ingress.
>> You mean ingress throughput, or the networking itself?
>> The networking, right, and the ingress itself. There's the network itself, right?
>> If the networking is not enough, then it doesn't matter how many times you split it; it's still the same network.
>> Well, it does matter, because each gateway can run on a different node, which means it has a different NIC, which means it has another networking stack, which means it can run on its own.
>> Okay, yeah, but it's still... Now, this is my ignorance, but those requests are still entering the cluster before they even reach the ingress, right?
>> Well, again...
>> And then it's forwarded.
>> Okay. In a typical environment, in typical networking, if we take out of the question AWS, who still believe in routable pods and VPC CNI and all that crap... If you take a normal environment where it's not routable pods, because then it's a bit different, but a normal environment, Google, Azure, AWS if you're not using VPC CNI with routable pods, on-prem, whatever it is: typically it will create a Service of type LoadBalancer. That load balancer is an external load balancer that has, as its backends, a NodePort on the nodes of your cluster. Traffic goes to the NodePort; from there it goes to kube-proxy, or to Cilium if the cluster is not using kube-proxy; from there it goes to the actual pod itself, which is where the networking is also processed; then it goes back out through kube-proxy to the service endpoints, chooses an endpoint, and goes to your pod. Now...
>> When I was referring to the bottleneck, I meant all of that before it even reaches the ingress, right?
>> So, there's the external load balancer's throughput. When I have two gateways, traffic goes to two different load balancers, because it's two different IPs. Meaning, again, I have separate network traffic, so I've doubled my network throughput outside of the cluster. When I get to the cluster itself, it depends on how stupid or how smart the load-balancing algorithms are in how they choose the backends they go to. If they do it based off of least connections, based off of load, based off of round robin, whatever, you have a better chance of them going to two separate nodes. And if it goes to two separate nodes, then again, the node traffic on that internal entrance into kube-proxy is going to be minimal. Kube-proxy is running as a DaemonSet, so you're not going to have an issue in that case; it's just per node, whatever it lands on is where it goes through. Then it goes to the ingress. The ingress is where the actual majority of the processing happens, because that's where we are doing layer 7 processing for the ingress rules. That's typically where the most CPU cycles go. From that point it gets routed out into kube-proxy, and then it just comes down to what traffic you have in your cluster. But the ingress is a key element, because that's where layer 7 offloading is happening: typically TLS decryption happens there, maybe TLS re-encryption if you're re-encrypting to the backends. All of these things happen in the ingress, so that's where the most CPU time is being spent, and the most wait time.
>> Okay, back to Soyo Eric: "There are multiple ways to approach it: buy versus adopt, like Armada, versus build. We are planning to use Flux and Crossplane as fundamental platform components." I mean, without entering into the fleet question itself, those are the right components. You just made Scott's day by saying Flux and Crossplane.
>> Exactly.
>> Now, there are many ways. I still don't fully understand your use case, because there are still many ways. You can let Crossplane manage resources in other clusters. You can let Flux manage resources in other clusters. You can have one Crossplane, or many Crossplanes. It gets a bit complicated from there. I don't know about you, Scott, but I cannot answer in more detail without knowing more.
>> It's hard to answer in more detail, except that I can tell you my general approach, which is that you should have a brain cluster. When you reach that scale of 40-plus clusters, you should have a brain cluster with Crossplane and Flux. Let's say, in this case, that you went down the build approach instead of the buy. In that case, I would have a brain cluster with Flux and Crossplane that provisions clusters: Flux syncs down a Crossplane manifest; the Crossplane manifest that creates the cluster is going to install the cluster, install Crossplane and Flux CD onto it, and install the base Flux Kustomization, Git repository, whatever it is, that will sync the rest of the things down into the cluster. So all that the main Crossplane and Flux are in charge of is provisioning the cluster and provisioning that local management plane that manages everything else. I do not like the approach of having an external Crossplane and Flux managing everything in all the other clusters. You get single points of failure, you get bottlenecks, you get weird things, and then you have to shard things and worry about performance. No: have a single pane of glass that manages the pushing of that general configuration, and then the reconciliation happens by a local Flux, by a local Crossplane, all of that. You can use the GitHub Crossplane provider locally to provision, even if you wanted a different GitOps repo per cluster, and create the GitOps repo in advance.
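A sketch of that brain-cluster pattern under the build approach: Flux syncs a claim like this into the management cluster, and the Composition behind it (not shown) provisions the cluster and bootstraps a local Flux and Crossplane. The API group, kind, and every field here are hypothetical and would be defined by your own XRD:

```yaml
apiVersion: example.org/v1alpha1     # hypothetical API group defined by your XRD
kind: ClusterClaim                   # hypothetical claim kind
metadata:
  name: team-a-prod
spec:
  region: us-east-1
  nodeCount: 3
  # The Composition behind this claim would create the cluster,
  # install Crossplane and Flux CD onto it, and point the local Flux
  # at the cluster's own GitOps repo, so reconciliation happens locally.
  gitOpsRepo: https://github.com/example/team-a-prod-gitops   # hypothetical
```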
>> Be careful with that last recommendation of Scott's, because you can easily hit the GitHub API limits.
>> Well, if you do it like you, Viktor, and add files in. If you're just creating GitHub repos, you're not going to have that issue. So create a template repo, clone the template repo into a new repo wherever it is, and then you don't have that issue. That issue comes when you have the compositions, Viktor, with 50 files in them, [laughter] so you're reconciling.
>> What can I say? When I say something, I say it from experience. Don't do what I did. That's most of the cases, yes.
>> Exactly. But creating a repo is fine. So create the repo, and I think that's the right way of approaching it.
>> Okay, more from Soyo Eric: "What do you feel is a good MCP gateway? There are multiple MCP servers popping up." Let me start by saying that the few MCP gateways I saw did not fit what I feel are the requirements. Let me give you a gist of what I would expect from an MCP gateway. First is, of course, authentication, and that's already tough, because many MCPs themselves will not give you the means to authenticate different users. You will see, I don't know, a Kubernetes MCP that can only accept one kubeconfig, and if you want different settings and authentication per user, that's going to fail miserably. I blame both the gateways and the MCPs themselves for that. Actually, let me stop for a second and go back: I think that companies will eventually be running MCPs remotely, instead of everybody running 50 MCPs locally and stuff like that. I think that makes sense. Now, one thing that I haven't seen, and I'm not saying it doesn't exist because I haven't used all of them, is the ability to select which tools will be accessible. Because you may want, among others, let's say a Kubernetes MCP or a Git MCP, and that exposes 50 different tools that are going to kill your context when you just need five. Some kind of selection, so that the gateway does not expose the MCP as-is but actually limits the tools to what your company, whomever, thinks you should be using, would probably be my second requirement, outside of authentication.
By the way, your terrible CLI tool that costs more money than a mansion to run its AI models, Claude Code: they don't allow you to select tools from an MCP, do they?
>> No, they don't. They don't.
>> Yeah, exactly. What Microsoft and VS Code have enabled for a long time, you guys can't enable, because it's a CLI and very incapable.
>> No, I don't think the reason is that it's a CLI. It's because it's Anthropic. [laughter]
>> Still, that's a hill I'm going to die on: right now, Claude Code is the best client, combined with Anthropic agents. That's a hill I'm going to die on, with Open Code being the second.
>> Open Code is getting so good right now. It's insane.
>> Yes.
>> It is so good.
>> Okay, next one, from Eden R: "What do you think about Flux Uncontained?" I haven't used it. I don't know. Have you used it, Scott? I googled it quickly when I saw the question, but I still don't understand from the pitch on the website what it is. So I don't know. I haven't used it.
>> I don't know.
>> You know, whenever Scott stops paying attention, that means he's googling. He's going to make an educated guess.
>> I really don't know what it is, and I don't get it from 30 seconds of looking at it. So, yeah, don't know.
>> Yeah. But I feel that's kind of the feedback to the project: if you cannot figure out, just by glancing at it, whether you want to check it out more... The project may be amazing, but they need to fix their docs, because I should be able to look at the main page, and it should not be an AI-generated picture of a computer with a Kubernetes wheel somehow coming out of it. Makes no sense. Anyway, I'll do my best to check it, but I have no idea.
Okay, next one, from Chris: "Does it make sense to deploy Cilium via Argo CD, or would you deploy it before?" I would deploy it before; I would set it up during creation of the cluster.
>> Well, yeah, set it up before. There are ways to change a cluster to use Cilium, but that's such a nightmare and so much pain that I don't want to do it. I would rather create a new cluster with Cilium and then transition stuff to that cluster than upgrade an existing cluster to Cilium. Upgrading to Cilium is just a nightmare. I don't want to do it ever again.
>> Ever. Changing CNIs is always a nightmare, be it Cilium or anything else. And Cilium is even that much worse, because you're moving off of kube-proxy and you're changing everything, which makes it even more of a complex change than other CNIs.
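A sketch of setting Cilium up as part of cluster bootstrap, before any GitOps tooling touches the cluster, using the Cilium Helm chart; the values are assumptions and depend on your environment and chart version:

```sh
# During cluster bootstrap, before workloads and before Argo CD or Flux:
helm repo add cilium https://helm.cilium.io
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true   # replacing kube-proxy is the painful part to retrofit later
```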
>> Okay. From Luther: "Is standardizing the components needed to build applications and APIs a priority for building an efficient engineering platform?" There's always a new variation of this one coming up. I'm not going to say that it's needed; you can do it in many different ways. But I will stand behind the claim that if you want to build a platform, it doesn't matter whether it's internal or external, you need to expose your stuff through APIs if you want to do it right. I'm not even going to go into technology. If it's a platform, it doesn't matter whether it's public, like AWS, or private, like an internal developer platform: APIs are not strictly a prerequisite, but in this century they should be. Yes.
>> Yes. And I would say you do need to standardize components to build applications in order to have an efficient engineering platform. If you don't have standardization of components, you don't have a platform. Now, that doesn't mean you can't have multiple standards. You can have a way to build a container through Buildpacks and a way to build a container using Dockerfiles. That's fine; you have two standard ways of doing something. A standard does not necessarily mean there's only one, but you need to create some level of boundaries so that you can actually offer a service, because the idea of a platform is that you're offering some level of a service to someone. And in order to do that, you have to standardize. You can't offer everything.
>> Here's what I feel is one of the biggest mistakes people make when trying to build some kind of internal platform: they try to figure out how to do it, or standardize it, or whatever you want to call it, for 100% of use cases, and that means that because of the 20% of outliers, you're going to make everybody miserable. There is nothing necessarily wrong with having exceptions, or cases that exist only for one team. Using the platform shouldn't be a requirement. You figure out what the things are that will help the majority of people, and whatever that is becomes your platform. It's similar to: hey, you're using AWS; it's a platform, right? If it doesn't cover 100% of your cases, but 80%, that's still fine. That's great: you can do the other 20% without their services, somewhere else, perfectly fine. And once you capture that idea, that I'm doing 20% of the work for 80% of the people instead of vice versa, then standardization becomes even more of a no-brainer, in a way.
>> Exactly. And once you've covered that 80%, you can create more generic offerings that also cover the other 20%, or 10% of that 20%, all of that. They're the last ones, though. The first thing you deal with is how you build simple services for those 80%. You don't build the same service to cover the other 20%. If you want to create something that also solves that 20% later, you can, but you don't put the 80% at the expense of that 20%. That's the key.
>> And combine them, right? Let's say you standardize: everybody uses OCI images and everybody uses GitHub Actions. Now somebody comes up with Wasm, and that's a single person or a single team. You say: I'm not going to standardize your specific use case, but you can still use GitHub Actions. You just replace the build.sh in that GitHub Action with something else; it's fine. All those 20% can still benefit, just not necessarily with a black-box solution.
>> Right, exactly.
>> Okay: "Is it worth using Crossplane to provision resources in infras living on-prem? I know it's not that friendly with on-prem, despite having some providers, but still not sure." So...
>> It's very friendly on-prem.
>> I think there are two things here. One is Crossplane providers that allow you to interact with something like AWS, Azure, what not, and that's where you might hit some walls with Crossplane: there may be no provider for something on-prem that you're using. You can build your own, but let's exclude that for a second; let's say you don't want to build anything yourself. Now, the other, and in my opinion more important, aspect of Crossplane is creating XRs, what we call XRDs, composite resource definitions, and the controllers that generate those API endpoints we mentioned before, and figure out what to do when somebody creates whatever: oh, do this, do that, on-prem or wherever. That's valid no matter what. So there is nothing wrong with saying: okay, I have this case that is not covered by the Crossplane providers, but there is some other operator, CRD, whatever, from some other vendor. Cluster API would be a great example: Cluster API covers situations that are not covered in any form or way by Crossplane providers. You can still build that XRD, that Composition, and say: okay, I will use the AWS provider for this, and then I will do something else for something else. It still composes together, no matter whether you're using Crossplane providers or any other Kubernetes resource. So the question is really whether you will find a Kubernetes resource to do something. That's the real question. Or you have to build your own.
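For context, the XRD mentioned above is just a Kubernetes-style API definition; a minimal sketch, where the group, names, and fields are all hypothetical:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xdatabases.example.org      # hypothetical API
spec:
  group: example.org
  names:
    kind: XDatabase
    plural: xdatabases
  claimNames:
    kind: Database
    plural: databases
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: string      # e.g. small/medium/large; a Composition maps this
                                    # to cloud resources, on-prem operators, or both
```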
>> Exactly. Exactly. And if it's build-your-own, Crossplane will be the easiest way for you to build your own, as long as there's a Terraform provider for whatever it is you're trying to target. Unless you're talking about, like, Hyper-V, whatever it is, there are Terraform providers for basically everything on-prem today.
>> Yeah. And these days, unless it's something really legacy and obscure, there should be an operator or CRD or whatever controller for that something.
>> Well, there aren't. Proxmox doesn't have one, vSphere doesn't have one. Nutanix kind of has one for some of their things, but not really. I mean, again, you need to build them, but they have Terraform providers. So use the Terraform provider to build a Crossplane provider, and it still is going to be so much better and give you so much benefit.
>> Okay, now, from Luther, this is specifically for you, Scott, not for me, even though we were both at KubeCon: "Have you seen any trends or ideas at KubeCon?"
>> I saw trends of really good bars in Atlanta. [laughter] Those are the trends that I saw. No. So, I will say there were a few things. Not really. I've been watching a lot of the videos from KubeCon as well. I think that, finally, the realization seems to be sinking in with people that AI is just a workload.
>> You're not talking about running it.
>> Yeah: running it, building it, inference, all of the training and all of that. There's a lot of cool things happening around that, and that's where the AI talks really were. I was really happy to see that people were kind of sick of talking about AI in every session; it was there, but always as that anecdote of, okay, I guess we need to talk about AI, and it was there for the one minute or whatever, and then it was back to the actual technology part. Because AI is never the goal. I believe AI is a tool that we can use to do things. At last year's KubeCons it was 100% AI; Kubernetes was almost not mentioned, because it was all just AI, and it was horrifying in my mind. I think it's gotten a lot better this year at KubeCon. I will say I have the same rule, basically, as Viktor. I don't know if you entered any sessions that weren't yours this year; I did not enter a single one that wasn't my session, because there's only one track that matters to me, which is the hallway track of actually talking to people, meeting people, all of that. Were there some cool startups that I saw in the pavilion? Yes. But it's really about building connections with the people there.
>> You know, I will repeat what I've said quite a few times before. I think we've entered the phase where innovation, or completely new ideas, are not common anymore. I'm not saying they don't exist, but they're not common, because Kubernetes has now entered the mature stage, right?
>> Yep.
>> Similar to, I mean, this would be an overstatement, but similar to Linux, in the sense that you don't expect, if we go to some Linux conference, some completely new idea, like: we have a completely new kernel, a complete game-changer. I'm not saying it's not happening, but we are way past the stage where we would go to KubeCon and the first day your head is spinning: what just happened? I just heard for the first time of something called service mesh, what the heck is that? We are not in that phase anymore. Again, I'm not saying it doesn't exist.
>> Right.
>> It's just not as common as it was before.
>> What we're in the phase of now, which I really like, is: how do we simplify operations of these things? Because till now we were geeking out on technology: Envoy versus HAProxy versus Ingress versus NGINX, and who has better performance, and Cilium versus Calico versus Antrea versus this. It's like: no, none of that matters. How do I make it easy to implement a service mesh? How do I make it easy to gain these networking capabilities? How do I make it easy to upgrade my clusters? Like what we're seeing over the last two years: EKS Auto Mode, AKS Automatic. Before that you had GKE Autopilot, but it's as bad as it always was, so don't use it; the other two are really good. We're seeing more and more fleet management. We're seeing more and more FinOps solutions out there that are actually doing things in smart ways, like optimizing resource requests. I think the only thing that was kind of interesting, and I had known about it because it's been GA since 1.33, but it is moving forward a lot, is all of the work going on in in-place resource resizing. There were some cool talks about that. Again, that's not an "oh my god" thing. That is: okay, how do we solve operational challenges that exist in running production workloads at scale on Kubernetes? And that's where we're at now: operational efficiency, not filling in the box of operations we don't have a solution for because it was a new platform.
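For reference, in-place resize works through a resize subresource on the Pod; a sketch, assuming a kubectl and cluster version recent enough to expose that subresource, with hypothetical pod and container names:

```sh
# Bump CPU requests on a running container without restarting the pod
kubectl patch pod my-app --subresource resize --patch \
  '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"800m"}}}]}}'
```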
>> I feel that part of the reason for that is that most people, or let's say many people and companies, are not trying to figure out which choices to make anymore. Companies are not competing anymore on "how can I convince you to choose Cilium instead of whatever else". You already made that choice, very likely, in a significant percentage of cases. Now it's: how do we make it work?
>> Exactly. And be it things like, right, Amazon just announced their EKS capabilities, managed Argo CD, which I hate, because I'm a Flux person, and Amazon abandoned Flux, and it was part of their product, and then they went, for some reason, with the sponsor of this video, or rather the core technology of the sponsor of this video, Argo CD. But in the end, that is going to make companies, at some level, decide: okay, I'm going with Argo CD, because that's what I get with my distribution. What's nice is that EKS used to be EKS, EBS CSI, maybe IRSA at some level or another; everything was AWS. And now, all of a sudden, they have add-ons for cert-manager, add-ons for external-dns, add-ons for Argo CD, add-ons for kro. In the end, same thing Azure has, same thing Tanzu has, same thing Spectro Cloud has. So decisions are being made for us, simply by the fact that most people are using a distribution.
>> Yeah. But for those creating distributions, like AWS, it is now infinitely easier and less time-consuming to do those things than, let's say, five, six, seven years ago. I don't want to enter into a discussion of this tool versus that one, but you can say, as AWS, for example: okay, what are the most commonly used tools in production for this type of problem? Argo CD? Cool, we're going to use Argo CD. I'm not saying it's better or worse; that's not the discussion. But, okay, what about networking? There are two or three choices we can choose from, not 50. Cool, let's put that as part of the box we are giving people. You can make that decision precisely because the industry already made the choices, not for everything, but for a significant part of the components.
>> Okay, Scott, is there a public repo for your Backstage-Crossplane workshop?
>> Yeah, so there are a few different repos. There's the repo in the back-stack org, and there's the KubeCon one, which has the entire workshop that you can spin up, the workshop I did on this with Courtney Nickerson and Anna Medina and, I'm sorry, I'm blanking out on the Argo CD guy's name.
>> Oh, my brain stopped. Christian, Christian...
>> Christian Hernandez, yeah, exactly. Right, so there's that repo; it's under the back-stack org on GitHub, the KubeCon NA 2025 one. And then there are also the Backstage plugins themselves, which are at github.com/terasky-oss/backstage-plugins. Just search "backstage plugins TeraSky" in Google and you'll find it right away. That's where all the source code is, including a demo app with all of the things and a link to a GitHub Pages website with all the docs on the plugins. Feel free to reach out on Slack or anything.
>> Okay: "Thanks for the earlier response related to Backstage. I'm struggling to map our organization's apps, services, and environments into the data model. It feels a bit rigid. Any advice on structuring?" Is it rigid? I never thought of it as rigid; I thought of it as different, maybe, rather than rigid.
>> So, again, you can call it rigid. It basically follows a kind of adapted C4 model for modeling things; it took the C4 modeling architecture and made a few caveats to it. You have a component, a resource, a system, a domain, and then you have users and groups. That's what you have, and you have APIs. Those are the kinds. I tell everyone: do not create custom kinds in Backstage. It is a horrifying idea. Don't do it. What you can do is have as many types as you want: there's a spec.type field, and you can have a million types of a component, of a resource, all of that. The general approach I have on these things is quite simple, which is that you should not be adapting Backstage, or any tool, to you. Do not adapt the tool to your organization. If you are using an industry-standard tool, adapt your organization to the tool, because if you start making the tool work for your organization, so you don't need to make any changes, in the end you are going to veer away from the product. And there's, you know, I'm forgetting the name of that law, one of those big philosophical things: if you create an API and publicize it, and there was a way you thought it should be used, but you didn't put in guardrails and there was another way to do something, someone will have used it that way. Someone will always use it...
>> ...in the wrong way.
>> And then they're going to break you.
>> That applies to almost everything.
>> I said any product. Adapt yourself to the technology and not the technology to you, because otherwise you won't gain the benefits of the technology. That doesn't mean you don't try it and see how it fits, but don't use the tool in a way that is unnatural for the tool.
>> I would correct you slightly on "adapt yourself to the tool". You don't necessarily always want to adapt yourself to the tool. If you realize that adapting yourself to the tool is too much work and unacceptable, maybe the tool is wrong. Or maybe you're not ready for the tool. I know back in the day we had cases where people could not adapt to Kubernetes, because they had WebLogic, and you'd say: okay, so you have two choices. You keep using WebLogic and just forget about Kubernetes for now, or you adapt to Kubernetes because you see the reason in it. But...
>> The tool might be wrong for what you're trying to do.
>> Kubernetes...
>> Don't run your WebLogic monolith in Kubernetes, though.
>> Keep it, and say Kubernetes is not a good choice for me; that's a valid option. Or you say Kubernetes is a good choice: we were maybe doing it right before, but now we're not doing it right anymore for that choice.
>> Right. One of the two.
>> Right. If all you want, and I can't believe I'm about to say this, but if all you want is a glorified UI around all of the things you have in an organization, and you don't want to make any changes to your organization, that's where tools that I don't like, but tools like Port, fit in, because they have the most flexible data model in the world, because they just don't care: JSON schema, build whatever data model you want. Backstage is for teams that want to standardize on the C4 model, which is the model that has proven itself in many other organizations. It is harder to adopt because of that, because it requires you to adapt yourself to the platform instead of the platform adapting itself to you. But I think long-term it's the better model. I think it creates standardization. I think it prevents the leaky abstractions people end up building in things like Port, because the second you have control over the data model, you're going to create bad abstractions, inevitably, in 99% of cases. But if you can't adapt yourself to the tool, Backstage isn't the right tool for you; go and choose a tool like Port or whatever.
>> And there is another reason: let's say you do change the tool to be how you envisioned it instead of how the authors did. You will likely have real difficulties adopting future new features or improvements of that tool, because you're starting to diverge from it. Backstage, Kubernetes, and others will assume: hey, when we develop this new feature, we develop it in a way that works for people who are already using the tool in this way, not in any random way.
Now, this one is from Med: it's supposed to be an alternative to containers, that's Flox. That's basically the only thing I understood. I just don't understand how, and this is me speaking without knowing what it does. Whenever I hear this, it's similar to Wasm in my head: hey, there is a better way to do it than containers alone. I'm all in, as long as my stuff, meaning the ecosystem, is going to continue mostly to work. And I don't know whether that's the case with Flox or not.
>> And please, name things in a way that... this is probably the third time I've said Flux instead of Flox. Please name projects in a way that I don't get confused.
>> Yeah, I agree. Again, if this is another way of building, in the end, a container image: fine. If it's not a container image, and now I need different runtimes in Kubernetes, and half of the things I have aren't going to work with it, I'm against it. So if this is a better way of building container images that doesn't require a Dockerfile, that allows me to do things in a better way, blah blah blah: fine with it. Wasm fails in my mind because of that; nothing works with it. And if this is going down that same approach, I'm very much against it. If it's like the idea of Buildpacks, which was a change in how I build a container compared to the Dockerfile approach, and it auto-detected and built, awesome: I'm a huge supporter of creating efficiencies in the SDLC, not of changing the technology, unless there is a big enough value from the tool that actually makes it worthwhile, and I have yet to see anything that comes even close to that. And all I will say is that, given how long this project has been around, if neither me nor Viktor have heard about it, and at KubeCon it wasn't being talked about, and we were walking around talking to people about these things all the time... I don't believe it's the next big thing. If there was zero talk about it that I heard at KubeCon, then if it was the next big thing, we probably would have heard about it. I heard Wasm a dozen times, and it's not worth anything.
>> I cannot speak about Flox, but I feel that the failure of Wasm, and I don't know how much of this applies to Flox or not, is that if you want to change things that are commonly used and established, you need to have a very, very good reason. It cannot be slightly better; it needs to be good enough to convince the whole industry to move to it. Kubernetes did that, right? Just to be clear, I am for change, and eventually we will replace containers with something else, and eventually we will replace Kubernetes with something else, but it needs to be for a sufficiently big reason, and you need to get to the point where the industry itself backs you on it. It cannot be just you, because then it's you against everybody, and that does not work.
>> Ask Docker how that works. [laughter]
>> "What are your thoughts on ephemeral environments? Is DevPod something an IDP should offer?" Without answering DevPod specifically: ephemeral environments are awesome. Not necessarily something you should always use, but being able to spin up an environment to do or validate something, and then destroy that environment, instead of polluting pre-production, staging, or whatever with random stuff, is awesome. And I'm a really big supporter of enabling everybody to go at their own speed. When I hear, "Hey, we just developed this, but we need to wait for this team to actually finish this in this environment before we can do that in this environment," I freak out. So if ephemeral environments are going to enable each developer in your organization to go as fast as that person can go, without waiting for others to do something, I think it's awesome, and ephemeral environments are part of that story.
>> Yeah. Ephemeral environments and remote development environments are similar but different. DevPod is not for ephemeral environments; it's for remote development environments, which may be ephemeral as well.
>> Ephemeral is typically...
>> What?
>> It's still ephemeral.
>> It is, right. But typically, when people say ephemeral environments, they're talking about ephemeral test environments, things like that, that get spun up from PRs, all that. Again, huge supporter of that, whether it be through the Flux Operator, through Argo ApplicationSets, through whatever mechanism is doing that for you. Awesome. Huge supporter of ephemeral testing environments. Remote development environments I am a huge supporter of as well. I used to be a huge supporter of DevPod. I built Backstage plugins for it. I really love DevPod. But the last release was in March. I have a major issue with that, because Loft have moved all of their focus to vCluster. It seems they have basically abandoned it, and that is sad in my mind, because DevPod really showed a lot of potential. I loved it as a tool.
>> It's sad, but it's also reality. Loft spun up multiple projects trying to see what the market fit for their business is, because they need to have some business, and they figured out vCluster is the thing they can build on.
>> Exactly. And I don't blame them for that or anything. I find it very unfortunate, because I personally love DevPod. It's just that I can't really full-heartedly recommend it to anyone today, because it's very much not maintained, which makes it sad in my mind. My number one recommendation in that space right now is GitHub Codespaces, if you can. GitHub Codespaces, I think, is awesome. And it uses the same dev container that you use with DevPod, so it's awesome.
>> In other words, you really like dev containers, with whichever wrapper comes on top.
>> I like standards. I don't care. I don't necessarily think that dev containers are the best standard in the world. I wish there were ways of spinning stuff up outside of the dev container; there's a bunch of stuff I wish were better about dev containers. However, I will always prefer a tool that follows a standard over something that doesn't. Now, if we're talking purely about the best technological piece, Okteto, I believe, is the best for development environments today, but it's hard for me to recommend anything that doesn't follow standards like dev containers. So: the Okteto YAML is nice, I love Ramiro, I love the team at Okteto, they're doing amazing stuff, but I like standards. I like OTel, I like OpenMetrics, I like standards, and the same thing goes for dev containers.
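For reference, the standard both DevPod and Codespaces share is a devcontainer.json in the repo; a minimal sketch, where the image, feature, and command are assumptions about one particular project setup:

```json
{
  "name": "my-service",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {}
  },
  "postCreateCommand": "npm install"
}
```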
>> Okay, we have three minutes left, so let's go fast. "Can you briefly explain the difference between Argo CD and Tekton? When would you use one or the other?" Tekton is a workflow tool, that's how I'm going to call it, like GitHub Actions or Argo Workflows and tools like that. What they do is sequential, I mean, it can be parallel, but eventually sequential execution of certain steps: do this, do that, do this, do that. Basically, you define what should be executed and when. Argo CD, or Flux, or GitOps tools in general, are asynchronous, pull-based mechanisms that watch a Git repo, compare something in Git or an OCI registry with what is in a cluster, and if there is a difference, they apply the difference. And you can define the whole CI/CD with tools like Tekton, without Argo CD. Or you can say: I define all that in Tekton, hey, I build, I run tests, I do this, I do that, and at one point, whichever tool I'm using, I push changes to this repository and I'm done. Then, at some moment in the future, Argo CD or Flux will detect those changes and synchronize, or what you would call deploy, them. So you can think of Tekton as a full CI/CD workflow, and you can think of Argo CD or Flux as one step in that workflow, the one in charge of synchronization, or deployment.
>> Mhm.
>> Even though it's not executed directly by Tekton, right?
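A sketch of that split: Tekton runs the workflow, and its last task only pushes to Git; Argo CD or Flux picks the change up afterwards. The task names here are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-promote
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image          # hypothetical Task that builds and pushes an image
    - name: test
      runAfter: [build]
      taskRef:
        name: run-tests            # hypothetical Task
    - name: update-gitops-repo     # the last step: commit the new image tag to Git
      runAfter: [test]
      taskRef:
        name: git-push             # hypothetical Task; from here, Argo CD or Flux
                                   # detects the change and synchronizes the cluster
```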
>> Anything to add, Scott?
>> No. When would you use one over the other? I would always use Tekton. I would never use Argo CD. [laughter] When would you use GitOps versus workflows? That's probably the right question. I will use GitOps for CD. I will never use Tekton for CD. I will use Tekton for CI, always, over Argo Workflows. But again: Tekton for CI, and CD is Argo CD or Flux CD, or...
>> You know, I hate you, Scott. I really dislike you, man. Because now we'd have to enter the discussion of what CD is, and we have no time.
>> Do you want to talk about continuous delivery or continuous deployment? I'm talking about delivery, which is actually delivering it to the cluster.
>> Okay, ask us next week, and we can have a long fight between me and Scott about CD. [laughter]
>> "Thank you for recommending Open Code." Cool. He asked me before KubeCon; that's why. Cool. "Any recommendations to integrate Backstage with GitOps when one service creation requires several PRs across different repos?"
This is your domain, Scott.
>> The solutions that I've built haven't been built that way, because I don't like approaches where the definition of a service is required to be spread across different repos; then I don't have a single source of truth, and it's hard for me to figure out what's going on. I prefer things to be self-contained units. But there is no reason a software template cannot have two steps that publish different files and different paths to different Git repos. I have seen that done many times. You can easily structure something to do it that way.
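A sketch of that two-step publish inside a Backstage software template; the repo URLs and source paths are hypothetical:

```yaml
# excerpt from a Backstage software template's steps
steps:
  - id: publish-app
    name: Publish application repo
    action: publish:github
    input:
      repoUrl: github.com?owner=my-org&repo=my-service        # hypothetical
      sourcePath: ./app
  - id: publish-gitops
    name: Publish GitOps manifests to a second repo
    action: publish:github
    input:
      repoUrl: github.com?owner=my-org&repo=gitops-manifests  # hypothetical
      sourcePath: ./deploy
```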
>> It's usually a smell of something wrong, as well.
>> Yeah. The second you need to separate into multiple repos, that typically means the structure of your GitOps repo is not magnificent.
>> Scott, I have a perfect comment now to close the session.
>> I know. I saw it. It's magnificent.
>> "Jenkins is better than Argo CD." Yes, yes, [laughter] yes. 100%. Completely agree. [laughter] I'm sorry, Octopus. Please keep sponsoring us. But I completely agree. [laughter]
>> Okay, folks. I see that more questions are coming in, but we are out of time. Come next week, ask earlier, and we'll talk about it. Thank you all for coming. See you next week. Cheers.
In this AMA livestream, Viktor and Scott dive into a wide range of DevOps and platform engineering topics submitted by viewers. The discussion covers practical guidance on Kubernetes networking, including recommendations to migrate from the soon-to-be-discontinued Ingress NGINX to Gateway API for improved flexibility and throughput management. They explore fleet management strategies for organizations running 40+ clusters, suggesting combinations of Flux, Crossplane, and dedicated fleet management tools. The hosts share insights on implementing Backstage as an internal developer portal, emphasizing the importance of auto-ingesting resources, using Tech Insights scorecards to gamify adoption, and adapting organizations to tools rather than the reverse. They also discuss MCP (Model Context Protocol) gateways, highlighting the need for better authentication and tool selection capabilities. Other topics include ephemeral development environments, the differences between Tekton and Argo CD, Cilium deployment best practices, and using Crossplane for on-premises infrastructure. Throughout the session, Viktor and Scott stress the importance of using AI for non-repeatable tasks while relying on automation for predictable workflows.

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Sponsor: Octopus
🔗 Enterprise Support for Argo: https://octopus.com/support/enterprise-argo-support
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ BlueSky: https://vfarcic.bsky.social
➡ LinkedIn: https://www.linkedin.com/in/viktorfarcic/

▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Intro (skip to first question)
06:52 Best way to compare policies using AI?
09:44 Different ingress controllers for streaming vs API calls?
12:31 Fleet management approach for 40+ clusters with Flux/Crossplane?
15:00 Tips for implementing Backstage for software inventory and governance?
18:55 Buffering concerns with multipart uploads on NGINX ingress
23:07 Good MCP gateway recommendations for multiple MCP servers?
26:36 Thoughts on Flux Uncontained?
27:47 Deploy Cilium via Argo CD or before cluster creation?
28:46 Is standardizing components a priority for building an IDP?
32:47 Is Crossplane worth using for on-prem infrastructure?
36:20 Any trends or ideas from KubeCon?
43:07 Public repo for Backstage Crossplane workshop?
44:14 Advice on mapping org structure to Backstage data model?
50:08 Best practices for adapting to tools vs adapting tools to you
55:37 Is Flox a viable container alternative?
58:05 Thoughts on ephemeral environments and DevPod for IDP?
1:02:24 Difference between Argo CD and Tekton - when to use each?
1:05:25 Integrating Backstage with GitOps across multiple repos?