And it's also touching on something that,
honestly, you know, being somebody who's
out in the field interacting with our
customers pretty much every day, it
touches on a real concern that
SREs especially have. When you bring AI
into the conversation,
oftentimes you get the question of, is
this thing designed to replace me,
right?
>> There's a certain level of trust that
has to be built into an AI system. And
>> one of the things I think we've
really done well, in my opinion from
more of an ethical
standpoint, is, as Jacob said, we're very
adamant about keeping the human in the
loop. These are not systems that are
designed to replace a human being.
They're designed to amplify their
ability to solve problems.
>> But when you're trying to engage an
engineering team, when you're trying to
get them on board with your philosophy
behind observability, that's really the
key. They want to know that this thing
is not going to put them out of a job.
And for us, that's a big reason why I'm
very proud of the fact that with anything
we do with AI, like if you hit the
"Investigate for me" button in our
agentic systems, it doesn't just tell
you, "Here's the problem." It says, "Here's
the problem, and here is exactly how I
came to this conclusion." It's about
transparency.
>> Yes.
>> And by creating that transparency, we
have a much easier time coming to market
with our customers and building trust
because they see the reasoning, they see
the factuality, they see that it's not
inherently designed to put them out of a
job. Because ultimately, at the end
of the day, if the system is down, if
the application is not responding, it
doesn't matter whether you or the Instana AI
agent made the change; you're the one
who gets blamed for it. You're the one
on the hook, responsible for the
health and well-being of the system.
You can't just blame it on AI and
throw up your hands.
>> At the end of the day, it doesn't matter
what automation tool you're using, there
is still a human being that will be held
accountable.
Drew Flowers and I discussed how Instana builds trust with SREs. A big concern is whether AI will replace them. Instana is adamant about keeping humans in the loop. Their systems amplify problem-solving abilities, not eliminate jobs. Instana's AI doesn't just identify problems; it explains its reasoning. This transparency builds trust because users see the factuality and know it's not designed to displace them. Ultimately, the human is accountable for system health, regardless of the automation tool used. #Kubecon #Automation #Observability #IBMPartner