Most teams review application code carefully, then let *Helm*, *YAML*, and AI-generated infrastructure changes slip through with far less scrutiny. That gap shows up later in production, when configuration, testing, and governance are all mixed. In this interview, *Andrew Block* explains why Kubernetes teams need better boundaries between code and configuration, stronger review practices for AI-assisted changes, and a clear *human in the loop* before anything reaches production.

In this interview:

- Why *Helm* and *YAML* changes often avoid the same review standards as application code
- What good governance looks like when AI writes more of your Kubernetes and delivery logic
- How to evaluate production-bound AI recommendations with testing, variance, and least-privilege thinking

Find all the links and info for this interview here: https://ku.bz/-q_FYPGj8

CHAPTERS
=========

00:00 Introduction
00:11 Where Quality Breaks Down
00:40 Why YAML Gets Less Scrutiny
01:28 AI Governance For Kubernetes Code
02:04 Where AI Review Fails
02:44 Trusting Production AI Changes
03:09 The Next Era Of Tooling
03:51 Building Safer AI
04:15 Zero Trust For AI
04:56 Where To Follow Andy