Loading video player...
Title: Shift-Left for LLMs: Securing the AI Model Supply Chain Speaker(s): Nagesh Rathod --- In today’s rapidly evolving AI landscape, Large Language Models (LLMs) are becoming more capable and efficient, yet this advancement introduces new security challenges that traditional SDLC and shift-left approaches do not address. As organizations rush to adopt LLMs, they often overlook critical risks such as model tampering, prompt-based attacks, data leakage, hallucinations, and unsecured inference pipelines. These gaps create an alarming and largely uncharted attack surface. Without proper processes and controls, both the model and sensitive data become vulnerable, making LLM security a critical need rather than an optional consideration. Addressing LLM security requires a holistic, end-to-end strategy rather than reliance on a single tool. The first step is securing the model itself through signing and verification using Sigstore and Cosign, ensuring integrity and provenance, followed by vulnerability scanning with NVIDIA Garak. Guardrails around model interactions—such as moderation filters, PII detection, hallucination checks, and pre/post prompt screening—help prevent unsafe prompts, malicious injections, and harmful model outputs. Beyond safeguarding the model, securing inference traffic is equally important. Envoy can serve as the controlled API gateway to enforce authentication, rate-limiting, and protection against external threats, while Istio adds a zero-trust layer within the cluster through secure service-to-service communication and enhanced observability(service mesh istio). Completing the security posture, LLM red teaming introduces structured adversarial testing with attack corpora including prompt injections, jailbreak attempts, and data-exfiltration prompts, which can be continuously executed as regression tests to ensure ongoing robustness. Attendees will gain practical, comprehensive knowledge of how to secure LLM systems in real-world production environments. They will learn about the unique risks introduced by modern LLMs, how to build a secure LLM supply chain, implement effective guardrails, protect API and cluster-level communication, and incorporate red teaming techniques tailored for LLMs. By exploring the processes, tools, and best practices essential for production-grade LLM security, attendees will leave with a clear roadmap for deploying and operating LLMs safely, reliably, and at scale. --- Full schedule, including slides and other resources: https://pretalx.devconf.info/devconf-in-2026/schedule/