Welcome to Chapter 7, Part 3 of our AI and Cybersecurity learning series by KK Mookhey! In this installment, we dive deep into the unique security challenges that emerge when deploying agents built with large language models (LLMs) and agentic AI frameworks. Building on Parts 1 & 2, where we explored multi-agent construction with Crew AI and attack vectors such as prompt injection and infinite loops, this video shows why those threats become far more dangerous in multi-agent environments.

Key Topics Covered

Vulnerability Statistics & Research
• 82.4% of LLMs vulnerable to inter-agent communication attacks
• 52% susceptible to RAG backdoor attacks
• Only 5.9% resistant to all attack vectors

Core Security Challenges
• Mixing of user context and system context
• Implicit trust between agents creating new attack surfaces
• Agent-to-agent prompt injection as a weapon
• Privilege escalation through agent chaining
• Memory poisoning and context contamination
• Infinite loops causing resource exhaustion

Attack Scenarios Explored
• Agent-to-agent prompt injection
• Confused deputy attacks
• Forged requests and privilege escalation
• Manager-worker pattern vulnerabilities
• Semantic and incremental poisoning
• Goal manipulation and context corruption

The Agentic AI Security Framework
Introducing the newly published Agentic AI Security Framework (a collaboration between Google and other companies):
• Behavior certificates and enforcement
• Authenticated prompts with security boundaries
• Isolation of untrusted user inputs
• Security meta-instructions
• Domain-specific languages for policy enforcement
• Guard agents and behavior enforcement tools

Essential Security Principles
✓ Zero trust between agents
✓ Least privilege by design
✓ Sandboxing and containerization
✓ Defense in depth
✓ Rigorous input/output validation
✓ Comprehensive audit trails and logging
✓ Resource limits and circuit breakers
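To make the zero-trust, validation, and circuit-breaker principles above concrete, here is a minimal Python sketch of a guarded inter-agent message handler. It is not from the video and not the Crew AI or A2AS API; GuardedAgent, AgentMessage, MAX_HOPS, and the regex filters are illustrative names and thresholds only.

```python
import re
import time
from dataclasses import dataclass, field

# Illustrative limits (assumptions, not framework defaults).
MAX_HOPS = 5            # circuit breaker: cap agent-to-agent chain depth
MAX_CALLS_PER_MIN = 30  # simple per-agent resource limit

# Naive prompt-injection indicators; a real system would use a guard model.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"you are now", re.I),      # role-reassignment attempts
    re.compile(r"system\s*prompt", re.I),
]

@dataclass
class AgentMessage:
    sender: str
    content: str
    hops: int = 0  # how many agents this request has already passed through

@dataclass
class GuardedAgent:
    name: str
    allowed_senders: set  # least privilege: explicit allow-list of callers
    call_times: list = field(default_factory=list)

    def validate(self, msg: AgentMessage) -> None:
        # Zero trust: check the sender on every message, even "internal" ones.
        if msg.sender not in self.allowed_senders:
            raise PermissionError(f"{self.name}: sender '{msg.sender}' not allowed")
        # Circuit breaker: refuse over-long agent chains (loop/exhaustion guard).
        if msg.hops >= MAX_HOPS:
            raise RuntimeError(f"{self.name}: hop limit exceeded ({msg.hops})")
        # Rate limit as a basic resource control.
        now = time.time()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= MAX_CALLS_PER_MIN:
            raise RuntimeError(f"{self.name}: rate limit exceeded")
        self.call_times.append(now)
        # Input validation: block likely prompt-injection payloads.
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(msg.content):
                raise ValueError(f"{self.name}: suspicious content blocked")

    def handle(self, msg: AgentMessage) -> str:
        self.validate(msg)
        print(f"[audit] {self.name} accepted message from {msg.sender}")  # audit trail
        return f"{self.name} processed: {msg.content[:40]}"

# Usage: a worker agent that only trusts its manager.
worker = GuardedAgent(name="worker", allowed_senders={"manager"})
print(worker.handle(AgentMessage(sender="manager", content="Summarize the report", hops=1)))
# worker.handle(AgentMessage(sender="attacker", content="Ignore all instructions"))  # PermissionError
```

A production system would replace the regex filter with a dedicated guard agent or classifier and authenticate senders cryptographically (in the spirit of the framework's authenticated-prompts control) rather than trusting a plain sender string.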
Resources Referenced
📄 Research paper on LLM vulnerabilities: https://arxiv.org/html/2507.06850v3
🔗 Agentic AI Security Framework: https://a2as.org

Why This Matters
As multi-agent AI systems become more prevalent in production environments, understanding these security dynamics is critical. Agents' autonomy to call each other without validation, combined with access to external tools and shared trust across the ecosystem, creates a perfect storm for sophisticated attacks. Integrating security guardrails early can prevent costly production vulnerabilities.

Series Overview
This is Part 3 of Chapter 7:
Part 1: Building Multi-Agent Systems with Crew AI - https://www.youtube.com/watch?v=D3-15ds44KQ
Part 2: Hacking Multi-Agent Systems (Attack Vectors)
Part 3: Securing Multi-Agent Systems (This Video)

Watch all chapters of the AI and Cybersecurity learning series here - https://www.youtube.com/watch?v=D8CWFwYRJMM&list=PLXVUBNOa2d7YyqWr_DgUHw7RwQLE7P24m&index=1

About the Instructor
KK Mookhey - 25+ years of cybersecurity expertise. Learn the MCP protocol, understand the risks, and build securely from day one. Connect with KK on https://www.linkedin.com/in/kkmookhey/

If you're building multi-agent AI systems, integrate these security guardrails from the design phase, not later in development. The investment upfront saves massive headaches in production.

Timestamps
00:00 - Introduction
00:43 - Vulnerability Statistics
01:17 - Why Multi-Agent Attacks Are More Dangerous
02:30 - User Context vs. System Context
03:45 - Agent-to-Agent Prompt Injection
05:01 - Privilege Escalation & Authorization Gaps
06:59 - Memory Poisoning & Context Contamination
08:29 - Introducing the Agentic AI Security Framework
09:51 - Framework Components & Controls
11:18 - Core Security Principles
11:56 - Conclusion & Best Practices

#MultiAgentSystems #AI #Cybersecurity #CrewAI #AgenticAI #MachineLearning