MCP Security 101: Protecting Large Language Model (LLM) Integrations in the Real World

Are you building AI integrations using LLMs? Thinking of enabling tools like Claude, ChatGPT, or Gemini to trigger real actions via API? Then you’re already working with MCP (Model Context Protocol), and this is the security session you can’t afford to miss.

In this live session, security researcher and educator Corey Ball walks through:

✅ What MCP really is — and why it’s like “USB-C for LLMs”
✅ How to build and vibe-code your own MCP server (yes, even if you’re not a backend dev!)
✅ How MCP can be exploited via directory traversal, prompt injection, and tool poisoning
✅ The Top 5 security risks facing MCP adopters
✅ Real-world examples of MCP supply chain attacks — and how to defend against them
✅ Why AI security = API security (and then some)
✅ How to put the “S” into MCP

Whether you’re on the AppSec team, a DevSecOps architect, or a developer building AI-powered apps, this is must-know knowledge if you’re planning to integrate LLMs with third-party tools or internal systems.

⸻

Topics Covered:
• Model Context Protocol Explained
• MCP Security Fundamentals
• Prompt Injection Attacks in AI
• Tool Confusion & Misrouting
• API vs AI Security
• Supply Chain Risk in AI Tool Registries
• Security Guardrails for LLM Integrations
• Vibe Coding + Generative AI for Infra

Based on the free MCP Security Fundamentals course by APIsec University:
👉 https://www.apisecuniversity.com/courses/mcp-security
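To make the directory-traversal risk mentioned in the session concrete, here is a minimal sketch of a hypothetical MCP-style file-reading tool handler. All names (`BASE_DIR`, `read_file_unsafe`, `read_file_safe`) are illustrative, not from any real MCP server; the point is that an LLM-supplied path must be resolved and confined before use.

```python
# Hypothetical MCP-style file tool: the path argument arrives from the LLM,
# so it must be treated as untrusted input.
from pathlib import Path

BASE_DIR = Path("/srv/mcp-data").resolve()  # assumed allowed root directory

def read_file_unsafe(relative_path: str) -> str:
    # Vulnerable: a path like "../../etc/passwd" escapes BASE_DIR.
    return (BASE_DIR / relative_path).read_text()

def read_file_safe(relative_path: str) -> str:
    # Resolve the final absolute path, then confirm it is still inside BASE_DIR.
    target = (BASE_DIR / relative_path).resolve()
    if not target.is_relative_to(BASE_DIR):
        raise PermissionError(f"path escapes allowed directory: {relative_path}")
    return target.read_text()
```

With this check, a prompt-injected call such as `read_file_safe("../../etc/passwd")` fails with `PermissionError` instead of leaking a file outside the sandbox.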