Session 1: Preventing Rogue AI Agents

This session explores how enterprises can securely deploy AI agents using:

- Runtime guardrails
- Input/output validation
- AI isolation strategies
- Least privilege enforcement
- Human approval workflows
- Sentinel architectures for supervising AI systems

Topics covered:

✅ Prompt injection attacks
✅ AI hallucinations
✅ Runtime policy enforcement
✅ AI “kill switches”
✅ Agent alignment & mission control
✅ Security gateways for LLM systems
✅ Secure AI deployment patterns

Key insight: AI agents are powerful, but they cannot yet be trusted without supervision.

Session 2: Microsoft Secure Future Initiative (SFI)

The second session covers Microsoft’s Secure Future Initiative and the engineering principles used to build secure-by-design systems at scale.

Topics include:

✅ Secure by Design
✅ Secure by Default
✅ Secure Operations
✅ Zero Trust principles
✅ Identity protection
✅ Secret management
✅ CI/CD security
✅ Threat monitoring & detection
✅ Microsoft Sentinel & Defender
✅ Security governance best practices

The session also discusses:

- MFA adoption
- Supply chain attacks
- Passwordless authentication
- Tenant isolation
- Least privilege access
- Security telemetry
- Shift-left security practices
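To make the Session 1 guardrail ideas concrete, here is a minimal sketch of a security gateway that mediates an AI agent: it validates input against prompt-injection patterns, requires human approval for privileged actions (least privilege), and supports a kill switch. All names and patterns here are invented for illustration; they do not come from any specific product or framework.

```python
import re

# Naive prompt-injection signatures (illustrative only; real systems
# use far more robust detection than regex matching).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]

# Actions that must never run without a human in the loop.
PRIVILEGED_ACTIONS = {"delete_records", "send_payment"}

class AgentGateway:
    """Mediates every request to the agent (the security-gateway pattern)."""

    def __init__(self, agent, approver):
        self.agent = agent        # callable: prompt -> action name
        self.approver = approver  # callable: action -> bool (human decision)
        self.killed = False       # runtime kill switch

    def kill(self):
        self.killed = True

    def handle(self, prompt):
        if self.killed:
            return "rejected: agent disabled by kill switch"
        # Input validation: block likely prompt-injection payloads
        # before the agent ever sees them.
        if any(p.search(prompt) for p in INJECTION_PATTERNS):
            return "rejected: possible prompt injection"
        action = self.agent(prompt)
        # Human approval workflow for sensitive actions.
        if action in PRIVILEGED_ACTIONS and not self.approver(action):
            return f"rejected: {action} not approved"
        return f"executed: {action}"
```

The key design point is that the agent never acts directly: every request and every resulting action passes through the gateway, so policy enforcement, approval, and shutdown remain outside the model's control.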