How can we monitor, evaluate, and maintain control over increasingly capable AI systems? This breakout session from IASEAI '26 brings together researchers and practitioners working on the technical and governance challenges of monitoring, control, and safety in advanced AI systems, from theoretical limits to practical deployment frameworks. The session explores core questions around loss of control, agentic risk, evaluation protocols, and safety guarantees across both current and emerging AI systems.

Antoine Maier — Take Goodhart Seriously: Principled Limit on General-Purpose AI Optimization
Annika Hallensleben — Loss of Control: Degrees, Dynamics and Preparedness
Shaun Khoo — With Great Capabilities Come Great Responsibilities: Introducing the Agentic Risk & Capability Framework for Governing Agentic AI Systems
Himanshu Joshi — Governance and Security-by-Design: Embedding Safety and Alignment into Agentic AI Systems
Usman Anwar — Analyzing and Improving Chain-of-Thought Monitorability Through Information Theory
Charlie Griffin — Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols
Donggeon David Oh — Provably Optimal Reinforcement Learning under Safety Filtering
Vincent Conitzer — Shutdown Safety Valves for Advanced AI

📍 Recorded at UNESCO Headquarters, Paris

About IASEAI
The International Association for Safe and Ethical AI (IASEAI) is a global professional association bringing together researchers, policymakers, and practitioners working to ensure that advanced AI systems operate safely and ethically for the benefit of humanity.

Learn more: https://www.iaseai.org

#AI #AISafety #AIGovernance #AIAlignment #IASEAI #ArtificialIntelligence #ResponsibleAI