One of the most important recent developments in artificial intelligence (AI) has been the emergence of agentic AI, or AI agents: systems that can act autonomously to plan and carry out tasks. While these systems present many of the same risks as other advanced AI systems, their ability to operate independently introduces new challenges that demand tailored governance and risk-management approaches.

Recorded on February 11, 2026, this video features an online panel presented by the AI Security Initiative (AISI) at the UC Berkeley Center for Long-Term Cybersecurity (CLTC). The panel centers on AISI's "Agentic AI Risk Management Standards Profile" (Agentic AI Profile), a report that examines the unique risks posed by agentic AI and introduces effective approaches for assessing, managing, and mitigating those risks. The panelists explored how agentic AI risk management differs from general-purpose AI risk management, and what it will take to develop and deploy agentic AI systems safely and securely.

The webinar featured a presentation by Deepika Raman, a Non-Resident Research Fellow at CLTC, followed by a discussion moderated by Nada Madkour, also a Non-Resident Research Fellow at CLTC. The panelists included:

- Alan Chan: Research Fellow at the Center for the Governance of AI (GovAI)
- Dr. Marta Bienkiewicz: Policy and Partnerships Manager at the Cooperative AI Foundation
- Benjamin Larsen: Initiatives Lead, AI Systems and Safety, World Economic Forum
- Krystal Jackson: Non-Resident Research Fellow, UC Berkeley CLTC

Learn more about the Agentic AI Profile at https://cltc.berkeley.edu/publication/agentic-ai-risk-profile/.