Agentic AI systems that autonomously generate code and take actions based on user input introduce a critical security challenge: the resulting code must be treated as untrusted. The NVIDIA AI red team highlighted this systemic risk by identifying a Remote Code Execution (RCE) vulnerability (CVE-2024-12366) in an AI-driven analytics pipeline that converted natural-language queries into executable Python code.

Security teams often respond with sanitization, such as filtering or rewriting generated code before execution, but these defenses are inherently insufficient: determined attackers can craft inputs that exploit trusted library functions, manipulate runtime behavior, and evade static filters, as the first sketch below illustrates.

The NVIDIA team therefore concludes that **sandboxing** the execution environment is essential for containment. By isolating each code execution instance (see the second sketch below), sandboxing ensures that any malicious or unintended code path is contained, structurally enforcing execution safety and limiting the "blast radius" to a single session or user context.

Source: https://developer.nvidia.com/blog/how-code-execution-drives-key-risks-in-agentic-ai-systems/
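The weakness of filter-based sanitization is easy to demonstrate. The sketch below is hypothetical (the `BLOCKLIST` and `is_safe` names are illustrative, not from the post): a naive substring blocklist passes a payload that reconstructs its imports at runtime, the kind of static-filter evasion the red team describes.

```python
# Hypothetical naive static filter of the kind the post argues is insufficient.
BLOCKLIST = ("import os", "import subprocess", "eval(", "exec(")

def is_safe(code: str) -> bool:
    """Reject generated code containing any blocklisted substring."""
    return not any(pattern in code for pattern in BLOCKLIST)

# This payload never matches the blocklist, yet still reaches os.getenv
# by resolving the module dynamically via __import__ and getattr.
payload = 'mod = __import__("o" + "s")\nprint(getattr(mod, "getenv")("HOME"))'

assert is_safe(payload)  # the filter waves it through...
exec(payload)            # ...but it still performs an os-level call
```

The post mandates sandboxing without prescribing a particular tool. As one minimal sketch of per-session isolation, assuming Docker is available (the `run_sandboxed` helper, image choice, and resource limits are illustrative assumptions, not NVIDIA's recommendation), each execution can be confined to a throwaway container:

```python
import subprocess

def run_sandboxed(code: str, timeout: int = 10) -> str:
    """Run untrusted generated code in an ephemeral, locked-down container."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",   # container is destroyed after one run
            "--network=none",          # no outbound network access
            "--memory=256m",           # cap memory
            "--cpus=0.5",              # cap CPU
            "--read-only",             # immutable root filesystem
            "--user", "65534:65534",   # run as an unprivileged user
            "python:3.12-slim", "python", "-c", code,
        ],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

# Each call gets a fresh container, so even the evasive payload above
# would be contained to a single session's blast radius.
print(run_sandboxed("print(2 + 2)"))  # -> 4
```

Because the isolation here is structural (process and namespace boundaries) rather than semantic (pattern matching on code), it holds even for code paths no filter anticipated.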