Loading video player...
Clawdbot (now Moltbot) is the hottest AI project right now - 44K GitHub stars, everyone building their own "Jarvis." But as a security engineer, I see something different: a Remote Code Execution vulnerability that passes the Turing Test. In this video, I break down Indirect Prompt Injection—the #1 vulnerability on the OWASP LLM Top 10—and show exactly how an attacker could compromise your "local" AI assistant. ⏱️ TIMESTAMPS: 0:00 - The Hook 0:35 - What is Clawdbot/Moltbot? 1:35 - The Attack (Animated) 3:05 - The Intern Analogy 3:35 - Kill Chain Breakdown 4:35 - The Payload 5:35 - Why This is Hard to Fix 6:20 - How to Protect Yourself 6:50 - Final Thoughts 🔗 RESOURCES: - Moltbot (formerly Clawdbot): https://github.com/moltbot/moltbot - OWASP LLM Top 10: https://owasp.org/www-project-top-10-for-large-language-model-applications/ #AI #Security #Clawdbot #Moltbot #PromptInjection #CyberSecurity #LLM Cloudbot, a popular open-source assistant, is gaining traction, but this video raises serious concerns. It highlights that this local AI tool, despite its popularity, functions as a remote command execution vulnerability, posing a significant risk to cybersecurity. Learn about this critical information security issue and why users should be wary of granting it access to their systems.