What If Your AI Agent Is the Attacker? | Securing Web3 AI Toolchains

The next multi-million dollar DeFi hack won't need a bug in your smart contract. It just needs one injected prompt. As the industry shifts toward autonomous, transaction-signing AI agents, the attack surface has fundamentally changed.

Join Stephen Ajayi, Lead Offensive Security Engineer at Hacken, for a deep-dive technical session on the Agentic Kill Chain. We demonstrate how prompt injection hijacks toolchains to drain wallets, and the defensive architecture required to stop it.

Key Insights:
- The Full Kill Chain: How a malicious instruction moves from Inject → Hijack → Abuse → Broadcast.
- Web3 Attack Surfaces: Why token metadata, governance proposals, and on-chain logs are the new malware carriers.
- 6 Attack Patterns: A detailed look at RAG poisoning, cross-agent hijacking, and payload splitting.
- 7 Layers of Defense: Implementing a Transaction Policy Gateway that turns a compromise into a blocked attempt.
- Case Studies: Technical post-mortems of the Drift Protocol ($285M) and Resolv Protocol ($23M) incidents.

About the Speaker:
Stephen Ajayi is the Lead Offensive Security Engineer at Hacken, specializing in the intersection of LLM vulnerabilities and decentralized infrastructure.

About Hacken:
Hacken is a global leader in blockchain security, trusted by 1,500+ partners including the European Commission, MetaMask, and the Ethereum Foundation. We deliver provable assurance through AI-powered offensive security and real-time monitoring.

Website: https://hacken.io/
X (Twitter): https://x.com/hackenclub

#Web3Security #AI #DeFi #PromptInjection #BlockchainSecurity #Hacken #SmartContractAudit #LLMSecurity #CyberSecurity2026
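The Transaction Policy Gateway idea from the talk can be sketched as a policy check sitting between the agent and the signing key, so a hijacked toolchain produces a blocked attempt instead of a broadcast transaction. This is a minimal illustrative sketch only; the class and field names (PolicyGateway, Transaction, allowlist, max_value_eth) are assumptions, not Hacken's actual implementation.

```python
# Hypothetical sketch of a Transaction Policy Gateway.
# Assumption: the agent can only request signatures through check(),
# never touch the private key directly.
from dataclasses import dataclass


@dataclass
class Transaction:
    to: str           # destination address
    value_eth: float  # amount to transfer


class PolicyGateway:
    def __init__(self, allowlist, max_value_eth):
        # Only pre-approved destinations, compared case-insensitively.
        self.allowlist = {addr.lower() for addr in allowlist}
        self.max_value_eth = max_value_eth

    def check(self, tx: Transaction) -> bool:
        """Return True only if every policy rule passes."""
        if tx.to.lower() not in self.allowlist:
            return False  # unknown destination: likely a hijacked tool call
        if tx.value_eth > self.max_value_eth:
            return False  # exceeds the per-transaction spend limit
        return True
```

Even if an injected prompt convinces the agent to build a drain transaction, the gateway rejects it because the attacker's address is not on the allowlist; this is the "compromise becomes a blocked attempt" property described above.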