
Engageware: The Future of Agentic AI in Financial Services | Money20/20 USA
Financial IT
We’re transitioning from AI providing information to AI actually doing work. Discover what makes a system truly "agentic": it must plan tasks, take action, and observe outcomes to determine whether the work is complete. Learn through real examples like deep research agents, and understand critical risks including hallucination propagation, failure to stop, incomplete reasoning, and tool misuse.

Key concepts covered:
- The three requirements for agentic AI: plan, act, observe
- Why simple tool calling isn’t true agency
- Real example: ChatGPT’s deep research with 61 web searches
- Critical risks: hallucination loops, infinite purchasing, medical triage failures
- When to use agents (low-stakes, repetitive tasks) vs. when to avoid them (high-stakes decisions)
- Balancing automation benefits with responsible oversight

Other videos in this series: This is Key 5 of 8, synthesizing concepts from Keys 1-4. Next, explore Key 6 on privacy considerations with open-source models, or continue through the complete series, including the Eisenhower Framework for AI decision-making.

Who this is for: Business leaders, developers, and professionals evaluating agentic AI tools or building autonomous systems. Essential for understanding both capabilities and limitations.

#AgenticAI #AIAgents #DeepResearch #AIRisKs #ResponsibleAI