
Engageware: The Future of Agentic AI in Financial Services | Money20/20 USA
Financial IT
Large Language Model (LLM) fine-tuning and AI Agent development are undergoing a critical transformation as researchers seek to maximize efficiency, performance, and scalability. This video breaks down the cutting-edge methodologies and protocols driving the move away from traditional methods like LoRA.

LISA & GRAPE: The New Efficiency Kings

The sources reveal that Low-Rank Adaptation (LoRA), despite its parameter efficiency, creates an "illusion of equivalence" with full fine-tuning, introducing structural flaws known as "intruder dimensions" and suffering from catastrophic forgetting. New solutions address this head-on: https://arxiv.org/pdf/2510.24702

1. LISA (Layerwise Importance Sampled AdamW): This training strategy surpasses LoRA by 10%–35% on benchmarks like MT-Bench. LISA achieves memory consumption comparable to LoRA while offering competitive or superior performance to full-parameter training (e.g., LLaMA-2-7B trains in only 26GB of memory, versus 59GB for full fine-tuning).

2. GRAPE (Distribution-Aligned SFT): This simple, scalable data-selection methodology boosts Supervised Fine-Tuning (SFT) performance by selecting, for each prompt, the response that aligns most closely with the base model's pre-trained distribution (i.e., has the lowest perplexity). GRAPE achieved an absolute performance gain of up to 17.3% over baselines trained on datasets 3 times larger, and generally outperforms training on datasets 4.5 times larger by up to 6.1%.

The Agent Network Revolution (A2A, MCP, ADP)

For autonomous AI systems to scale, new architectural standards are needed to break data silos and lower collaboration costs.

• ADP (Agent Data Protocol): ADP is a lightweight representation language serving as an "interlingua" that unifies fragmented agent training datasets.
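LISA's core idea, randomly unfreezing a small subset of full layers at each optimizer step instead of attaching low-rank adapters everywhere, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: sampling here is uniform (the method weights layers by importance), and `layers_per_step` is a hypothetical hyperparameter name.

```python
import random

def lisa_layer_schedule(num_layers, layers_per_step, steps, seed=0):
    """For each training step, sample which transformer layers to unfreeze.

    Only the sampled layers receive AdamW updates that step; the rest stay
    frozen, which keeps optimizer-state memory near LoRA levels while still
    training full-rank weights over the course of training.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(steps):
        active = sorted(rng.sample(range(num_layers), layers_per_step))
        schedule.append(active)
    return schedule

# Example: a 32-layer model (LLaMA-2-7B depth), unfreezing 2 layers per step.
sched = lisa_layer_schedule(num_layers=32, layers_per_step=2, steps=4)
```

In a real training loop, each step would set `requires_grad=True` only on the sampled layers before calling the optimizer.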
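GRAPE's selection rule, keeping whichever candidate response the base model finds least surprising, reduces to a per-prompt perplexity argmin. A minimal sketch, with a toy scorer standing in for real base-model log-probabilities (in practice `log_prob_fn` would sum token log-probs from the pretrained LM):

```python
import math

def grape_select(candidates, log_prob_fn):
    """For each prompt, keep the candidate response to which the base model
    assigns the lowest perplexity (highest mean token log-probability), so
    the SFT data aligns with the pretrained distribution."""
    selected = {}
    for prompt, responses in candidates.items():
        def perplexity(resp):
            n_tokens = max(len(resp.split()), 1)
            return math.exp(-log_prob_fn(prompt, resp) / n_tokens)
        selected[prompt] = min(responses, key=perplexity)
    return selected

# Toy stand-in scorer: pretends the base model assigns a better per-token
# log-probability to answers mentioning "capital". Illustrative only.
def toy_log_prob(prompt, resp):
    per_token = -0.5 if "capital" in resp.lower() else -2.0
    return per_token * len(resp.split())

data = {"What is the capital of France?": [
    "Paris is the capital of France.",
    "I think maybe Paris? Not sure, could also be Lyon honestly.",
]}
picked = grape_select(data, toy_log_prob)
```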
This standardization reduces the engineering effort required to integrate D datasets with A agent frameworks from quadratic (O(D × A)) to linear (O(D + A)), enabling scalable SFT that delivers an average ~20% performance gain over the corresponding base models. A separate protocol sharing the acronym, the Agent Description Protocol, sits within the Application Protocol Layer of the Agent Network Protocol (ANP) and acts as an agent's digital business card.

• A2A vs. MCP (Complementary Protocols): These are not competitors; they work together. MCP (Model Context Protocol) acts as a "universal USB port" for Agent-to-Tool integration, connecting agents to external services and APIs in a standardized way. A2A (Agent-to-Agent), in contrast, enables direct communication and collaboration between autonomous AI agents.
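The quadratic-to-linear reduction falls out of the interlingua pattern: each of D datasets gets one importer into a shared record format, and each of A agent frameworks gets one exporter out of it, so D + A adapters replace D × A pairwise converters. A sketch of that pattern; the field names below are illustrative, not the actual ADP schema:

```python
from dataclasses import dataclass, field

@dataclass
class ADPRecord:
    """Hypothetical unified agent-training record (the interlingua)."""
    task: str
    steps: list = field(default_factory=list)

def from_dataset_x(raw):
    """One importer per dataset (D of these total)."""
    return ADPRecord(task=raw["instruction"], steps=raw["trajectory"])

def to_framework_y(rec):
    """One exporter per agent framework (A of these total)."""
    return {"goal": rec.task, "turns": rec.steps}

# Any dataset now reaches any framework through the shared record.
raw = {"instruction": "book a flight",
       "trajectory": [{"role": "agent", "action": "search"}]}
out = to_framework_y(from_dataset_x(raw))
```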
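MCP's "universal USB port" role is concrete at the wire level: tool invocations travel as JSON-RPC 2.0 messages. The sketch below builds an MCP-style `tools/call` request; the tool name and arguments are hypothetical, and a real client would send this over an MCP transport (stdio or HTTP) rather than just serializing it.

```python
import json

# An MCP tool-call request: JSON-RPC 2.0 envelope, "tools/call" method,
# with the tool's name and arguments in params. Values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_account_balance",          # hypothetical tool
        "arguments": {"account_id": "12345"},   # hypothetical arguments
    },
}
wire = json.dumps(request)
```

Because every tool speaks this same envelope, an agent needs one MCP client rather than a bespoke integration per external service, which is exactly the standardization the paragraph above describes.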
