In this video, I'm going to show you the latest features from Azure AI Foundry, leveraging OpenTelemetry semantic conventions to make multi-agent observability easy. As agents and multi-agent systems gain popularity, they pose new risks. On quality and performance, you want your agents to reply with accurate and coherent answers that are grounded in truth. On the security and safety side, you want your agents to avoid harmful content and sensitive information, and to be free of attacks like prompt injection. That is why observability is key to agent development.
In the pre-production phase, developers need to iterate quickly on code and prompts. Traces and metrics allow them to easily debug and understand their agents. After agents are deployed, developers want to monitor their agents for cost management, live-site issues, or new security risks. Across the agent development cycle, you want to run evaluations continuously to avoid behavior drift and ensure safety.
The OpenTelemetry generative AI semantic conventions are designed to capture AI-related traces and events in a framework- and platform-agnostic way. They are the open-source standard for generative AI application instrumentation.
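As a concrete illustration, here is a minimal, dependency-free sketch of what the conventions standardize: a chat call is described by a span carrying well-known `gen_ai.*` attribute names. The span is modeled as a plain dict and the values are example placeholders; real instrumentation would emit an actual OpenTelemetry span with these attributes.

```python
import time

def record_chat_span(model: str, input_tokens: int, output_tokens: int, call):
    """Run `call` and describe it with GenAI-convention attribute names.

    The span is a plain dict to keep this sketch dependency-free; real
    instrumentation would create an OpenTelemetry span instead.
    """
    start = time.monotonic()
    result = call()  # the actual LLM invocation
    span = {
        "name": f"chat {model}",
        "attributes": {
            "gen_ai.operation.name": "chat",              # kind of operation
            "gen_ai.request.model": model,                # model requested
            "gen_ai.usage.input_tokens": input_tokens,    # prompt tokens
            "gen_ai.usage.output_tokens": output_tokens,  # completion tokens
        },
        "duration_ms": (time.monotonic() - start) * 1000.0,
    }
    return result, span
```

Because every framework emits the same attribute names, any backend that understands the conventions can render latency and token consumption without framework-specific glue.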
Azure AI Foundry works closely with the OpenTelemetry community to expand the generative AI conventions to cover even broader scenarios, including multi-agent systems. Because Azure AI Foundry is 100% compliant with the generative AI conventions, developers can debug, trace, and monitor their agents on Azure AI Foundry, regardless of how their agents are built. Now let's
take a look at how tracing works in action with the brand-new Microsoft Agent Framework. With the new Microsoft Agent Framework, you can enable tracing and start sending traces to your Foundry project with just a few lines of code.
The Azure AI Foundry portal renders generative AI traces beautifully in tree views. You can see spans like agent invocations, LLM calls, or tool executions, with rich information including latency and token consumption. We have expanded the agent invocation schema to include detailed tool information. With this additional information, evaluation can run purely on an agent's invocation trace. The evaluation results are attached to the original trace as events.
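To make that idea concrete, here is a small, dependency-free sketch of attaching an evaluation result to a recorded span as an event. The span is again modeled as a plain dict, and the event name and attribute keys are illustrative, not the official Foundry schema.

```python
def attach_evaluation(span: dict, evaluator: str, score: float, passed: bool) -> dict:
    """Append an evaluation result to a span's event list.

    The event name and attribute keys below are illustrative; a real
    integration would follow the backend's documented event schema.
    """
    span.setdefault("events", []).append({
        "name": "evaluation.result",            # illustrative event name
        "attributes": {
            "evaluation.evaluator": evaluator,  # which evaluator ran
            "evaluation.score": score,          # numeric score
            "evaluation.passed": passed,        # pass/fail verdict
        },
    })
    return span

# Example: attach a groundedness score to an agent-invocation span.
span = {"name": "invoke_agent my-agent", "attributes": {}}
span = attach_evaluation(span, "groundedness", 4.5, True)
```

Keeping the result on the same trace means a single query surfaces both what the agent did and how well it did it.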
Having the trace and evaluation results together dramatically helps with debuggability and monitoring.
Knowing developers want the flexibility of using their preferred agent framework, the Azure AI Foundry team has created instrumentation packages to help popular agent frameworks emit traces compliant with the generative AI semantic conventions. At launch, we'll support Semantic Kernel, LangChain, LangGraph, and the OpenAI Agents SDK.
We truly believe the new tools and features we are announcing will help developers be more efficient, productive, and insightful in agent development.
Microsoft is enhancing multi-agent observability by introducing new semantic conventions to OpenTelemetry, developed collaboratively with Outshift, Cisco’s incubation engine. Azure AI Foundry now delivers observability for agents built with different frameworks, including LangChain, LangGraph, and OpenAI Agents SDK. Learn more in this blog: https://msft.it/6059sSg7R #Microsoft #MicrosoftAzure #OpenTelemetry