In this three-part series, we're showing you how to host your own agents on Microsoft Foundry. In this third session, we'll ensure that our AI agents produce high-quality outputs and operate safely and responsibly.

First, we'll explore what it means for agent outputs to be high quality, using built-in evaluators to check overall task adherence and then building custom evaluators for domain-specific checks. With Foundry-hosted agents, we can run bulk evaluations on demand, set up scheduled evaluations, and even enable continuous evaluation on a subset of live agent traces.

Next, we'll discuss safety systems that can be layered on top of agents, and we'll audit agents for potential safety risks. To improve compliance with an organization's goals, we can configure custom policies and guardrails that can be shared across agents.

Finally, we can check that adversarial inputs don't produce unsafe outputs by running automated red-teaming scans on agents, and even schedule those scans to run regularly as well.

With all of these evaluation and compliance features available in Foundry, you can have more confidence hosting your agents in production. All code samples will be open source and ready for easy deployment to your own Microsoft Foundry using the Azure Developer CLI. After the stream, join office hours in the [Microsoft Foundry Discord](http://aka.ms/aipython/oh) to ask follow-up questions.

Recommended resources:

- If you are brand new to generative AI with Python, start with our 9-part [Python + AI series](https://techcommunity.microsoft.com/blog/educatordeveloperblog/level-up-your-python--ai-skills-with-our-complete-series/4464546), which covers LLMs, embedding models, RAG, tool calling, MCP, and more.
- This event is part of a series; learn more here: https://aka.ms/AgentsOnFoundry/series

#microsoftreactor #learnconnectbuild [eventID:26934]
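To give a feel for the "custom evaluators for domain-specific checks" mentioned above, here is a minimal sketch. It assumes the convention used by the Azure AI Evaluation SDK, where a custom evaluator can be any Python callable that accepts the fields it needs (such as the agent's response) and returns a dict of metrics. The `FriendlinessEvaluator` class, its banned-word list, and the metric names are all hypothetical illustrations, not part of any Foundry API; the sketch itself is plain Python with no SDK dependency.

```python
import re


class FriendlinessEvaluator:
    """Hypothetical domain-specific check, written in the custom-evaluator
    style (a callable returning a dict of metrics). Flags responses that
    contain words from an illustrative banned list."""

    BANNED = re.compile(r"\b(stupid|useless|worthless)\b", re.IGNORECASE)

    def __call__(self, *, response: str) -> dict:
        # Score 1.0 when no unfriendly words appear, 0.0 otherwise,
        # and return a short reason alongside the numeric metric.
        unfriendly = self.BANNED.search(response) is not None
        return {
            "friendliness": 0.0 if unfriendly else 1.0,
            "friendliness_reason": (
                "contains unfriendly wording" if unfriendly else "no issues found"
            ),
        }


evaluator = FriendlinessEvaluator()
print(evaluator(response="Happy to help with that!"))
# → {'friendliness': 1.0, 'friendliness_reason': 'no issues found'}
```

Because the evaluator is just a callable, the same object can score a single response during development or be handed to a bulk-evaluation run over a dataset of agent traces, which is what makes on-demand and scheduled evaluations possible.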