In this video, we do a full deep dive into fine-tuning in Microsoft Foundry (Azure AI Foundry). If you are building real AI agents for business, you already know the biggest problem with generic models: they sound confident, but they guess when they do not know your products or policies. So instead of writing ever-larger prompts, we fine-tune properly inside Microsoft Foundry, end to end.

What you will learn in this deep dive:
- When fine-tuning makes sense and when RAG is better
- Supervised Fine-Tuning (SFT) to teach domain knowledge
- Direct Preference Optimisation (DPO) to align tone and brand voice
- Tool calling fine-tuning to make the model take real actions
- The full workflow: data preparation, training, evaluation, deployment
- Cost and scaling considerations for production usage

This is practical and hands-on, focused on building a reliable AI support agent that is consistent, on-brand, and capable of taking actions, not just chatting.

GitHub - https://github.com/Shailender-Youtube/Fine-Tuning-Microsoft-Foundry

0:00 – Introduction
2:24 – RAG vs Fine-Tuning
5:45 – Synthetic Data Generation
13:39 – Supervised Fine-Tuning (SFT)
25:05 – Direct Preference Optimisation (DPO)
32:18 – Tool Calling Fine-Tuning
41:16 – Cost
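To make the data-preparation step concrete, here is a minimal Python sketch of what SFT and DPO training records typically look like. It assumes the chat-style JSONL formats commonly used for fine-tuning Azure OpenAI models in Foundry (one JSON object per line; DPO pairs a preferred and a non-preferred completion for the same prompt); the Contoso support content is invented for illustration, and the exact field names accepted by your model version should be checked against the Foundry documentation.

```python
import json

# Supervised Fine-Tuning (SFT): each JSONL line is one complete
# conversation the model should learn to imitate.
sft_record = {
    "messages": [
        {"role": "system", "content": "You are a support agent for Contoso."},
        {"role": "user", "content": "Can I return an opened product?"},
        {"role": "assistant",
         "content": "Yes. Opened items can be returned within 30 days with a receipt."},
    ]
}

# Direct Preference Optimisation (DPO): the same prompt with a preferred
# and a non-preferred answer, so training aligns tone and brand voice,
# not just factual content.
dpo_record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Can I return an opened product?"}
        ]
    },
    "preferred_output": [
        {"role": "assistant",
         "content": "Of course! Opened items can be returned within 30 days "
                    "with a receipt. Happy to help with anything else."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "Maybe. Check the policy."}
    ],
}

# Training files are JSON Lines: one serialized object per line.
with open("sft_train.jsonl", "w") as f:
    f.write(json.dumps(sft_record) + "\n")
with open("dpo_train.jsonl", "w") as f:
    f.write(json.dumps(dpo_record) + "\n")
```

In practice you would generate hundreds of such records (the synthetic data generation chapter covers this), then upload the JSONL files as training data when creating the fine-tuning job in Foundry.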