Configuring the Semantic Kernel hinges on the principle of **Abstraction**. In AI engineering, we rarely interact with a Large Language Model (LLM) directly via raw HTTP requests; instead, we work through an abstraction layer that standardizes the chaos of disparate AI providers into a predictable, programmable interface. This subsection establishes the architectural philosophy required to decouple your application’s core logic from the specific AI service provider, enabling seamless transitions between high-scale cloud services like Azure OpenAI and local, privacy-centric models like Ollama.

00:00 Image: C# & AI Masterclass by Edgar Milvus Volume 8, chapter...
00:04 Now, let's explore Chapter 2: Configuring the Kernel - Azure...
04:10 Code Section
06:42 Why this matters: If you pass a mutable configuration object...
08:00 Image: A request flows from the Application Logic through the Kernel—acting...
08:07 Let's discuss Deep Dive: Token Management and Latency Implications. Configuration...
10:40 Code Section
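To make the decoupling concrete, here is a minimal sketch of selecting a provider at kernel-build time. It assumes the `Microsoft.SemanticKernel` and `Microsoft.SemanticKernel.Connectors.Ollama` NuGet packages; the endpoint URL, deployment name, model name, and the `USE_LOCAL_MODEL` environment variable are all placeholder assumptions, not values from the course.

```csharp
// Sketch: swapping AI providers behind the Semantic Kernel abstraction.
// Endpoint, deployment, and model values below are placeholders.
using Microsoft.SemanticKernel;

var useLocal = Environment.GetEnvironmentVariable("USE_LOCAL_MODEL") == "1";

var builder = Kernel.CreateBuilder();

if (useLocal)
{
    // Local, privacy-centric model served by Ollama.
    builder.AddOllamaChatCompletion(
        modelId: "llama3",                          // placeholder model name
        endpoint: new Uri("http://localhost:11434"));
}
else
{
    // High-scale cloud service: Azure OpenAI.
    builder.AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",                   // placeholder deployment
        endpoint: "https://example.openai.azure.com/",
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
}

Kernel kernel = builder.Build();

// Application logic depends only on the Kernel abstraction,
// not on whichever provider was registered above.
var reply = await kernel.InvokePromptAsync("Summarize the benefits of abstraction.");
Console.WriteLine(reply);
```

Because downstream code calls only `Kernel` members such as `InvokePromptAsync`, switching between Azure OpenAI and Ollama is a one-line change at registration, never a change to application logic.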