In this second part of the Generative AI Foundations series, we go one level deeper to understand how AI models process input and represent meaning. In Part 1, we started with the big picture of Generative AI. Now in Part 2, we focus on the core concepts of tokens, tokenisation, embeddings, vectors, semantic similarity, and vector search.

You will learn why AI models do not process text the way humans do, how text is broken into tokens, why token counts affect cost, speed, and the context window, and how embeddings convert meaning into numbers. We will also see how vectors and semantic similarity help AI systems find the right information in private documents. These concepts are the foundation of real AI applications such as RAG, AI search, recommendations, and intelligent assistants. If you want to understand how an AI chatbot finds answers in your own data, this video will help you build the right foundation.

Timeline
00:00 Introduction
01:10 Tokens and Tokenisation
04:51 Embeddings
07:59 Vector Representations
09:25 Semantic Similarity
11:57 Outro
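The ideas above can be sketched in a few lines of Python. This is a toy illustration only: real models use learned subword tokenisers (such as BPE) rather than whitespace splitting, and real embeddings are high-dimensional vectors produced by a trained model. The tiny hand-made 3-dimensional "embeddings" below are invented for this sketch.

```python
import math

def tokenise(text):
    # Naive stand-in for a subword tokeniser: lowercase and split on
    # whitespace. Real tokenisers break text into subword pieces, and the
    # resulting token count drives cost, speed, and context-window usage.
    return text.lower().split()

def cosine_similarity(a, b):
    # Semantic similarity is commonly measured as the cosine of the angle
    # between two embedding vectors: 1.0 means same direction (very similar),
    # values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings, hand-made for illustration.
embeddings = {
    "dog":   [0.90, 0.10, 0.00],
    "puppy": [0.85, 0.15, 0.05],
    "car":   [0.00, 0.20, 0.95],
}

print(tokenise("Tokens affect cost and speed"))
# "dog" and "puppy" point in nearly the same direction, so their similarity
# is high; "dog" and "car" point in different directions, so it is low.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))
print(cosine_similarity(embeddings["dog"], embeddings["car"]))
```

Vector search in systems like RAG works on the same principle: embed the query, then rank stored document vectors by cosine similarity to find the most semantically relevant passages.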