In this video, I demonstrate a practical example of evaluating a Retrieval-Augmented Generation (RAG) system using RAGAS. When building RAG applications with large language models, one of the biggest challenges is evaluation: how do we know whether the system is retrieving the right documents and generating reliable answers? RAGAS is an open-source framework designed specifically to evaluate RAG pipelines. It provides automated metrics that measure the quality of responses without requiring manual annotations.

[00:00] What is RAGAS?
[00:36] What Data Does RAGAS Need?
[00:48] RAGAS Evaluation Metrics Explained
[00:58] RAGAS Demo (Step-by-Step)
[02:37] Understanding the Evaluation Results
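As a rough companion to the demo, here is a minimal sketch of the kind of data RAGAS evaluates: each record pairs a question with the retrieved contexts, the generated answer, and an optional ground-truth reference. The sample values below are hypothetical, and the exact `ragas` API shown in the comments varies between versions, so treat it as an illustration rather than the video's exact code.

```python
# A hypothetical evaluation record in the shape RAGAS expects:
# parallel lists of questions, retrieved contexts, generated answers,
# and (optionally) ground-truth references.
sample = {
    "question": ["What is RAGAS?"],
    "contexts": [[
        "RAGAS is an open-source framework for evaluating RAG pipelines."
    ]],
    "answer": [
        "RAGAS is a framework that scores RAG pipelines with automated metrics."
    ],
    "ground_truth": [
        "RAGAS is an open-source framework for evaluating RAG pipelines."
    ],
}

# Sanity-check that the parallel lists line up one-to-one.
lengths = {len(v) for v in sample.values()}
print(lengths)  # expect a single shared length

# With the `ragas` and `datasets` packages installed (and an LLM API key
# configured, since several metrics call an LLM as judge), evaluation
# looks roughly like this in recent ragas versions:
#
#   from datasets import Dataset
#   from ragas import evaluate
#   from ragas.metrics import faithfulness, answer_relevancy, context_precision
#
#   result = evaluate(
#       Dataset.from_dict(sample),
#       metrics=[faithfulness, answer_relevancy, context_precision],
#   )
#   print(result)  # per-metric scores between 0 and 1
```

The actual metric set used in the demo is walked through in the video; the three named above are common defaults, not necessarily the ones shown on screen.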