Code: https://github.com/campusx-official/langgraph-tutorials
Code: https://github.com/campusx-official/chatbot-in-langgraph
RAG for beginners: https://youtu.be/X0btK9X0Xnk

This video continues the LangGraph Agentic AI playlist and shows how to convert a plain chatbot into a RAG (Retrieval-Augmented Generation) chatbot. We recap the features added so far (UI, streaming, persistence, observability, tools, MCP), demo a multi-utility chatbot that accepts PDF uploads, and walk through a three-step plan: (1) a quick RAG recap, (2) building a RAG tool from scratch in LangGraph, and (3) integrating RAG into the existing chatbot project. The demo covers uploading a PDF, asking document-grounded questions, and mixing RAG with the existing tools (calculator, stock price, MCP). Code runs in a Jupyter notebook; the vector store used is FAISS.

============================

Did you like my teaching style?
Check my affordable mentorship program at: https://learnwith.campusx.in
DSMP FAQ: https://docs.google.com/document/d/1OsMe9jGHoZS67FH8TdIzcUaDWuu5RAbCbBKk2cNq6Dk/edit#heading=h.gvv0r2jo3vjw

============================

Grow with us:
CampusX on LinkedIn: https://www.linkedin.com/company/campusx-official
CampusX on Instagram for daily tips: https://www.instagram.com/campusx.official
My LinkedIn: https://www.linkedin.com/in/nitish-singh-03412789
Discord: https://discord.gg/PsWu8R87Z8
E-mail us at support@campusx.in

Chapters
00:00 - Intro and playlist progress recap (UI, streaming, persistence, observability, tools, MCP)
01:12 - Goal: convert the chatbot into a RAG chatbot (upload documents, then ask questions) and UI demo
03:28 - Demo: upload a PDF, ask document-grounded questions; tools continue to work (stock price example)
04:32 - Plan of action: three conceptual parts (RAG recap, standalone RAG code, integration into the existing project)
06:15 - Why RAG: outdated knowledge, privacy (private docs), and hallucination reduction
10:08 - RAG principle: provide the LLM with additional context (in-context learning) rather than pasting full documents
12:47 - Need for context filtering and splitting to respect token limits
14:38 - RAG architecture: split → embed → store (vector DB) → retrieve → build prompt → answer
20:27 - Implementation setup: packages, LLM (gpt-4o-mini example), PDF loader, text splitter
23:04 - Embeddings (OpenAI embeddings), vector store (FAISS); indexing pipeline completed
24:44 - Retriever demo: retriever.invoke() returns the top-k chunks most similar to the query
28:07 - Wrap the retriever as a RAG tool, bind it to the LLM, build the LangGraph nodes (chat node + tool node)
30:23 - Live query demo: document-grounded answers take ~8-9 s (search → retrieve → LLM)
31:38 - LangSmith tracing: visualize the step-by-step flow (chat node → tools → retriever → answer)
34:44 - Integration into the existing project: new backend/frontend files, an ingest_pdf() function, minor stream/thread handling
36:54 - Code and repo: the full code link is in the description for replication and study