⚡Mark Attendance & Submit Assignment at One Place → https://luc.to/adqajan26d1
Join the GenAI Discussion group to get daily updates → https://luc.to/gaiwa

In this advanced GenAI workshop, you’ll build a production-style Document Q&A app powered by embeddings and Retrieval-Augmented Generation (RAG). Designed for developers who already know basic LLM prompting, this session takes you end to end: ingest documents, create embeddings, store the vectors in Redis, and serve answers through a FastAPI backend. You’ll add real-time streaming responses, containerize everything with Docker, and build a clean Next.js + TypeScript front-end. We’ll also cover multi-document RAG patterns, scalable storage options such as OceanBase, local RAG setups for privacy, and what it takes to make the app enterprise-ready. By the end, you’ll have a serious, portfolio-ready Document Q&A app and a solid understanding of modern RAG architecture. Subscribe for more free workshops and GenAI mini projects.
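To preview the core idea before the session, here is a toy sketch of the retrieval step at the heart of RAG. It is an assumption-laden stand-in, not the workshop code: word-count vectors replace real embedding-model vectors, an in-memory list replaces the Redis vector index, and all names (`embed`, `VectorStore`) are hypothetical.

```python
# Toy RAG retrieval sketch: embed documents, store vectors, find the
# closest match to a query by cosine similarity. In the workshop app,
# an embedding model and a Redis vector index play these roles.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words count vector.
    return Counter(w.strip(".,?!").lower() for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory stand-in for the Redis vector index."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Redis stores document vectors for fast similarity search.")
store.add("FastAPI exposes a question-answering endpoint.")
top_hit = store.search("vectors and similarity search")[0]
print(top_hit)  # the Redis document is the closest match
```

In the real app, the top-k retrieved chunks are stuffed into the LLM prompt as context, and the generated answer is streamed back through the FastAPI endpoint.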