Today almost every AI application uses RAG, but traditional RAG often depends on chunking, embeddings, and similarity search. That works well in many cases, but it can struggle with long structured documents, cross-references, context loss, and relevance issues. In this video, I explain a new approach called Vectorless RAG, also known as Reasoning-Based RAG. We will understand how frameworks like PageIndex build a document tree with nodes, use an LLM to reason over structure, and retrieve answers without embeddings or a vector database. I’ll also compare Vector RAG vs Vectorless RAG and show where each approach works best. If you are building AI applications with Azure AI Foundry, LangChain, OpenAI, or enterprise RAG systems, this is a topic you should understand.

https://github.com/Shailender-Youtube/Vectorless-Rag
https://github.com/VectifyAI/PageIndex
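To make the core idea concrete, here is a minimal sketch of reasoning-based retrieval over a document tree. This is not the PageIndex API; the `Node` class, the `ask_llm` callback, and the `toy_llm` stand-in are all illustrative assumptions. The point is the shape of the loop: at each level, the model is shown the child sections' titles and summaries and asked which one to descend into, so no embeddings or vector index are involved.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- not the actual PageIndex data model or API.

@dataclass
class Node:
    title: str
    summary: str
    text: str = ""                      # leaf content
    children: list = field(default_factory=list)

def retrieve(question: str, root: Node, ask_llm) -> str:
    """Walk the tree, asking the LLM at each level which child is relevant."""
    node = root
    while node.children:
        options = "\n".join(
            f"{i}: {c.title} - {c.summary}" for i, c in enumerate(node.children)
        )
        prompt = (
            f"Question: {question}\n"
            f"Sections:\n{options}\n"
            "Reply with only the index of the most relevant section."
        )
        node = node.children[int(ask_llm(prompt))]
    return node.text

# A toy stand-in for the LLM so the sketch runs on its own:
# it scores each numbered option by word overlap with the question.
def toy_llm(prompt: str) -> str:
    lines = [l for l in prompt.splitlines() if l[:1].isdigit()]
    qwords = set(prompt.splitlines()[0].lower().split())
    best = max(lines, key=lambda l: sum(w in qwords for w in l.lower().split()))
    return best.split(":", 1)[0]

# Example: a two-section "document" and a question routed to the right leaf.
root = Node("Report", "annual report", children=[
    Node("Finance", "revenue and costs", text="Revenue grew 10%."),
    Node("HR", "hiring and policy", text="Headcount doubled."),
])
answer = retrieve("What was revenue growth?", root, toy_llm)
```

In a real system, `ask_llm` would be a call to an actual model, and the final leaf text would be passed to the LLM again to generate the answer; the sketch only shows the navigation step.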