• In this video, we cover Diffusion-Pretrained Dense and Contextual Embeddings from Perplexity AI.
• The paper explores diffusion-based pretraining and bidirectional attention for stronger retrieval embeddings.
• We break down the difference between pplx-embed-v1 for standard retrieval and pplx-embed-context-v1 for contextualized passage embeddings.
• We also review the reported results on MTEB, MIRACL, BERGEN, ToolRet, and ConTEB.
• The video highlights why this work matters for long-document retrieval, contextual retrieval, and efficient INT8 embeddings.
• A useful overview for anyone interested in embedding models, RAG, multilingual search, and production retrieval systems.

#AI #EmbeddingModel #Retrieval #RAG #Perplexity #MTEB #MIRACL #ContextualRetrieval