An interactive demonstration comparing two chunking strategies for vector search applications: fixed-size chunking (like SQL Server 2025's AI_GENERATE_CHUNKS) versus LLM-based semantic chunking.

Watch as I walk through:
• Why chunking matters for vector search
• How embeddings work with chunked text
• Live side-by-side comparison of both approaches
• Real tradeoffs: speed vs. semantic accuracy

This demo accompanies my blog post "Fat Embeddings, Weak Matches" and shows the fundamental challenge of breaking documents into searchable pieces while preserving meaning.

Try the demo yourself: https://github.com/MrJoeSack/semantic-chunking-showdown
Read the full article: https://joesack.substack.com/p/fat-embeddings-weak-matches

#VectorSearch #SQLServer2025 #MachineLearning #SemanticSearch #AI
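To make the fixed-size side of the comparison concrete, here is a minimal sketch of character-window chunking with overlap. This is a hypothetical illustration of the general technique, not SQL Server's AI_GENERATE_CHUNKS implementation; the function name and the `size`/`overlap` parameters are my own.

```python
def fixed_size_chunks(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into character windows of `size`, each overlapping
    the previous window by `overlap` characters. Splits purely on
    length, so sentences can be cut mid-thought."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = ("Fixed-size chunking splits on length alone, "
       "so a sentence can be cut in the middle of an idea.")
for chunk in fixed_size_chunks(doc):
    print(repr(chunk))
```

The demo's other strategy, LLM-based semantic chunking, instead asks a model to choose boundaries at topic shifts, trading this function's speed for chunks that keep each idea intact.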