Most people hear “AI” and think prompts and chatbots. This episode goes deeper, into the infrastructure layer that quietly powers it all: embeddings. Eden talks with Gilad Lotan, Head of AI/Data Science & Analytics at BuzzFeed, about how turning text (and behavior) into vectors unlocks search that actually understands meaning, personalization that works with less data, and brand-safe monetization without heavy PII.

We break down the math in human terms (cosine similarity, “king − man + woman ≈ queen”), show how BuzzFeed uses embeddings across hundreds of millions of content items, and explore what happens when you cluster not just content but also people, by taste. We also dig into cost realities (vector DBs vs. model inference), why you don’t need to build a foundation model to be an AI company, and the adtech future when “intent” shifts from keywords to user embeddings. If you’re an entrepreneur or operator wondering where the real product leverage is in AI, this is the layer to master.

What you’ll learn:
- Embeddings 101: why vectors beat tags and keywords for meaning
- Recommendations that improve with less signal (and fewer privacy headaches)
- Clustering users by taste vs. averaging away what makes them unique
- How user embeddings could rewrite adtech beyond keyword intent
- Cost and stack gotchas: vector stores, update cadence, and where to spend
- Multimodal on the horizon, and when text alone is enough

Please rate this episode 5 stars wherever you stream your podcasts!
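For the curious, here is a minimal sketch of the math mentioned in the episode: cosine similarity and the classic “king − man + woman ≈ queen” analogy. The 3-dimensional vectors below are toy values invented so the analogy works; real embeddings have hundreds or thousands of dimensions and come from a trained model.

```python
import math

# Toy 3-dimensional "embeddings" -- hypothetical values, purely illustrative.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Compute "king - man + woman" component-wise in vector space...
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# ...and find the vocabulary word whose vector points most nearly
# in the same direction as the result.
best = max(vectors, key=lambda word: cosine_similarity(vectors[word], target))
print(best)  # -> queen
```

The same nearest-by-cosine lookup is what a vector database does at scale: instead of four toy words, it searches millions of content (or user) embeddings for the closest matches.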