An end-to-end walkthrough of Braintrust, the AI observability platform for building quality AI products. Using a customer support chatbot example, I show you how to use evals and observability to build trust in your AI systems.

Chapters:
0:00 Intro: The AI iteration problem
0:24 What is Braintrust: evals + observability
0:38 Demo setup: SunshineCo support chatbot
0:57 Setting up AI providers in Braintrust
1:20 Building your first eval in the playground
2:07 Using AutoEvals and LLM-as-a-judge scorers
2:35 Brand alignment scorer with chain of thought
3:08 Viewing traces and understanding scores
3:30 Prompt versioning and collaboration
4:10 Diff mode: comparing experiment runs side-by-side
4:47 Reverting to previous prompts
5:02 Observability: monitoring production traffic
5:28 Live demo: real-time feedback collection
6:06 Online scoring with sampling rates
6:38 Loop: Braintrust's built-in AI assistant
7:08 Custom views and monitoring dashboards
7:29 Outro

Homepage 👉 https://www.braintrust.dev/home
Changelog 👉 https://www.braintrust.dev/docs/changelog
GitHub 👉 https://github.com/braintrustdata/
Discord 👉 https://discord.gg/braintrust
Newsletter 👉 https://www.braintrust.dev/newsletter
X/Twitter 👉 https://x.com/braintrust
YouTube 👉 https://youtube.com/@BraintrustData
LinkedIn 👉 https://linkedin.com/company/braintrust-data