I spent an hour using Codex Spark before I read a single thing about it. That hour told me more than any benchmark chart could. OpenAI just shipped Codex Spark, a smaller variant of GPT-5.3 Codex running at 1,000 tokens per second on Cerebras hardware. I tested it blind, coming straight from full Codex on extra-high effort, and what I found reshaped how I think about speed models. In this video I walk through the speed, what happened when I pushed it, and the reframe that made everything click.

If you're working with AI coding tools (Codex, Claude Code, Cursor, or anything in that space), this is relevant. Whether you're evaluating Codex Spark for your own workflow or just trying to understand where fast models fit in the bigger picture, this video covers real hands-on experience with a model most people are only reading spec sheets about. Useful for developers, AI tool users, and anyone watching the AI coding landscape evolve.

https://openai.com/index/introducing-gpt-5-3-codex-spark/

#CodexSpark #OpenAI #AICoding #GPT5 #AITools

00:00 - Intro
00:22 - Demo
01:31 - Conclusion