My site: https://natebjones.com
Full Story w/ Prompts: https://natesnewsletter.substack.com/p/grab-the-6-prompts-i-built-from-my?r=1z4sm5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
My substack: https://natesnewsletter.substack.com/
_______________________

What's really happening inside OpenAI's Codex team, and how do they actually use their own tools? The common story is that coding agents help engineers write code faster, but the reality is far more expansive. In this video, I share the inside scoop on how OpenAI builds with Codex:

• Why every PR at OpenAI gets reviewed by Codex automatically
• How designers and non-technical staff ship production code daily
• What junior engineers do differently that makes them outperform veterans
• Where coding agents are headed when generation becomes solved

Chapters:
00:00 Meet the Codex team at OpenAI
02:09 How OpenAI engineers actually use Codex daily
03:21 Power users running multi-agent workflows for hours
04:23 Non-technical staff shipping production code
05:53 Meeting people where they code: CLI, IDE, and web
06:34 Why every PR gets reviewed by Codex automatically
08:00 Code review as superhuman capability
09:22 Codex as ambient intelligence beyond coding
10:42 Why code-trained models excel at business analysis
12:57 Designers submitting PRs: blurring role boundaries
14:25 Job titles matter less, skill sets matter more
16:58 How ideas spawn teams at OpenAI
19:12 Getting frustrated, then remembering it's insane
20:45 Why junior engineers thrive with AI tools
23:14 The one quality that matters most right now
25:09 Curiosity beats credentials in the AI age
27:52 Twenty dollars a month for a prolific engineer
30:45 Assigning tickets directly to Codex in Linear
32:09 Bottlenecks shifting from generation to deployment
35:21 Staying fluent when code generation is solved
38:34 Emergent behavior: 13-hour tasks and self-spawning agents
42:39 Codex reading its own SDK and invoking itself
44:06 Memory as an open research problem
46:56 Why simple primitives beat complex scaffolding
49:45 The new challenge: interviewing in the AI age
54:31 How do you know models are actually better?
57:43 The future: agents that run forever
1:00:09 How I use AI models outside of coding
1:04:42 Refreshing your mental model of capabilities
1:06:21 GDP-VAL and benchmarks that actually matter

Teams that embrace AI as an equalizer and stay radically curious will compound their output; those clinging to credential-based workflows will watch smaller teams ship circles around them.

Subscribe for daily AI strategy and news. For deeper playbooks and analysis: https://natesnewsletter.substack.com/