GPT-5.5 dropped today. OpenAI is claiming state-of-the-art visual analysis, including an 83.2% score on a graduate-level reasoning benchmark. I wanted to see if that actually holds up, so I connected GPT-5.5 to Blender through MCP, built a custom skill in Codex called Blender Photo Reference, and had it build a 3D scene from a single photo off the internet. Then I compared the result against the exact same test I ran last week with Claude Opus 4.7: same reference image, same prompt, same skill structure. Only the model changed.

The project folder, the Blender MCP setup, and the skill I built in this video are all linked below. You can download them and run the same test yourself. All stats in this video come straight from OpenAI's GPT-5.5 launch page.

Project folder (Blender MCP addon + Photo Reference skill):
https://drive.google.com/file/d/1wzg0UyJOv0jXI9iumvbFeadq0q3pUf-N/view?usp=sharing

Sources:
OpenAI GPT-5.5 launch page: https://openai.com/index/introducing-gpt-5-5/
OpenAI Codex CLI: https://github.com/openai/codex
Blender MCP (by @ahujasid): https://github.com/ahujasid/blender-mcp
Blender (free): https://www.blender.org/

Timestamps:
0:00 GPT-5.5 just dropped
0:41 OpenAI's visual analysis claims
1:37 The test plan
2:12 Codex setup
2:47 Building the Blender Photo Reference skill
3:13 Installing the Blender MCP addon
4:22 How the skill actually works
5:21 Picking the reference image
5:54 GPT-5.5 builds the scene
7:10 Final render and token usage
7:30 GPT-5.5 vs Claude Opus 4.7
8:59 Why Blender fundamentals still matter
9:53 Outro

If you're new here, subscribe for more!
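If you want to run this yourself, registering the Blender MCP server with Codex looks roughly like the sketch below. This is a minimal example based on the blender-mcp README's uvx launch command; the config file path and key names are assumptions about your Codex CLI version, so check the Codex and blender-mcp docs linked above for the exact setup:

```toml
# ~/.codex/config.toml (hypothetical path/keys; verify against the Codex CLI docs)
# Registers the blender-mcp server so Codex can drive Blender over MCP.
[mcp_servers.blender]
command = "uvx"
args = ["blender-mcp"]
```

You also need the Blender addon side of blender-mcp installed and the connection started inside Blender before the model can send commands.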