OpenAI launched GPT-5 on Tuesday with a unified architecture that eliminates the separate "o-series" reasoning lineup the company had maintained since late 2024. The change ends what was, by widespread agreement, the worst piece of consumer UX in any frontier AI product: the model picker, which forced users to choose among GPT-4o, o3, o4-mini, GPT-4.5, and several other variants whose differences were rarely clear even to expert users.
Under the new system, ChatGPT decides for itself whether a query warrants extended reasoning. Simple questions ("What's the capital of France?") are answered in milliseconds; complex multi-step problems trigger the model to produce intermediate reasoning traces before responding. Users who want to force one mode or the other can do so with a toggle, but the default, which is what 95% of users will see, is automatic.
How it actually performs
The benchmark numbers are competitive but not dominant. On SWE-Bench Verified, GPT-5 scored 69.4%, a meaningful jump from o3's 61.2% but still trailing Claude Opus 4.7 by about two points. On the harder ARC-AGI-2 reasoning benchmark, OpenAI's evaluation puts GPT-5 at 32%, which would be a new state of the art if confirmed by independent labs. The bigger story, though, is the latency improvement: even when GPT-5 chooses to engage extended reasoning, time-to-first-token is roughly 40% faster than o3 at equivalent reasoning depth.
Pricing is the surprise. The flagship tier of GPT-5 is available to all Plus subscribers at the existing $20/month price point, with no rate-limit changes. The previously separate Pro tier ($200/month) is now differentiated only by extended-context access to GPT-5 and priority compute during peak hours. The pricing realignment looks like a direct response to Google's free Gemini 3 Ultra rollout last week, though OpenAI denies the timing was strategic.
What got cut
The o-series brand is gone. So are GPT-4o, GPT-4.5, GPT-4.1, and the various preview models that had accumulated over the past 18 months. API users will have a six-month deprecation window to migrate, but the consumer-facing UI was switched over instantly. Some power users have already complained that specific behaviors they relied on — particularly o3-mini's terse outputs — are now harder to elicit. OpenAI's response, paraphrased: the old behaviors are still available via system prompts; you just have to ask.
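For API users working through the deprecation window, migration amounts to swapping the model name and, where an old behavior mattered, requesting it via a system prompt. The sketch below builds a request payload in the general shape of a chat-completions call; the `gpt-5` model id, the `build_request` helper, and the terse-output prompt wording are assumptions drawn from this article, not documented values.

```python
# Sketch of migrating an o3-mini call to GPT-5, recovering terse output
# via a system prompt. Model id and prompt wording are assumptions.

def build_request(user_message: str, terse: bool = False) -> dict:
    """Build a chat-completions-style payload targeting GPT-5."""
    messages = []
    if terse:
        # Per OpenAI's paraphrased guidance, behaviors like o3-mini's
        # brevity are now elicited by prompting, not model choice.
        messages.append({
            "role": "system",
            "content": "Answer as tersely as possible. No preamble.",
        })
    messages.append({"role": "user", "content": user_message})
    return {"model": "gpt-5", "messages": messages}

payload = build_request("Summarize the change in one line.", terse=True)
# With an SDK client this payload would then be sent, e.g.:
#   client.chat.completions.create(**payload)
```

Whether a system prompt fully reproduces the old models' behavior is exactly what the complaining power users dispute; the mechanics of asking, at least, are this simple.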