OpenAI Overhauls ChatGPT Model Selection, Streamlining the User Experience
Photo by Zulfugar Karimov (unsplash.com/@zulfugarkarimov) on Unsplash
OpenAI now shows just three tiered options—Instant, Thinking, and Pro—when users pick a model in ChatGPT, with a dropdown for specific versions like “Latest” (5.4), The Decoder reports.
Key Facts
- Key company: OpenAI
OpenAI’s redesign of the ChatGPT model picker is the most visible change to the consumer interface since the platform’s 2024 UI overhaul, according to The Decoder. Rather than scrolling through a long list of version numbers, users now see three tiered options that correspond to their subscription level: Instant for rapid, low‑latency replies; Thinking for more nuanced or multi‑step tasks; and Pro for the highest‑capacity models. A secondary dropdown lets subscribers select a specific version—“Latest” (currently 5.4), 5.2, 5.0, or o3. A deeper “Configure” menu preserves granular controls such as the legacy Auto routing toggle, which automatically escalates a request from Instant to Thinking when the system detects heightened complexity.
The move addresses long‑standing criticism of OpenAI’s routing logic, which many users described as “opaque” when it first launched. The router, which decides which model processes each query, was accused of steering expensive requests toward cheaper back‑ends to conserve compute budget. By surfacing the tiered choices up front, OpenAI hopes to make the cost‑performance trade‑off transparent and give power users a clearer lever for managing latency versus capability, The Decoder notes.
In parallel with the UI shift, OpenAI is rolling out incremental upgrades to the underlying models. The company announced a “mini” variant of GPT‑5.4 aimed at edge‑device workloads, and a refreshed GPT‑5.3 Instant that, per the internal changelog, now uses “less sensationalized wording” to improve factual tone. These tweaks arrive alongside the broader GPT‑5 family, which TechCrunch highlighted in its coverage of the latest Codex upgrade. The new Codex version, built on GPT‑5, promises faster code generation and tighter integration with development tools, reinforcing OpenAI’s push into the AI‑assisted programming market.
VentureBeat and ZDNet both reported that the GPT‑5.3‑Codex release delivers a 25% speed boost over its predecessor, with the Codex team claiming the model even contributed to its own codebase. This performance gain dovetails with OpenAI’s strategy to differentiate its premium Pro tier, positioning it as the go‑to option for developers and enterprises that need the most powerful, up‑to‑date model, while the lighter Instant and Thinking tiers remain available for everyday consumer use.
Overall, the streamlined model selection and the incremental model improvements signal OpenAI’s effort to balance user experience with the economics of large‑scale inference. By making the tiered system explicit and giving users a simple way to opt into higher‑capacity models, the company hopes to quell lingering doubts about hidden routing decisions while capitalizing on the growing demand for both rapid, casual interactions and heavyweight, enterprise‑grade AI workloads.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.