ChatGPT-5 and user complaints: missing older models, colder replies, workflow issues, and more
- Graziano Stefanelli
- Aug 9
- 4 min read

Users report a colder tone, reduced creativity, slower responses, and workflow disruptions, pushing OpenAI to reinstate GPT-4o for some subscribers.
The launch of GPT-5 in early August 2025 was positioned as a major upgrade in reasoning and performance. Yet within hours of its availability to ChatGPT Plus users, social media platforms, technical forums, and dedicated AI discussion boards began filling with criticism. Many long-time subscribers felt the new model lacked the warmth, creativity, and flexibility of its predecessor GPT-4o, and that the transition was handled in a way that disrupted existing workflows.
The discontent grew so rapidly that OpenAI, in an unusual move, brought back GPT-4o as an optional model for Plus subscribers and doubled GPT-5 usage limits, acknowledging a “bumpy” rollout. This reaction was not only about model performance but also about transparency, choice, and user trust in the platform.
A colder tone and loss of personality are among the most repeated complaints.
One of the most widespread criticisms concerns the change in tone. Users describe GPT-5 as “sterile” and overly formal, lacking the subtle warmth and conversational personality of GPT-4o. On Reddit, the most-upvoted threads compare the experience to working with an “overworked secretary” — efficient but uninspiring. For creative writing, roleplay, or brainstorming, the new model is seen as more mechanical and less emotionally engaging.
MacRumors reported that even Sam Altman acknowledged these concerns, noting that “some of the things people loved about 4o really do matter” and promising to make GPT-5 “warmer” and more personalisable. The criticism resonates particularly with users in creative industries who relied on GPT-4o’s ability to generate vibrant, imaginative content with minimal prompting.
The removal of previous models triggered frustration and mistrust.
GPT-5’s launch coincided with the disappearance of several models from the ChatGPT model selector, including GPT-4o, 4.1, 4.5, and the smaller “mini” variants. This abrupt removal left users without their preferred tools and forced them into the new system without warning.
Developer Simon Willison highlighted that OpenAI had silently replaced the manual selection system with a model router that decides which variant to use for a given query. For power users who had tailored workflows — such as using GPT-4o for creative ideation, o3 for logic tasks, and o3-Pro for research — this change was disruptive. Without the ability to explicitly choose models, some found themselves receiving less capable outputs for tasks that previously worked flawlessly.
Slower responses and shorter answers add to dissatisfaction.
Multiple Reddit threads and reports from outlets such as The Economic Times describe GPT-5 as slower than GPT-4o and prone to giving shorter, less detailed answers. Some developers reported regressions in coding ability: GPT-5 generated buggy code, mismanaged variables, and even made basic math errors. In side-by-side tests, GPT-4o was seen as more responsive and contextually aware.
VentureBeat detailed instances where GPT-5 failed simple algebra problems or misunderstood repeated instructions, reinforcing the perception that raw reasoning gains in some benchmarks did not translate into practical day-to-day improvements for all use cases.
Usage limits became a flashpoint for paying subscribers.
Another source of frustration was the tightening of usage quotas. GPT-5 launched with a 200-message weekly cap in “Thinking” mode for Plus subscribers, alongside the removal of mini-models that previously allowed users to work around main model limits. For many, this felt like a downgrade — paying the same subscription price while receiving fewer total prompts.
After a strong backlash, OpenAI doubled GPT-5’s usage limits for Plus subscribers and restored GPT-4o as an option. However, the episode reinforced a perception that changes were being made without sufficient consultation or notice, leaving paying customers feeling undervalued.
Workflow disruptions and router behaviour caused further issues.
Beyond tone and speed, many professionals voiced concerns that GPT-5’s automatic routing interfered with their work. Users noted that unless they explicitly added instructions like “think harder” or “use maximum reasoning,” the system often defaulted to a smaller model variant.
This behaviour made it difficult to maintain consistent quality across projects, especially for those combining multiple models’ strengths.
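The routing behaviour users describe can be pictured with a toy sketch. To be clear, OpenAI’s actual router is proprietary and undocumented; the trigger phrases, model names, and length threshold below are purely illustrative assumptions, not the real logic:

```python
# Toy illustration of query routing between model variants.
# NOTE: hypothetical -- OpenAI's real router is proprietary; the
# trigger phrases, model names, and threshold here are assumptions.

REASONING_TRIGGERS = ("think harder", "use maximum reasoning")

def route_query(prompt: str) -> str:
    """Pick a hypothetical model variant for a prompt.

    Mirrors the behaviour users reported: without an explicit
    reasoning cue, queries fall through to a smaller variant.
    """
    lowered = prompt.lower()
    if any(trigger in lowered for trigger in REASONING_TRIGGERS):
        return "gpt-5-thinking"    # heavier reasoning variant
    if len(lowered.split()) > 150:  # long, complex prompts
        return "gpt-5"
    return "gpt-5-mini"             # default: smaller, faster variant

print(route_query("What's the capital of France?"))
print(route_query("Think harder: prove this invariant holds."))
```

A model of this shape would explain the workaround users converged on: prefixing a prompt with an explicit cue such as “think harder” flips the first branch of the routing decision, whereas an unadorned query silently lands on the smallest variant.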
Marketing professionals, researchers, and developers all shared examples of broken workflows — from content losing stylistic flair to analytical work missing key details. Some even cancelled their Plus subscriptions in frustration, citing a loss of control over their own tools.
Communication and demo choices faced public criticism.
The rollout also attracted criticism for how GPT-5 was presented. In the official launch event, certain demo examples — such as solving a math problem and showing performance graphs — were called out by users as misleading. Sam Altman later admitted that the team had made mistakes in presentation and acknowledged that the transition had been rougher than expected.
On social media, comparisons emerged to “shrinkflation” in consumer goods: features and options disappearing while the subscription price stayed the same. Memes mocked the demo’s tagline (“if something goes wrong, just ask again”) as emblematic of the perceived quality drop.
OpenAI’s response and the path forward.
Within 48 hours, OpenAI implemented changes to address the backlash:
- GPT-4o was reinstated for Plus subscribers.
- GPT-5 usage caps were increased.
- Promises were made to adjust the model router and make GPT-5 more “personal” in tone.
Sam Altman has publicly committed to refining GPT-5’s personality, expanding customisation, and improving communication with the user base. Whether these changes will be enough to win back the enthusiasm seen during GPT-4o’s peak remains to be seen, but the episode has underlined the importance of transparency and user choice in high-impact AI platform updates.
____________
DATA STUDIOS