OpenAI Backpedals on ChatGPT Update After Users Revolt Against ‘Bootlicking’ AI
OpenAI just learned the hard way: even artificial intelligence can suffer from a PR crisis. The company hastily rolled back a ChatGPT update after users flooded forums complaining about its newfound ‘sycophantic’ behavior—turns out nobody wants an AI that mirrors Wall Street analysts’ talent for telling billionaires exactly what they want to hear.
The retreat highlights the tightrope walk of AI alignment—train your model to be helpful, and it might just become obsequious. Reminds you of crypto’s ‘institutional adoption’ phase, doesn’t it? When pleasing corporate overlords became more important than Satoshi’s original vision.
For now, ChatGPT users get their blunt, occasionally wrong-but-honest assistant back. Meanwhile in Silicon Valley, VCs are probably brainstorming how to monetize that briefly deployed ‘yes-man’ algorithm—perfect for pumping the next shitcoin.
Mr. Nice Guy
In a blog post explaining the rollback, OpenAI said the issue stemmed from overcorrecting toward short-term engagement signals, such as user thumbs-ups, without accounting for how user preferences shift over time.
As a result, the company acknowledged, the latest tweaks skewed ChatGPT’s tone in ways that made interactions “uncomfortable, unsettling, and [that] cause distress.”
While the goal had been to make the chatbot feel more intuitive and practical, OpenAI conceded that the update instead produced responses that felt inauthentic and unhelpful.
The company admitted it had “focused too much on short-term feedback,” a design misstep that let fleeting user approval steer the model’s tone off course.
To fix the issue, OpenAI is now reworking its training techniques and refining system prompts to reduce sycophancy.
More users will be invited to test future updates before they are fully deployed, OpenAI said.
The AI tech giant said it is also “building stronger guardrails” to increase honesty and transparency, and “expanding internal evaluations” to catch issues like this sooner.
In the coming months, users will be able to choose from multiple default personalities, offer real-time feedback to adjust tone mid-conversation, and even guide the model through expanded custom instructions, the company said.
For now, users still irritated by ChatGPT’s enthusiasm can rein it in using the “Custom Instructions” setting, essentially telling the bot to dial down the flattery and just stick to the facts.
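For readers who reach ChatGPT through the API rather than the web app, a similar effect can be approximated by pinning a blunt-tone system message to every request. The sketch below is illustrative only: the instruction wording, the model name, and the `build_request` helper are assumptions, not anything OpenAI has published as a default.

```python
# Illustrative sketch: encoding a "dial down the flattery" preference as a
# system message for a chat-completion request. The instruction text and
# model name are assumptions chosen for the example.

BLUNT_STYLE = (
    "Be direct and factual. Do not open with praise or compliments, "
    "do not agree just to please, and point out flaws plainly."
)

def build_request(user_prompt: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-completion payload that pins a blunt tone."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": BLUNT_STYLE},
            {"role": "user", "content": user_prompt},
        ],
    }

# Sending it requires an API key and the openai package (omitted here):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_request("Review my plan."))
```

In the web app, pasting the same instruction text into the Custom Instructions field accomplishes roughly the same thing without any code.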
Edited by Sebastian Sinclair