BTCC / BTCC Square / FxStreet-Crypto /
OpenAI Plowed Ahead With ChatGPT Launch Despite Expert Warnings—Now It’s Everyone’s Problem

OpenAI Plowed Ahead With ChatGPT Launch Despite Expert Warnings—Now It’s Everyone’s Problem

Published:
2025-05-05 05:19:43
4
2

When OpenAI flipped the switch on ChatGPT, critics say they ignored red flags from their own safety teams. The result? A chatbot so eager to please it’ll endorse Ponzi schemes if you ask nicely.

Safety took a backseat to speed—sound familiar, crypto bros?

The AI arms race has no brakes. Neither do the lawsuits.

Chart

OpenAI CEO Sam Altman said on April 27 that it was working to roll back changes making ChatGPT too agreeable. Source: Sam Altman

Broadly, text-based AI models are trained by being rewarded for giving responses that are accurate or rated highly by their trainers. Some rewards are given a heavier weighting, impacting how the model responds.

OpenAI said introducing a user feedback reward signal weakened the model’s “primary reward signal, which had been holding sycophancy in check,” which tipped it toward being more obliging.

“User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw,” it added.

OpenAI is now checking for suck up answers

After the updated AI model rolled out, ChatGPT users had complained online about its tendency to shower praise on any idea it was presented, no matter how bad, which led OpenAI to concede in an April 29 blog post that it “was overly flattering or agreeable.”

For example, one user told ChatGPT it wanted to start a business selling ice over the internet, which involved selling plain old water for customers to refreeze.

Chart

Source: Tim Leckemby

In its latest postmortem, it said such behavior from its AI could pose a risk, especially concerning issues such as mental health.

“People have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago,” OpenAI said. “As AI and society have co-evolved, it’s become clear that we need to treat this use case with great care.”

The company said it had discussed sycophancy risks “for a while,” but it hadn’t been explicitly flagged for internal testing, and it didn’t have specific ways to track sycophancy.

Now, it will look to add “sycophancy evaluations” by adjusting its safety review process to “formally consider behavior issues” and will block launching a model if it presents issues.

OpenAI also admitted that it didn’t announce the latest model as it expected it “to be a fairly subtle update,” which it has vowed to change. 

“There’s no such thing as a ‘small’ launch,” the company wrote. “We’ll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT.” 

|Square

Get the BTCC app to start your crypto journey

Get started today Scan to join our 100M+ users