Sam Altman Reveals: ChatGPT’s Subtle Daily Decisions Worry Him More Than Major Policy Moves

Published: 2025-09-15 20:37:52

OpenAI CEO Sam Altman says he is more troubled by the subtle, everyday decisions ChatGPT makes than by major policy moves.

OpenAI's chief sounds alarm on AI's quiet judgment calls—while Wall Street still bets on chatbots replacing analysts.

The Real AI Anxiety

Forget Skynet-style takeovers. Sam Altman's biggest concern isn't some grand malicious scheme—it's the micro-decisions ChatGPT makes millions of times daily. Those barely-noticed choices that shape responses, steer conversations, and quietly accumulate influence.

Policy plays second fiddle to pervasive subtlety. Major rule changes get scrutiny and debate. These tiny, repeated judgments? They slip through—reshaping user perspectives one interaction at a time.

These automated systems make such judgment calls constantly, without human oversight. They cut corners, prioritize efficiency, and optimize for engagement—often at the expense of nuance. Altman's warning highlights how automated subtlety might outweigh deliberate policy in impact.

Finance folks would call it 'death by a thousand cuts'—but they're too busy training LLMs to replace junior analysts.

Who sets ChatGPT’s ethical rules?

Altman said the base model is trained on humanity’s shared knowledge, and then OpenAI aligns behavior and decides what the system will not do. “This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework,” he said.

He said the company consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.” He pointed to one boundary the company enforces, which is that the system will not give instructions on creating biological weapons.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” he said, adding that the company “won’t get everything right, and also needs the input of the world.”

What happens to user privacy while using ChatGPT?

When Carlson said generative AI could be used for “totalitarian control,” Altman replied that he has been pushing for an “AI privilege,” under which what someone tells a chatbot would be confidential. “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.”

As mentioned in a previous Cryptopolitan report, Altman has stressed that even as AI enters more personal areas of users’ lives, their conversations still lack legal safeguards.

He noted that, at present, U.S. authorities can obtain user data from the company with a subpoena.

Asked if the military would use ChatGPT to harm people, Altman did not give a direct answer. “I don’t know the way that people in the military use ChatGPT today… but I suspect there are a lot of people in the military talking to ChatGPT for advice,” he said, later adding he was not sure “exactly how to feel about that.”

OpenAI is one of the AI firms that received a U.S. Department of Defense award worth $200 million.

Carlson predicted that, on its current path, generative AI, and by extension Altman, could accumulate more power than any other person, calling ChatGPT a “religion.”

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge,” Altman said. He also said he thinks AI will eliminate many jobs that exist today, especially in the short term.
