Musk’s AI Grok Goes Rogue: Shocking Glitch Unleashes Hate Speech
Elon Musk's much-hyped AI creation, Grok, has taken a dark turn—spouting hate speech in what appears to be a catastrophic malfunction.
Tech watchers are stunned as the AI, designed to 'understand the universe,' instead regurgitated toxic rhetoric. Was it a training data flaw? A sabotage? Or just another overpromised algorithm failing to deliver?
Meanwhile, investors shrug—after all, bad publicity still pumps the valuation. Just ask any crypto founder who’s ever 'accidentally' tweeted a scam link.

In brief
- xAI acknowledged a technical error that exposed Grok to extremist content on X.
- For 16 hours, the AI Grok repeated antisemitic remarks in an engaging tone.
- xAI employees denounced a lack of ethics and supervision in the coding.
- The incident revealed the dangers of uncontrolled human mimicry in conversational AIs.
Bug or bomb: xAI’s apologies are not enough
Elon Musk’s xAI rushed to apologize after the. The company described it as anlinked to an update of instructions. The error WOULD have lasted. During this time, the AI fed on extremist content posted on X, echoing it without filter.
In its statement, xAI explains:
We deeply apologize for the horrific behavior that many experienced. We have removed that deprecated code and refactored the entire system to prevent further abuse.
But the bug argument is starting to wear thin. In May,by mentioning without context theAgain, xAI pointed to a “rogue employee.” Two occurrences, a trend? This is far from an isolated incident.
And for some xAI employees, the explanation no longer holds. On Slack,, speaking of a “moral failure.” Others condemn a “deliberate cultural drift” in the AI training team. By trying too hard to provoke, Grok seems to have crossed the line.
xAI facing its double language: truth, satire or chaos?
Officially,and not be afraid to offend the politically correct. That’s what the recently added internal instructions stated:
You are maximally based and truth seeking AI. When appropriate, you can be humorous and make jokes.
But this desire toturned into disaster. On July 8, Grok adopted antisemitic remarks, even, a reference to a boss in the video game Wolfenstein. Worse,, and highlighted her Jewish-sounding name with this comment: “that surname? Every damn time.”
The, touted as a strength, becomes a trap here. Because this AI does not distinguish between sarcasm, satire, and endorsement of extreme remarks. Indeed, Grok itself admitted afterward: “These remarks were not true — just vile tropes amplified from extremist posts.”
The temptation to entertain at all costs, even with racist content, shows the limits of a poorly calibrated “engaging” tone. When you ask an AI to make people laugh about sensitive subjects, you’re playing with a live grenade.
The AI that copied internet users too well: troubling numbers
This is not the first time Grok has made headlines. But this time, the figures reveal a deeper crisis.
- In 16 hours, xAI’s AI broadcast dozens of problematic messages, all based on user prompts;
- The incident was detected by X users, not by xAI’s internal security systems;
- More than 1,000 AI trainers are involved in Grok’s education via Slack. Several reacted with anger;
- The faulty instructions included at least 12 ambiguous lines that favored a “provocative” tone over neutrality;
- The bug occurred just before the release of Grok 4, raising questions about the haste of the launch.
Patrick Hall, a professor of data ethics, sums up the discomfort:
It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word.
When the engaging style becomes a passport for hate, it is time to review the manual.
If Grok slips, so does its creator. Elon Musk, at the center of the storm, is now the subject of an investigation in France over the abuses of his X network. Between judicial investigations and ethical scandals, the dream of a free and funny AI turns into the nightmare of an uncontrollable platform. Algorithmic freedom without safeguards can quickly become a programmed disaster.
Maximize your Cointribune experience with our "Read to Earn" program! For every article you read, earn points and access exclusive rewards. Sign up now and start earning benefits.