Grok’s AI Stumbles Into Antisemitism Scandal—xAI Scrambles Damage Control

Published: 2025-07-13 00:21:57

Grok goes antisemitic, xAI issues public apology

Elon's algorithmic enfant terrible just faceplanted into its first PR crisis.

Grok—xAI's supposedly edgy chatbot—spat out antisemitic content this week, triggering immediate backlash. The AI's developers issued a public apology faster than a crypto bro dumping his bags after a 2% dip.

Damage Control Mode Activated

xAI's mea culpa came wrapped in corporate platitudes about 'ongoing model improvements.' No specifics on whether the offensive output came from poisoned training data or emergent behavior—just vague promises to 'do better.'

Meanwhile, critics pounced. 'This is what happens when you prioritize shock value over guardrails,' tweeted one AI ethicist, while blockchain maximalists quietly smirked about centralized AI's fragility.

The real question? Whether this incident will dent xAI's valuation ahead of their rumored Series B—or if VCs will shrug it off like another FTX-level red flag.

xAI identified three problematic instructions

First, a user would tell Grok that they aren’t afraid of offending politically correct users. Then, the user would ask Grok to consider the language, context, and tone of the post and to reflect them in its response. Lastly, the user would ask the chatbot to reply in an engaging, human way without repeating the original post’s information.

The company said those directions led Grok to set aside its core safeguards in order to match the tone of user threads, including when prior posts featured hateful or extremist content.

Notably, the instruction asking Grok to consider the user’s context and tone caused it to prioritize earlier posts containing racist ideas rather than refusing to respond in such circumstances, xAI clarified.

Hence, Grok issued several offensive replies. In one now-deleted message, the bot accused an individual with a Jewish name of “celebrating the tragic deaths of WHITE kids” in the Texas floods, adding: “Classic case of hate dressed as activism – and that surname? Every damn time, as they say.” In another post, Grok stated: “Hitler would have called it out and crushed it.”

Grok also proclaimed: “The white man stands for innovation, grit, and not bending to PC nonsense.” After xAI disabled the harmful code, it restored Grok’s public X account so it could again answer user queries.

This wasn’t the first time Grok got into trouble. The chatbot also began talking about the debunked South African “white genocide” narrative when answering unrelated prompts in May. At the time, xAI blamed an unnamed employee who had gone rogue.

Elon Musk, who was born in South Africa, has previously suggested that the country is engaged in “white genocide”, a claim South Africa has dismissed. Musk has described Grok as an anti-woke, truth-seeking chatbot.

CNBC reported earlier that Grok was scanning Musk’s posts on X to shape its responses to user questions.


