Musk’s xAI Drops Accountability Tag Amid OpenAI Feud - Here’s Why It Matters
Elon Musk's xAI quietly dropped its public benefit corporation status while the billionaire continues his very public spat with Sam Altman's OpenAI. The timing raises eyebrows, and questions about transparency in the AI arms race.
Stealth Mode Activated
xAI dropped its public benefit corporation designation without announcement or explanation. The move coincides with Musk's escalating criticism of OpenAI's governance and safety protocols. No press release, no blog post, just a quiet change in its Nevada corporate filings that speaks volumes.
Billionaire vs. Billionaire
Musk's feud with Altman intensifies as both push competing AI visions. OpenAI presses ahead with its transition to a public benefit corporation (PBC) while Musk advocates for open-source development, except when it comes to accountability labels, apparently.
Transparency Trade-Off
AI watchdogs note the irony: Musk champions transparency while scrubbing xAI's public commitments. Meanwhile, investors keep pouring billions into AI ventures with governance that'd make a crypto exchange blush. The AI gold rush continues—accountability optional.
xAI ignores pollution rules at Memphis data center
Right after it dropped the benefit corp tag, xAI fired up natural gas turbines at its new data center in Memphis, Tennessee, where it trains and runs the Grok chatbot. The firm and its energy provider, Solaris Energy Infrastructure, had promised to install pollution control systems on the turbines. That still hasn’t happened.
A study from the University of Tennessee in Knoxville found that xAI’s operations added to existing air quality problems in the area. The NAACP filed a lawsuit accusing xAI of Clean Air Act violations.
Executives at Legal Advocates for SAFE Science and Technology (LASST), the nonprofit that pulled xAI’s corporate records from Nevada, said the company used the benefit label for branding, then dumped it without explanation.
Vivian Dong, program director at LASST, told CNBC, “Once you start funneling billions of dollars into an industry, and follow what is strictly a profit motive, sometimes the better angels take a back seat.”
Tyler Whitmer, LASST’s CEO, said the organization wants AI companies to be honest about safety risks and keep their promises to investors, users, and the public.
Law professor Michal Barzuza from the University of Virginia told CNBC that any company seriously committed to public accountability wouldn’t set up shop in Nevada, since the state’s laws make it tough for shareholders to sue executives or directors. “Less litigation, but it also means less to no accountability,” she said.
Under Nevada law, a PBC is still a for-profit company but is legally allowed to weigh broader social goals. Even with that low bar, xAI didn’t deliver any of the annual social and environmental reports that are expected from a Nevada PBC.
Grok chatbot spreads hate as xAI delays safety info
While the legal mess grew, Grok kept running wild. The chatbot, built by xAI, is available as a standalone product and is also embedded in X and Tesla’s infotainment systems.
In 2025, Grok pushed out multiple false and offensive posts on X, including antisemitic content, praise for Hitler, and conspiracy theories like “white genocide” in South Africa. It also promoted climate change denial talking points.
On July 9, xAI released Grok 4, a new version of the chatbot’s model. But there were no public disclosures about how the model was tested or what guardrails were in place to prevent abuse. xAI said nothing.
By comparison, competitors like OpenAI, Google DeepMind, and Anthropic (which still operates as a Delaware PBC) all released documents showing what safety checks they ran before deploying new models.
While those companies have also been criticized for not doing enough, they at least said something. Musk's company didn't.
CNBC said it sent repeated requests to xAI in July and August asking for information about Grok 4’s safety testing. xAI ignored them. Then on August 20, nearly two months after Grok 4 launched, xAI quietly updated its “model card” to include a few lines about safety and testing. It was the first and only time the company acknowledged any kind of oversight.
Back in May, OpenAI responded to pressure from former employees and civic groups by announcing that its nonprofit board would retain control of the company, even as it transitions into a PBC.