Musk’s xAI pledges to sign EU AI safety code amid rifts among tech giants
Elon Musk’s AI venture plays nice with Brussels while rivals squirm under regulatory scrutiny.
xAI and other tech firms have mixed feelings about the code
In a post on X, xAI said that while the AI Act and the Code promote safety, it considers other parts of the framework profoundly detrimental to innovation and views the copyright provisions as a clear overreach.
The company, however, confirmed it will sign the safety and security chapter.
“xAI supports AI safety and will be signing the EU AI Act’s Code of Practice Chapter on Safety and Security.”
~ xAI.
The company did not say whether it intends to adopt the code’s other two chapters, covering transparency and copyright, both of which will apply to general-purpose AI providers under the upcoming regulation.
xAI’s stance adds to a growing divide among major AI developers over how to respond to the EU’s framework. Google, part of Alphabet, has committed to signing the full code of practice, despite expressing serious reservations about aspects of the rules.
“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” said Kent Walker, Google’s President of Global Affairs, in a blog post. But he added that recent changes to the code had improved it, and said Google would move ahead with signing.
In contrast, Meta has refused to sign. The Facebook parent says the code creates legal uncertainty and includes measures that extend far beyond what the AI Act requires.
The tech giant warned the framework could deter companies from developing foundational AI systems in Europe, describing the EU’s direction as “the wrong path on AI.”
Microsoft and OpenAI have not confirmed whether they will sign the code.
The EU is preparing the tech industry for AI Act enforcement
The EU’s AI code of practice is designed as a transitional tool, helping companies align with the AI Act’s obligations, which come into force for high-impact models on 2 August 2025. These rules target developers of so-called systemic-risk AI models, such as those built by Google, Meta, Anthropic, and OpenAI.
Although not legally binding, the code outlines expectations around documentation, content sourcing, and responding to copyright claims. Firms that sign up are likely to benefit from smoother regulatory engagement and less legal uncertainty.
The broader EU AI Act, a sweeping piece of legislation, seeks to regulate AI based on risk levels. It bans certain uses outright, such as manipulative systems or social scoring, while imposing strict requirements on “high-risk” uses in fields like education, employment, and biometrics.
Developers of advanced models will need to carry out risk assessments, maintain transparency records, and comply with strict quality standards. Those who fall short could face fines of up to 7% of their global annual turnover.
The differing reactions from AI leaders underscore how divided tech firms have become over regulation in the EU. While some, like Google, are opting for strategic engagement, others, such as Meta, are pushing back, fearing the rules will stifle innovation.
xAI’s decision to selectively support parts of the code may represent a middle ground, acknowledging the importance of AI safety while resisting what it sees as overregulation.
As the EU presses ahead with its regulatory agenda, more tech companies will have to make a choice: cooperate early or risk conflict later.