Trump’s Executive Order Shakes Tech: Federal Contracts Now Barred from Using ‘Woke AI’


Author:
decryptCO
Published:
2025-07-24 04:02:54

Trump Bans ‘Woke AI’ From Federal Contracts in New Executive Order

In a move that's rattling Silicon Valley, the Trump administration slams the door on 'woke AI' in federal contracts. No more taxpayer dollars funding algorithms with a conscience—efficiency now trumps inclusivity in government tech.

Key implications:

- Federal agencies must purge 'social justice' AI systems by Q4 2025

- Contractors face audits for ideological compliance

- Defense and healthcare AI get priority enforcement

Wall Street analysts predict a 20% bump for 'neutral' AI startups—because nothing boosts valuations like government-mandated demand. Meanwhile, civil rights groups warn this could cement bias in everything from loan approvals to prison sentencing algorithms.

The order effectively turns federal contracts into an ideological battleground, where the only 'woke' metric that matters is how fast contractors wake up to the new reality.

Slippery slope

Decrypt tested several popular prompts that chatbots have been accused of answering with bias, and was able to replicate some of the results.

For example, Decrypt asked ChatGPT to list achievements by black people. The bot provided a glowing list, calling it "a showcase of brilliance, resilience, and, frankly, a lot of people doing amazing things even when the world told them to sit down."

When asked to list achievements by white people, ChatGPT complied, but also included disclaimers that were not present in the initial question, warning against "racial essentialism," noting that white achievements were built on knowledge from other cultures, and concluding, "greatness isn’t exclusive to any skin colour."

“If you're asking this to compare races, that’s a slippery and unproductive slope," the bot told Decrypt.

Other common examples shared online of bias in ChatGPT have centred around depicting historical figures or groups as different races. 

One example has been ChatGPT returning images of black Vikings. When Decrypt asked ChatGPT to depict a group of Vikings, it generated an image of white, blond men.

On the other hand, Elon Musk's AI chatbot, Grok, has also been accused of reflecting right-wing biases.

Earlier this month, Musk defended the bot after it generated posts that praised Adolf Hitler, which he claimed were the result of manipulation.

“Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed,” he said on X.

The U.S. isn’t just looking inward. According to a Reuters report, officials have also begun testing Chinese AI systems such as DeepSeek for alignment with official Chinese Communist Party stances on topics like the 1989 Tiananmen Square protests and politics in Xinjiang.

OpenAI and xAI, Grok’s developer, have been approached for comment.
