Pentagon Pressures OpenAI & Anthropic to Loosen Restrictions on Military AI Tools for Classified Operations
Published:
2026-02-12 13:00:39

Pentagon pushes OpenAI and Anthropic for fewer restrictions on classified military AI tools

The U.S. Department of Defense is pushing Silicon Valley's top AI labs to unlock their most powerful models for battlefield use.


Unshackling the Algorithms

Military planners want fewer guardrails on AI systems designed for intelligence analysis, logistics, and cyber operations. The argument? Speed and strategic advantage in an era of algorithmic warfare. They're not asking for autonomous weapons—yet—but for the kind of unrestricted reasoning that commercial contracts currently forbid.


The Ethics vs. Edge Dilemma

OpenAI and Anthropic built their reputations—and their valuations—on responsible AI frameworks. Now, their biggest potential customer wants those principles relaxed. It's a classic tension: corporate ethos versus national security demands. The labs face a reputational minefield; saying 'yes' alienates their core research community, while saying 'no' risks ceding the future of defense AI to less scrupulous competitors.


Follow the Money

This isn't just about technology—it's about market positioning. The first company to secure a major Pentagon contract for advanced AI could lock in a revenue stream that makes venture capital rounds look like pocket change. It's the ultimate enterprise sales pitch, with the added bonus of helping to draft the rules that will later govern the industry. A cynical take? This is less about saving democracy and more about securing a trillion-dollar monopoly—because in the end, even artificial intelligence follows the capital.

Pentagon demands access without restrictions across secure networks

This push is part of bigger talks about how AI will be used in future combat. Wars are already being shaped by drone swarms, robots, and nonstop cyberattacks. The Pentagon doesn’t want to play catch-up while the tech world draws lines around what’s allowed.

Right now, most companies working with the military are offering watered-down versions of their models. These only run on open, unclassified systems used for admin work. Anthropic is the one exception.

Claude, its chatbot, can be used in some classified settings, but only through third-party platforms. Even then, government users still have to follow Anthropic’s rules.

What the Pentagon wants is direct access inside highly sensitive classified networks. These systems are used for tasks like planning missions or locking in targets. It's not clear when or how chatbots like Claude or ChatGPT would be installed on those networks, but that's the goal.

Officials believe AI can help process huge amounts of data and feed it to decision-makers fast. But these tools do generate false information, and in a military setting that could cost lives. Researchers have warned about exactly that.

OpenAI made a deal with the Pentagon this week. ChatGPT will now be used on an unclassified network called genai.mil. That network already reaches over 3 million employees across the Defense Department.

As part of the deal, OpenAI removed a lot of its normal usage limits. There are still some guardrails in place, but the Pentagon got most of what it wanted.

A company spokesperson said any expansion to classified use would need a new deal. Google and Elon Musk’s xAI have done similar deals in the past.

AI researchers are quitting and calling out the risks

Talks with Anthropic haven’t been as easy. Leaders at the company told the Pentagon they don’t want their tech used for automatic targeting or spying on people inside the U.S.

Even though Claude is already being used in some national security missions, the company's executives are pushing back. In a statement, a spokesperson said:

“Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities.”

They said Claude is already in use, and the company is still working closely with what's now called the Department of War. President Donald Trump recently ordered the Defense Department to adopt that name, but Congress still needs to approve it.

While all of this is happening, a bunch of researchers at these companies are walking out. One of Anthropic’s top safeguards researchers said, “The world is in peril,” as he quit. A researcher at OpenAI also left, saying the tech has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

Some of the people leaving aren’t doing it quietly. They’re warning that things are moving too fast and the risks are being ignored. Zoë Hitzig, who worked at OpenAI for two years, quit this week.

In an essay, she said she had “deep reservations” about how the company is planning to bring in ads. She also said ChatGPT stores people’s private data, things like “medical fears, their relationship problems, their beliefs about God and the afterlife.”

She said that’s a huge problem because people trust the chatbot and don’t think it has any hidden motives.

Around the same time, tech site Platformer reported that OpenAI got rid of its mission alignment team. That group was set up in 2024 to make sure the company’s goal of building AI that helps all of humanity actually meant something.
