BREAKING: US Intelligence Defies Pentagon, Deploys Anthropic’s Banned Mythos AI for Cybersecurity Operations

The U.S. intelligence community is actively deploying Anthropic's blacklisted Mythos AI model for critical cybersecurity operations, directly contravening a formal Pentagon ban issued just months ago. The clandestine adoption by agencies including the National Security Agency (NSA) reveals a stark and escalating rift within the government over how to balance cutting-edge AI capabilities against mandated security safeguards, with agencies prioritizing offensive and defensive cyber tools over the declared supply-chain concerns.
White House officials met with Amodei to discuss Mythos use in government operations
Only about 40 vetted organizations are permitted to use Mythos. Of those, just 12 are publicly known, and the NSA reportedly sits among the unlisted majority. In the U.K., the NSA's counterpart agencies have also noted that they have access to the model through the country's national AI Security Institute.
Anthropic describes Mythos as remarkably powerful in cybersecurity, capable of spotting deeply embedded bugs and independently exploiting them. This combination of advanced detection and autonomous analysis has already raised both interest and concern among policymakers.
On Friday, Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent to discuss how the model can be safely integrated into government infrastructure. Despite the White House's public friction with Anthropic, the meeting signals that the model's capabilities are too valuable for federal security agencies to pass up.
Both sides characterized the talks as productive. A White House statement added, "We discussed opportunities for collaboration, as well as shared approaches and protocols to address the challenges associated with scaling this technology."
Anthropic filed a lawsuit to counter the supply chain risk designation
Anthropic fired back at the Pentagon in March, filing a lawsuit to overturn the supply chain risk designation that threatened its government contracts. It marked the first time the "not secure enough" tag had been pinned on a domestic provider, effectively barring its tools from standard government use.
Anthropic's legal team has branded the "risk" designation a retaliatory tactic, imposed after Amodei denied the DoD's request to integrate the AI into fully autonomous weapons systems and mass domestic surveillance programs.
As earlier reported by Cryptopolitan, a California district court judge sided with Anthropic and temporarily blocked the “supply chain risk” label. Still, a federal appeals court has since overturned that stay, keeping the designation in place for now.
In the early days of the blacklist effort, President Donald Trump claimed that the "radical leftists" running the firm were trying to dictate terms to the Defense Department, arguing, "We don't need it, we don't want it, and will not do business with them again!"
At the moment, some in the DoD still believe Anthropic's refusal to cooperate fully proves the company would unplug its tech during a war, making it a liability in combat. Other administration officials, however, are eager to bury the hatchet simply to get their hands on the company's superior technology.