World’s First AI War Exposed: Targeting Errors, Disinformation, and Critical Accountability Gaps in Modern Combat

A stark warning emerges from the frontlines of the world's first AI-driven conflict, revealing systemic targeting errors and a critical accountability gap. In a 24-hour offensive, US forces struck approximately 1,000 sites, a rate of roughly 42 per hour, using the Maven Smart System, which enabled a 20-person team to perform tasks that previously required 2,000 personnel. By late 2024, the integration of a large language model, akin to consumer AI chatbots, marked a historic first for targeting technology in warfare. The US military is now investigating a controversial strike in Minab amid reports that the AI system may have operated on outdated intelligence, raising urgent questions about algorithmic accountability and the veracity of the data driving lethal decisions.
Who is responsible when the AI gets it wrong?
Emilia Probasco, a former Navy officer and senior fellow at the Center for Security and Emerging Technology, said on The Four Cast podcast that responsibility falls on the commander who gave the order; that is how the military works. She said the "black box" problem, the inability to see how an AI system reached its answer, is "an ongoing area of research, not a solved one."
Before the war, Anthropic, the company whose technology sits inside Maven, got into a contract dispute with the Defense Department over two things: whether AI is reliable enough for life-or-death calls, and whether using AI to connect scattered data points turns it into a mass surveillance tool.
Probasco said both concerns hold up, but noted “the awkwardness of a private company drawing lines around how a military conducts its operations.”
Holland Michel said the conversation keeps drifting toward the worst-case picture: a machine that picks targets and fires with no human involved. That risk is real, he said, but it is not what is happening now.
“The harder, more immediate work,” he said, “is making AI systems more transparent and ensuring that humans who rely on their outputs are making genuinely informed decisions, not simply deferring to whatever the machine suggests.”
AI-generated war content was also spreading fast online
BBC Verify tracked AI-made videos and doctored satellite images about the conflict that pulled in hundreds of millions of views.
Timothy Graham, a digital media researcher at the Queensland University of Technology, said: “The scale is truly alarming and this war has made it impossible to ignore now.” He added, “What used to require professional video production can now be done in minutes with AI tools. The barrier to creating convincing synthetic conflict footage has essentially collapsed.”
X said it would cut creators from its payment scheme if they posted AI-made war footage without a label. Mahsa Alimardani, a researcher at the Oxford Internet Institute who covers Iran, called it “a notable signal that they’ve noticed that this is a big problem.” Meta and TikTok did not reply when asked if they planned to do the same.