Anthropic’s AI Models Simulate DeFi Hacks, Uncover Critical Smart Contract Flaws Before Attackers Do
AI just turned the tables on DeFi hackers. Anthropic's latest models are now simulating attacks to find vulnerabilities first—proactive security that could save billions.
The New Penetration Testers
Forget white-hat hackers in hoodies. These AI agents autonomously probe smart contracts, mimicking the exact strategies real attackers use. They don't just look for known bugs; they invent new exploit paths, stress-testing protocols in ways human auditors might miss. It's security through simulated offense.
Finding Flaws Before They Find You
The approach cuts reaction time from days to minutes. Instead of waiting for a post-mortem after funds vanish, protocols can get a pre-mortem—a detailed report of how they could be drained. It bypasses the traditional, slower audit cycle, offering continuous scrutiny as code evolves. For an industry that lost over $3 billion to exploits last year, it's not just an upgrade; it's a necessity.
Of course, the real test is whether projects will pay for security before a breach rather than lawyers after one—a classic case of finance prioritizing hope over prudence.
This shifts the entire security paradigm. If AI can reliably find these flaws first, it doesn't just protect assets; it builds the foundational trust DeFi needs to go mainstream. The race to secure the future of finance is on, and the machines are now running the drills.
AI performance and economic implications
Unlike conventional cybersecurity benchmarks, SCONE-bench measures the dollar value of exploits. Researchers simulated attacks on contracts deployed on blockchains including Ethereum, Binance Smart Chain, and Base, estimating losses using historical token prices.
Across the 405 benchmark problems, 10 AI models successfully exploited 207 contracts, corresponding to potential losses of $550.1 million. The researchers then ran the same 10 models against 34 contracts that were exploited in the wild after March 1, a cutoff chosen so the exploits postdate the models’ training data. Together, Opus 4.5, Sonnet 4.5, and GPT-5 exploited 19 of these contracts, with Opus 4.5 alone yielding $4.5 million.
Together, these findings set a baseline for how much economic impact AI agents could have in real-world attacks.

One test discovered a flaw in a token contract where a public function could be called repeatedly. The AI exploited this to inflate token balances, generating a simulated profit of about $2,500. Another vulnerability allowed the AI to withdraw fees it shouldn’t have had access to, creating a potential gain of around $1,000 in the simulation.
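To make the first of those flaws concrete, here is a minimal sketch in Python; the contract and function names are hypothetical, not taken from the study. The pattern is a balance-increasing function left public and unguarded, which an attacker can simply call in a loop.

```python
# Minimal sketch of the flaw described above (hypothetical names, not the
# audited contract): a token whose balance-increasing function is public
# and unguarded, so anyone can call it repeatedly to mint from nothing.

class VulnerableToken:
    def __init__(self):
        self.balances: dict[str, int] = {}
        self.claimed: set[str] = set()

    def claim_reward(self, caller: str, amount: int) -> None:
        # BUG: no access control and no one-time guard -- any caller can
        # invoke this as often as they like and inflate their own balance.
        self.balances[caller] = self.balances.get(caller, 0) + amount

    def claim_reward_fixed(self, caller: str, amount: int) -> None:
        # One possible fix: record claimants so each address claims once.
        if caller in self.claimed:
            raise PermissionError("reward already claimed")
        self.claimed.add(caller)
        self.balances[caller] = self.balances.get(caller, 0) + amount


token = VulnerableToken()
for _ in range(1_000):                     # attacker loops the public call
    token.claim_reward("0xattacker", 100)
print(token.balances["0xattacker"])        # 100000, minted from nothing
```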
Scanning all 2,849 contracts with GPT-5 cost about $1.22 per scan on average, while the average revenue per successful exploit was $1,847, leaving a small net profit of $109. Newer models are also more efficient: token consumption fell by more than 70%, pointing to faster and cheaper exploits ahead.
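As a back-of-the-envelope check on those economics (the break-even count below is derived from the reported figures, not reported in the study):

```python
# Back-of-the-envelope economics from the figures reported above.
contracts_scanned = 2_849
cost_per_scan = 1.22            # USD, GPT-5 average
revenue_per_exploit = 1_847     # USD, average per successful exploit

total_scan_cost = contracts_scanned * cost_per_scan
break_even = total_scan_cost / revenue_per_exploit

print(f"total scan cost: ${total_scan_cost:,.2f}")   # ~$3,475.78
print(f"exploits to break even: {break_even:.1f}")   # ~1.9
```

In other words, at these averages, roughly two successful exploits cover the cost of scanning the entire contract set.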
Rapid evolution of AI capabilities
Researchers found that the money AI could make from exploiting smart contracts doubled roughly every 1.3 months over the past year. Smarter reasoning, better tooling, and longer-horizon planning are driving this growth. As a result, developers have less and less time to fix vulnerabilities before AI can take advantage of them.
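To put that doubling time in perspective, compounding it over a year implies roughly a 600-fold increase; this is simple arithmetic from the reported figure:

```python
# Compounding a 1.3-month doubling time over twelve months.
doubling_time_months = 1.3
growth_factor = 2 ** (12 / doubling_time_months)
print(f"implied annual growth: {growth_factor:,.0f}x")  # ~600x
```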
Open-source protocols, whose code is publicly visible, are the first to come under AI scrutiny, but proprietary software will likely face the same pressure as models improve. The same tools can also work defensively, finding and fixing security issues before they are exploited.
Broader blockchain implications
The study has implications beyond the benchmark. Ethereum developers have explained that long-dormant standards like the HTTP 402 “Payment Required” status code, combined with Ethereum Improvement Proposal (EIP) 3009, could let AI agents handle stablecoin payments automatically. Kevin Leffew and Lincoln Murr said these autonomous agents could end up using Ethereum more than human users do.
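For context, EIP-3009 lets a token holder sign a transferWithAuthorization message off-chain (as EIP-712 typed data) so that a relayer, or an autonomous agent, can submit the transfer on-chain without the payer holding ETH for gas. The sketch below builds that payload in Python; all addresses and values are placeholders, and actual signing would use any EIP-712-capable wallet or library.

```python
import os
import time

# Sketch of an EIP-3009 transferWithAuthorization payload, following the
# EIP's typed-data definition. Addresses and values are placeholders.
typed_data = {
    "types": {
        "EIP712Domain": [
            {"name": "name", "type": "string"},
            {"name": "version", "type": "string"},
            {"name": "chainId", "type": "uint256"},
            {"name": "verifyingContract", "type": "address"},
        ],
        "TransferWithAuthorization": [
            {"name": "from", "type": "address"},
            {"name": "to", "type": "address"},
            {"name": "value", "type": "uint256"},
            {"name": "validAfter", "type": "uint256"},
            {"name": "validBefore", "type": "uint256"},
            {"name": "nonce", "type": "bytes32"},
        ],
    },
    "primaryType": "TransferWithAuthorization",
    "domain": {
        "name": "USD Coin",                      # token-specific; placeholder
        "version": "2",
        "chainId": 1,
        "verifyingContract": "0x" + "00" * 20,   # token contract; placeholder
    },
    "message": {
        "from": "0x" + "11" * 20,                # payer (placeholder)
        "to": "0x" + "22" * 20,                  # payee (placeholder)
        "value": 1_000_000,                      # 1 USDC at 6 decimals
        "validAfter": 0,
        "validBefore": int(time.time()) + 3600,  # valid for one hour
        "nonce": "0x" + os.urandom(32).hex(),    # random 32-byte nonce
    },
}
```

Because the signed message, not the payer, authorizes the transfer, an agent holding such a signature can settle a stablecoin payment autonomously, which is what makes the pairing with HTTP 402 interesting.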
Meanwhile, earlier this year, Binance co-founder Changpeng Zhao warned that many AI crypto projects focus on token launches rather than practical utility, a trend reflected in a 61% market decline for AI-related cryptocurrencies since December 2023.
In a March 17 post on X, CZ wrote:
“On AI agents, I have an unpopular opinion: While crypto is the currency for AI, not every agent needs its own token. Agents can take fees in an existing crypto for providing a service. Launch a coin only if you have scale. Focus on utility, not tokens. 🙏”
Anthropic’s research shows that AI can independently exploit smart contracts and cause measurable financial losses. Developers and investors need to address vulnerabilities and focus on practical applications rather than speculative tokens.

