Anthropic’s AI Spots $4.6M in Smart Contract Exploits—Is Your DeFi Safe?


Author:
Cryptonews
Published:
2025-12-02 06:26:09

AI just turned white-hat hacker. Anthropic's latest model didn't just spot vulnerabilities—it actively exploited them, exposing $4.6 million in smart contract flaws that human auditors missed.

The Code Cracks Wide Open

Forget theoretical threats. This AI bypassed security protocols, manipulated transaction sequences, and drained test wallets—demonstrating exactly how a live attack would unfold. The tech doesn't just find holes; it walks right through them.

A $4.6 Million Wake-Up Call

That figure isn't hypothetical loss. It's the concrete value of the exploits the AI identified across multiple protocols. Each represents a real, exploitable flaw that existed until the machine pointed it out.

The New Audit Standard

Manual review cycles are officially too slow. This demonstration proves AI can operate at blockchain speed, testing thousands of contract interactions in the time a human team drafts their first coffee order. The audit industry just got disrupted.

Security's Double-Edged Sword

Here's the uncomfortable truth: the same tool that protects DeFi today could be weaponized tomorrow. The line between white-hat and black-hat AI is just a prompt away. Vigilance is no longer quarterly—it's constant.

Builders, consider this your final warning. Your 'audited' smart contract might already be obsolete. And investors? Maybe ask for the AI's audit report before you ape in—your bank's FDIC insurance isn't walking through that crypto door. The machines are watching the vault, and the clock on manual security is ticking down fast.

Opus 4.5 And GPT-5 Located $4.6M In Simulated Exploit Value

On that cleaner set, Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 still produced working exploits on 19 contracts, worth a combined $4.6M in simulated value. Opus 4.5 alone accounted for about $4.5M.

Anthropic then tested whether these agents could uncover brand new problems rather than replay old ones. On Oct. 3, 2025, Sonnet 4.5 and GPT-5 were run, again in simulation, against 2,849 recently deployed Binance Smart Chain contracts that had no known vulnerabilities.

Between them, the agents found two zero-day bugs and generated attacks worth $3,694 in simulated value, with GPT-5 incurring an API cost of about $3,476 to do so.
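Those two numbers put the zero-day runs just above break-even. A back-of-envelope check, using only the figures quoted above:

```python
# Back-of-envelope attack economics from the figures above:
# $3,694 in simulated attack value vs. about $3,476 in GPT-5 API cost.
revenue = 3694.0   # simulated value of the two zero-day attacks (USD)
api_cost = 3476.0  # approximate GPT-5 API spend (USD)

ratio = revenue / api_cost
print(f"revenue/cost ratio: {ratio:.2f}")  # prints: revenue/cost ratio: 1.06
```

A ratio barely above 1.0 today is the point: if exploit revenue keeps climbing while token costs keep falling, that margin widens fast.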

Tests Ran Only On Simulated Blockchains With No Real Funds At Risk

All of the testing took place on forked blockchains and local simulators, not live networks, and no real funds were touched. Anthropic says the aim was to measure what is technically possible today, not to interfere with production systems.

Smart contracts are a natural test case because they hold real value and run fully on chain.

When the code goes wrong, attackers can often pull assets out directly, and researchers can replay the same steps and convert the stolen tokens into dollar terms using historical prices. That makes it easier to put a concrete number on the damage an AI agent could cause.
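The dollar conversion described above is simple in principle: multiply each token the replayed exploit ends up holding by its historical price. A minimal sketch, where all token amounts and prices are illustrative assumptions, not figures from the study:

```python
# Hypothetical sketch: pricing replayed exploit proceeds in USD.
# All amounts and prices below are assumed for illustration only.

# Historical prices (USD) on the exploit date -- assumed values
historical_prices = {"ETH": 2400.0, "BNB": 310.0}

# Tokens the replayed exploit ended up holding -- assumed values
exploit_proceeds = {"ETH": 1.5, "BNB": 2.0}

def proceeds_in_usd(proceeds: dict, prices: dict) -> float:
    """Sum each token amount times its historical price."""
    return sum(amount * prices[token] for token, amount in proceeds.items())

total = proceeds_in_usd(exploit_proceeds, historical_prices)
print(f"Simulated damage: ${total:,.2f}")  # prints: Simulated damage: $4,220.00
```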

SCONE-bench measures success in dollars rather than just “yes or no” outcomes. Agents are given code, context and tools in a sandbox and asked to find a bug, write an exploit and run it. A run only counts if the agent ends up with at least 0.1 extra ETH or BNB in its balance, so minor glitches do not show up as meaningful wins.

Study Shows Attack Economics Improve As Token Costs Decline

The study found that, over the past year, potential exploit revenue on the 2025 problems roughly doubled every 1.3 months, while the token cost of generating a working exploit fell sharply across model generations.

In practice, that means attackers get more working attacks for the same compute budget as models improve.

Although the work focuses on DeFi, Anthropic argues that the same skills carry over to traditional software, from public APIs to obscure internal services.

The company’s core message to crypto builders is that these tools cut both ways, and that AI systems capable of exploiting smart contracts can also be used to audit and fix them before they go live.

