Deepfake Crypto Scams Evolve: Now Smarter, Nearly Undetectable
Fake Elon just shilled another memecoin—and wallets got drained faster than a DeFi exploit.
How deepfakes are hijacking crypto
AI-generated CEOs 'announcing' token burns. Cloned influencers pushing phishing links. These aren't crude scams—they're algorithmic con artists trained on hours of real footage. Security firms report a 300% spike in deepfake fraud since 2023, with losses crossing $2B this year alone.
The new playbook
Scammers now bypass 2FA by cloning voices for SIM swaps. They hijack verified Twitter accounts to livestream fake giveaways. One recent fake Michael Saylor video even passed manual review on a Tier 1 exchange—until the 'BTC bonus' turned out to be a drainer.
Why crypto is the perfect target
Pseudonymous teams. Irreversible transactions. A culture that worships anonymous founders. It's like Wall Street's boiler rooms got an AI upgrade—except here, the SEC won't even pretend to investigate.
The arms race begins
Chainalysis rolled out deepfake-detection nodes. Ledger added AI voiceprint verification. But as one white-hat hacker told us: 'The tech's advancing faster than compliance departments can say "rug pull".'
AI is Making Crypto Scams Easier Than Ever Before
Crypto crime is already running at a high level, and sophisticated fraud and scams are rising with it. Criminals have long used social engineering to defraud users, but AI tools have opened a new attack vector.
Specifically, hackers are using AI deepfakes to run these scams, and they are succeeding even against prominent targets.
A friend just suffered a $2m+ hack due to sophisticated social engineering
They impersonated @pauliepunt of Plasma using apparently AI generated audio which perfectly matched his profile offering an advisory role at Plasma.
During the pitch which perfectly described the…
More specifically, the hacker impersonated the CEO of Plasma, a layer-1 blockchain built for stablecoins that became notably popular in recent months thanks to its $500 million ICO.
Deepfakes have already become a major vector for crypto crime, accounting for $200 million in losses in Q1 2025 alone. Thanks to the unprecedented pace of recent AI development, these deepfakes are more accessible than ever, enabling crypto scams with a very low barrier to entry.
The Criminal Applications for AI are Increasing
Even where deepfakes aren’t involved, AI has been powering plenty of crypto scams. The low-skill nature of these new strategies is particularly appealing to criminals. Case in point: cybersecurity watchdogs recently identified a new wallet drainer published by the anonymous “Kodane,” which may have been entirely produced by AI.
When examining the code, security experts found the malware technically proficient, albeit obviously AI-generated. The human scammers behind it were not nearly so capable, naming the program “ENHANCED STEALTH WALLET DRAINER.” That ineptitude highlights the real danger of this attack vector: AI now lets unskilled criminals deploy capable malware.
A Look on the Bright Side
So, between AI malware and new deepfakes, is there any way for crypto users to avoid scams? As a recent programming event demonstrated, these AI systems are far more proficient at offense than defense.
In a recent open call to hack AI agents for cash prizes, the agents’ defenses proved shockingly poor:
We deployed 44 AI agents and offered the internet $170K to attack them.
1.8M attempts, 62K breaches, including data leakage and financial loss. Concerningly, the same exploits transfer to live production agents… (example: exfiltrating emails through calendar event)
Across sectors like shopping, travel, and healthcare, even the most secure agent was still penetrated by 1.5% of attack attempts.
Many of the identified vulnerabilities were universal and transferable, cutting through all agents regardless of their base model. Results like those would be catastrophic for a real business.
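The figures quoted above are easy to sanity-check. A minimal back-of-the-envelope sketch in Python, using only the numbers reported in the challenge (the daily attack volume is a hypothetical assumption, added purely for illustration):

```python
# Reported red-teaming figures: 44 agents, 1.8M attack attempts,
# 62K successful breaches; the most robust agent was still
# breached by 1.5% of attempts against it.
attempts = 1_800_000
breaches = 62_000

overall_rate = breaches / attempts
print(f"Overall breach rate: {overall_rate:.1%}")  # ~3.4% of attempts succeed

# Even the best agent's 1.5% breach rate scales badly in production.
# daily_attacks is a hypothetical traffic level, not a reported figure.
best_rate = 0.015
daily_attacks = 10_000
print(f"Expected daily breaches at that volume: {best_rate * daily_attacks:.0f}")
```

The point of the arithmetic: a per-attempt success rate that sounds small still implies a steady stream of breaches once an agent faces real-world attack volume.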
All of which is to say: there’s a clear and present reason to keep human developers on the defensive side of this battle. Even when scammers deploy cheap AI malware and deepfakes, they’ll be up against human security teams, and dedicated personnel should be able to foil them.
In the aforementioned Plasma case, pre-existing countermeasures nearly stopped the attack; the malware only got through after the victim attempted the download twice.
In other words, a Web3 business with a human security team should be relatively safe. Crypto users remain vulnerable mostly as individuals.
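One concrete countermeasure individuals can apply against trojaned downloads of this kind is checksum verification: hashing the file you received and comparing it to the digest the publisher announces through a separate, trusted channel. A minimal Python sketch (the expected digest would come from the real publisher; nothing here is specific to the Plasma incident):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Compare a download's digest to the publisher's announced checksum."""
    return sha256_of(path) == expected_hex.lower()
```

A mismatch doesn’t tell you what the file is, only that it is not what the publisher shipped, which is exactly the signal a victim of a deepfake-driven "advisory role" pitch would need before running anything.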