Google Exposes North Korea, Iran, and China’s AI-Powered Cyberattack Surge

Nation-state hackers are weaponizing AI, and traditional defenses are struggling to keep pace.
Google's threat intelligence team dropped a bombshell report today: North Korean, Iranian, and Chinese state-linked actors are now using AI models to supercharge phishing campaigns, reconnaissance, and malware development. The AI arms race just went hot.
These aren't script kiddies; these are resource-rich adversaries automating social engineering at scale. Imagine deepfake voicemails from your 'CEO' demanding urgent wire transfers, or malware that rewrites its own code to slip past signature detection. Scary? You bet. Profitable? Ask about the billions stolen in crypto hacks last year.
While Fortune 500 CISOs scramble to update playbooks, one irony stings hardest: the same TensorFlow stack powering your fraud detection can just as easily train the adversary's models. Maybe next time, Wall Street will invest in security the way it invests in AI hype cycles.
State-backed actors turn to AI
In its latest threat intelligence update, Google detailed how an Iranian group known as TEMP.Zagros, also known as MuddyWater, used Gemini to generate and debug malicious code disguised as academic research, with the end goal of developing custom malware.
In doing so, it inadvertently exposed key operational details that allowed Google to disrupt parts of its infrastructure.
China-linked actors were found using Gemini to improve phishing lures, perform reconnaissance on targeted networks, and research lateral movement techniques once inside compromised systems. In some cases, they misused Gemini to explore unfamiliar environments, such as cloud infrastructure, Kubernetes, and vSphere, indicating an effort to expand their technical reach.
North Korean operators, meanwhile, have been observed probing AI tools to enhance reconnaissance and phishing campaigns. One North Korean threat group, known for cryptocurrency theft campaigns built on social engineering, also attempted to use Gemini to write code for stealing cryptocurrency.
Google was able to mitigate these attacks and close the accounts involved in them.
A new frontier for cyber defense
Anthropic’s report, released in August 2025, provides supporting evidence of AI misuse by state-linked actors. The company found that North Korean operatives had used its Claude model to pose as remote software developers looking for jobs.
They reportedly used Claude to generate resumes, code samples, and answers to technical interviews to secure freelance contracts abroad.
While Anthropic’s findings exposed the fraudulent use of AI to land jobs, a foothold that could have paved the way for larger hacking operations inside the hiring organizations, they also support Google’s conclusion that bad actors are systematically probing AI tools for any extra advantage.
The findings are a new headache for the global cybersecurity community. As both reports show, the same features that make AI models and applications powerful productivity tools can also be turned into potent instruments of harm, and as the technology advances, attackers will adapt and their campaigns will only grow more sophisticated.
Governments and technology companies are beginning to respond, and continued collaboration among all stakeholders will be the way forward in mitigating this abuse.