Nation-State Hackers from Russia, Iran, North Korea, and China: AI Use Limited to Basic Tasks

Published: 2026-02-25 18:55:08

Nation-state hackers from Russia, Iran, North Korea, and China are using AI for basic tasks only

Forget the sci-fi hype—state-backed cyber operatives are using AI for grunt work, not digital domination.

The Mundane Reality of Malicious AI

Intelligence reports reveal a surprising gap between fear and function. Advanced persistent threat (APT) groups linked to major geopolitical players aren't building Skynet. They're automating the boring stuff: crafting more convincing phishing lures, sifting through stolen data faster, and managing basic infrastructure. It's a productivity tool for espionage, not a weapon of mass disruption.

Why the Sophistication Ceiling?

Attribution risks, operational security, and sheer complexity put a lid on ambitions. Deploying cutting-edge, bespoke AI models leaves fingerprints and requires stable, high-resource environments—hard to come by when you're dodging sanctions or operating from a bunker. So they stick with proven, off-the-shelf tools. It's the cyber equivalent of using a power drill instead of inventing one.

The Finance Angle: All Hype, No ROI (Yet)

This slow-burn adoption mirrors the crypto space's own struggle with AI narratives—endless promises of revolutionary trading bots and autonomous DAOs, yet most projects just slap a chatbot on a dashboard and call it innovation. The real disruptive force isn't a flashy AI hack; it's the slow, steady erosion of trust in legacy systems that cryptocurrencies already exploit. Maybe the hackers are smarter than the VCs—they're not betting the farm on unproven tech.

The takeaway? The digital cold war is being fought with upgraded tools, not new rulebooks. And in both hacking and high finance, the biggest threat often isn't a technological leap, but the efficient application of old tricks at a new scale.

Fake obituaries and forged documents part of harassment campaign

They created a fake obituary and gravestone photos to spread false rumors of one dissident's death. Those rumors did surface online in 2023, a Chinese-language Voice of America article confirmed. Ben Nimmo, who leads investigations at OpenAI, described the effort as industrialized harassment of critics of the Chinese Communist Party across multiple channels.

Using ChatGPT as a record-keeping tool is what exposed the operation. The operative used ChatGPT as a journal to track the covert network, while other tools generated most of the content that was actually spread on social media. OpenAI banned the user after discovering the activity.

OpenAI investigators matched the user's own descriptions against real online activity: the user had described faking the dissident's death by creating the phony obituary and gravestone photos for posting online.

In another case, a ChatGPT user asked the system to draft a plan for damaging the reputation of incoming Japanese Prime Minister Sanae Takaichi by stirring up anger over American tariffs. ChatGPT refused. But in late October, when Takaichi took office, hashtags attacking her and complaining about tariffs appeared on a popular forum for Japanese graphic artists.

The OpenAI report also covered several scam operations from Cambodia that used the platform for romance and investment fraud, plus influence campaigns linked to Russia targeting Argentina and Africa.

Microsoft report shows similar basic usage patterns

Microsoft published a separate report jointly with OpenAI examining how nation-state actors from Russia, North Korea, Iran, and China are experimenting with large language models to support cyberattack operations. The two companies shut down five state-affiliated actors by closing their accounts.

The Microsoft report found these actors mainly used the services for simple jobs: searching publicly available information, translating content, debugging code, and running basic programming tasks. No significant or novel attacks using the models have been observed so far.

This gap between fear and reality comes amid fierce competition between Washington and Beijing over control of the technology, whose role in military and economic affairs has become a major point of contention. The Pentagon recently told another AI company, Anthropic, that it has until Friday to remove certain safety features from its model or risk losing a defense contract.

Microsoft said it is developing principles to reduce the risk of these tools being misused by nation-state groups and criminal organizations. The principles include identifying and disrupting malicious users, notifying other service providers, collaborating with other stakeholders, and maintaining transparency.

