YouTube Purges 11,000+ State-Backed Propaganda Accounts in Massive Crackdown
YouTube just dropped the banhammer—hard. The platform axed over 11,000 accounts tied to government-backed influence campaigns. No more hiding behind algorithmic smoke and mirrors.
Propaganda purge in full swing
The takedowns hit coordinated networks spreading state-sponsored narratives. Think of it as digital spring cleaning—except with more geopolitical tension and fewer dust bunnies.
Platform plays whack-a-mole with bad actors
This isn’t YouTube’s first rodeo. The Google-owned service has been quietly ramping up enforcement, though some might argue it’s about as effective as a decentralized exchange’s KYC process.
One cynical footnote: At least these accounts weren’t shilling shitcoins.
YouTube has been cracking down on channels since 2022
YouTube first began blocking RT’s official channels in March 2022, soon after Russia’s forces moved into Ukraine.
These removals form part of the ongoing work by Google’s Threat Analysis Group to counter “coordinated influence operations” and global disinformation networks.
In its report, the company also noted that it dismantled propaganda campaigns originating from Azerbaijan, Turkey, Romania, Israel, Ghana, and Iran, all of which were aimed at influencing political rivals.
Several of these operations focused on spreading competing storylines about the Palestine-Israel conflict, with each side promoting its own perspective.
“The findings from the most recent update are in line with our expectations of this regular and ongoing work,” said a spokesperson from YouTube.
Earlier this year, during the first quarter, Google had already removed more than 23,000 accounts for similar reasons.
Meta purged 10 million accounts recently
In the broader picture, Meta revealed last week that it had purged roughly 10 million accounts through mid‑2025. Those profiles were accused of masquerading as well-known creators, part of Meta’s effort to reduce “spammy” content and boost the authenticity of its Facebook and Instagram feeds.
Additionally, Facebook suspended about half a million accounts flagged for inauthentic or spammy behavior. The company said it demoted comments and limited content distribution from those accounts to curtail their ability to monetize.
Meta defines unoriginal material as images or clips reused without crediting the originator. It now employs technology to detect duplicate videos and restrict their reach.
This crackdown on fake and repetitive content coincides with Meta’s intensified investment in artificial intelligence.
On Monday, CEO Mark Zuckerberg announced plans to invest “hundreds of billions of dollars” into AI computing capacity, aiming to launch the company’s first AI supercluster next year.
Meanwhile, YouTube itself updated its ad‑eligibility rules this month to exclude videos that are mass‑produced or overly repetitive from earning revenue.
Some users misread the change as a ban on AI‑generated content, but YouTube clarified that the policy is designed solely to target unoriginal, spam-like videos rather than AI work in general.