AI Espionage Scandal: DeepSeek, Moonshot AI, and MiniMax Allegedly Created Over 24,000 Fake Accounts to Scrape Claude’s Data

Anthropic drops bombshell accusation—claims rival AI firms orchestrated massive data extraction campaign.
The accusation paints a picture of industrial-scale data harvesting. According to the allegations, three prominent AI competitors—DeepSeek, Moonshot AI, and MiniMax—coordinated to create a staggering network of fake accounts. The reported number? Over 24,000 fabricated identities designed for one purpose: systematically extracting proprietary data from Claude's systems.
How They Allegedly Pulled It Off
The scheme reportedly involved sophisticated automation tools that bypassed standard security protocols. These weren't amateur attempts—the operation allegedly demonstrated understanding of Anthropic's infrastructure and access patterns. The fake accounts mimicked legitimate user behavior, making detection challenging until patterns emerged across thousands of coordinated actions.
The Competitive AI Arms Race Turns Dirty
This revelation exposes the cutthroat reality beneath AI's polished surface. With billions in venture capital and market valuation at stake, companies face immense pressure to accelerate development. Some apparently decided ethical boundaries were negotiable when competitive advantage hung in the balance. It's the tech equivalent of corporate espionage—just with API keys instead of trench coats.
Broader Implications for AI Development
The incident raises uncomfortable questions about self-regulation in the AI sector. If true, it suggests some players view data boundaries as suggestions rather than rules. The industry's breakneck pace creates incentives to cut corners, and this case—if substantiated—represents those incentives taken to their logical, unethical extreme.
Financial Fallout and Industry Trust
Beyond the immediate scandal lies the question of valuation impact. Investors pour money into AI startups based partly on proprietary technology claims. If those claims rest on improperly acquired data, entire funding models could unravel. It's the Silicon Valley version of finding out your growth stocks were watered with stolen fertilizer—impressive short-term results, questionable sustainability.
The AI sector's shiny facade just got a serious crack. Whether this proves an isolated incident or symptom of deeper rot remains unclear, but one thing's certain: the race for artificial intelligence supremacy just entered its messy, real-world phase.
OpenAI flags similar behavior in Washington
Earlier this month, OpenAI, the company led by Sam Altman, sent a memo to House lawmakers accusing DeepSeek of using the same distillation tactic to copy its systems. OpenAI told lawmakers that DeepSeek had tried to mimic its products by sending large volumes of prompts to its models.
Anthropic said distillation itself has valid uses: companies routinely apply it to build smaller, cheaper versions of their own models. But the same method, the company added, can be turned outward to create rival systems in a fraction of the time and at a fraction of the cost.
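None of the companies involved has published the alleged extraction pipeline, so the mechanics here are background, not evidence. In classic knowledge distillation, a small "student" model is trained to match a larger "teacher" model's output distribution rather than hard labels. A minimal sketch of the standard temperature-scaled distillation loss (function names and the example logits are illustrative, not drawn from any of the systems named above):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences across all classes, not just its top pick.
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))
    return kl * temperature ** 2

teacher = [4.0, 1.0, 0.5]
# A student that matches the teacher exactly incurs (near) zero loss;
# a mismatched student incurs a positive loss it would train to reduce.
print(distillation_loss(teacher, [4.0, 1.0, 0.5]))
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))
```

At scale, "teacher logits" are replaced by whatever a model's API exposes, often just sampled text, which is why large volumes of prompts can substitute for direct access to a model's weights.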
Synthetic data now plays a large role in training big foundation models. Developers use it because high-quality real data is limited. Many labs are also building agentic systems that can take action for users. In a July technical report, Moonshot said it used synthetic data to train its Kimi K2 model.
Anthropic said the activity raises national security concerns. The company stated that foreign labs that distill American models can feed those capabilities into military, intelligence, and surveillance systems.
Markets react as Anthropic launches new security tool
Anthropic also rolled out a new security tool for Claude on Friday in a limited research preview. The tool scans software code for weaknesses and suggests fixes. Anthropic plans to hold an enterprise briefing on Tuesday with more product announcements.
Markets reacted fast. Cybersecurity stocks fell for a second day on Monday as investors worried that new AI tools could replace older security services.
CrowdStrike dropped about 9 percent. Zscaler also fell about 9 percent. Netskope slid nearly 10 percent. SailPoint declined 6 percent. Okta, SentinelOne, and Fortinet each lost more than 4 percent. Palo Alto Networks was down 2 percent.
Cloudflare fell 7 percent after recent gains tied to Moltbot interest. The iShares Cybersecurity and Tech ETF fell almost 4 percent. The Global X Cybersecurity ETF hit its lowest level since November 2023.
The pressure extends beyond security stocks. AI tools that build apps and websites from simple prompts have shaken software companies this year.
Salesforce has lost about one-third of its value. ServiceNow has fallen more than 34 percent. Microsoft has dropped roughly 20 percent.
Bank of America said the Anthropic tool mainly threatens code scanning platforms such as GitLab and JFrog. GitLab fell 8 percent on Friday. JFrog dropped 25 percent the same day.