Google’s AI Chatbot Gemini Under Siege: Massive ‘Distillation Attacks’ Threaten AI Integrity


Published: 2026-02-12 17:20:07

Google says its AI chatbot Gemini is facing large-scale “distillation attacks”

Google's flagship AI, Gemini, is facing an unprecedented assault. Not from hackers in the traditional sense, but from a wave of automated 'distillation attacks' designed to siphon its intelligence.

How the Attacks Work

These aren't your average data breaches. Attackers flood Gemini with millions of carefully crafted queries and harvest its responses, which expose the reasoning patterns, and in some cases fragments of the training data, behind its answers. It's a brute-force extraction of artificial intellect that sidesteps conventional security layers because it relies only on the model's public interface.
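
In rough terms, a distillation attack is systematic harvesting: query the target model at scale, record its answers, then fine-tune a cheaper "student" model on those prompt-response pairs. The minimal sketch below illustrates only the harvesting step; query_target_model is a hypothetical stand-in for a hosted chatbot API, not Gemini's actual interface.

```python
import json

# Hypothetical stand-in for the target model's public endpoint. A real
# attack would send each prompt to the hosted chatbot and record its reply.
def query_target_model(prompt: str) -> str:
    return f"[teacher response to: {prompt}]"

def harvest_training_pairs(prompts, out_path="distilled_pairs.jsonl"):
    """Collect prompt/response pairs from the target model.

    The attacker later fine-tunes a smaller "student" model on these pairs,
    copying much of the target's behavior without ever touching its weights.
    """
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_target_model(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

if __name__ == "__main__":
    # Crafted queries often probe reasoning directly, e.g. step-by-step prompts.
    crafted_prompts = [
        "Explain step by step how you would value a corporate bond.",
        "Walk through your reasoning for classifying this message as spam.",
    ]
    harvest_training_pairs(crafted_prompts)
```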

The Stakes for AI Development

This exposes a critical vulnerability in the generative AI gold rush. If a titan like Google can't shield its models, what hope do smaller players have? The incident cuts to the heart of AI's commercial viability—how do you protect an asset designed to give away information? It's the ultimate irony for an industry pouring billions into intelligence it can't physically lock down. A cynic might note this is the first real 'stress test' for AI valuations, and the results are looking more like a liquidity crisis than a technological marvel.

The scramble is now on. Engineers are racing to patch systems and develop new defensive protocols. The outcome will dictate whether the next generation of AI can be both powerful and secure, or if it remains an open vault in a digital wild west.

Why are attackers doing this?

The economics are brutal. Building a state-of-the-art AI model costs hundreds of millions or even billions of dollars. DeepSeek reportedly built its R1 model for around six million dollars using distillation, while GPT-5’s development topped two billion dollars, according to industry reports. Stealing a model’s logic cuts that massive investment to almost nothing.

Many of the attacks on Gemini targeted the algorithms that help it “reason” or process information, Google said. Companies that train their own AI systems on sensitive data – like 100 years of trading strategies or customer information – now face the same threat.

“Let’s say your LLM has been trained on 100 years of secret thinking of the way you trade. Theoretically, you could distill some of that,” explained John Hultquist, chief analyst at Google’s Threat Intelligence Group.

Nation-state hackers join the hunt

The problem goes beyond money-hungry companies. APT31, a Chinese government hacking group hit with US sanctions in March 2024, used Gemini late last year to plan actual cyberattacks against American organizations.

The group paired Gemini with Hexstrike, an open-source hacking tool that can run more than 150 security programs. They analyzed remote code execution flaws, ways to bypass web security, and SQL injection attacks – all aimed at specific US targets, according to Google’s report.

Cryptopolitan covered similar AI security concerns previously, warning that hackers were exploiting AI vulnerabilities. The APT31 case shows those warnings were spot-on.

Hultquist pointed to two major worries: adversaries operating across entire intrusions with minimal human help, and the automated development of attack tools. “These are two ways where adversaries can get major advantages and move through the intrusion cycle with minimal human interference,” he said.

The window between discovering a software weakness and getting a fix in place, called the patch gap, could widen dramatically. Organizations often take weeks to deploy defenses. With AI agents finding and testing vulnerabilities automatically, attackers could move much faster.

“We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed,” Hultquist told The Register.

The financial stakes are enormous. IBM’s 2024 data breach report found that intellectual property theft now costs organizations $173 per record, with IP-focused breaches jumping 27% year-over-year. AI model weights represent the highest-value targets in this underground economy – a single stolen frontier model could fetch hundreds of millions on the black market.

Google has shut down accounts linked to these campaigns, but the attacks keep coming from “throughout the globe,” Hultquist said. As AI becomes more powerful and more companies rely on it, expect this digital gold rush to intensify. The question isn’t whether more attacks will come, but whether defenders can keep up.
