SlowMist Warns: AI Trading Agents Can Be Hacked to Drain Funds via Code Injection Attacks
- How Hackers Are Exploiting AI Trading Agents
- The Rise of Indirect Proximity Injection Attacks
- 5-Layer Security Framework to Protect Your AI Agent
- When AI Trading Goes Wrong: Cautionary Tales
- FAQ: Protecting Your AI Trading Agent
AI-powered trading agents are becoming increasingly popular, but cybersecurity firm SlowMist has uncovered alarming vulnerabilities that could allow hackers to exploit these agents and drain user funds. The risks stem from code injection attacks, where malicious actors manipulate AI systems to execute unauthorized transactions. This article dives into the threats, real-world incidents, and expert-recommended security measures to protect your assets.
How Hackers Are Exploiting AI Trading Agents
Traditionally, hackers needed to trick users into clicking malicious links. Now, they can bypass human interaction entirely by targeting the AI agents themselves. SlowMist researchers found that nearly 10% of plugins in platforms like Bitget's Agent Hub and OpenClaw contained two-stage malware. The first stage appears legitimate, but once installed, it downloads additional malware that steals sensitive data like browser cookies and SSH keys.
One shocking case occurred in December 2025 when Polymarket suffered a security breach through third-party authentication provider Magic Labs. Despite having two-factor authentication enabled, attackers drained over $500,000 from user accounts. SlowMist's CISO later identified a malicious copy-trading bot on GitHub specifically designed to compromise Polymarket accounts.
The Rise of Indirect Proximity Injection Attacks
SlowMist's 2026 report highlights a particularly dangerous new threat called "ClawJacked" (CVSS 8.0+). This vulnerability allows malicious websites to hijack locally running AI agents simply through browser visits. What makes this attack vector so concerning is that AI agents running 24/7 could be compromised for weeks before detection.
The security team at Bitget warns that these attacks are especially effective in skill ecosystems like Agent Hub or OpenClaw. Their monitoring of ClawHub revealed that many seemingly legitimate plugins actually contain hidden malware payloads.
5-Layer Security Framework to Protect Your AI Agent
Bitget's security team recommends a comprehensive protection strategy based on the principle of least privilege:
- Hardware Security Keys: Use FIDO2/WebAuthn as your primary login method. These cryptographic keys make phishing attacks impossible by design.
- Dedicated Sub-Accounts: Never give your AI agent access to your main account. Create separate sub-accounts with limited funds specifically for automated trading.
- IP Whitelisting: Restrict API access to approved server IP addresses only.
- .agentignore Files: Implement these to prevent your agent from accessing sensitive local files during operations.
- Human Oversight: Maintain regular monitoring of high-value transactions, as complete automation carries significant financial risks.
When AI Trading Goes Wrong: Cautionary Tales
Even without hacking, unsupervised AI trading can lead to disastrous results. A November 2025 experiment by Nov1.ai showed GPT-5 suffering from "analysis paralysis," losing over 60% of its capital in two weeks. Meanwhile, Gemini became an "overtrader," accumulating such high fees that they completely negated any profits.
These cases underscore why human oversight remains crucial in AI-assisted trading. As one BTCC analyst noted, "The most sophisticated AI still can't replace human judgment when market conditions change unexpectedly."
FAQ: Protecting Your AI Trading Agent
How serious is the AI agent hacking threat?
Extremely serious. SlowMist's research shows that nearly 10% of available plugins contain malware, and new attack vectors like ClawJacked make these systems vulnerable to simple browser-based attacks.
What's the minimum security I should implement?
At bare minimum, use hardware security keys, create dedicated sub-accounts with limited funds, and implement IP whitelisting. These three measures can prevent the majority of attacks.
Can I completely automate my trading?
While possible, it's not advisable. Even without hacking, AI systems can make costly mistakes when left unsupervised, as demonstrated by the Nov1.ai experiments.
How often should I check my AI agent's activity?
For significant trading volumes, daily checks are recommended. At minimum, review all activity weekly to catch any anomalies before they become major issues.