SEC Chair Atkins Declares War: Deploying AI to Hunt Down Criminals Who Abuse AI

The financial watchdog is turning the criminals' own weapon against them.
### The Regulator's New Algorithmic Enforcer
SEC Chair Paul Atkins just laid down a new doctrine for the digital age. In a stark warning to Wall Street and crypto's shadowy corners, he announced the Commission's most aggressive tech pivot yet: deploying advanced artificial intelligence to identify, track, and dismantle networks that misuse AI for fraud and market manipulation.
It's a classic case of fighting fire with a more sophisticated, government-funded fire. The strategy aims to parse through mountains of trading data, communications, and complex transaction chains—tasks human teams could never scale. Think pattern recognition that spots the synthetic whisper of a pump-and-dump scheme or the digital fingerprints of AI-generated disinformation campaigns designed to move markets.
### The Arms Race Goes Mainstream
This isn't just about chasing bad actors; it's an admission. The cat's out of the bag—sophisticated AI tools are already in the wild, being weaponized for financial crime. Atkins' vow signals that the regulatory battlefield has permanently shifted. Compliance can't be manual anymore. Surveillance must be predictive, not reactive.
The move targets everything from algorithmic wash trading in crypto to AI-forged documents in traditional finance. The subtext is clear: if you're using a neural network to cheat, the SEC is training a bigger one to catch you. It’s a daunting proposition, raising immediate questions about oversight of the overseer's algorithms and the privacy of legitimate market data.
### A Cynical Take from Finance
Of course, the real test will be if this shiny new AI enforcer ever goes after the big banks' opaque algos—or if it just becomes another expensive tool to hassle retail crypto traders while the old-guard quants get a polite nod. Some might call that a feature, not a bug.
The era of AI-vs-AI financial regulation is here. Whether it makes markets safer or just creates a more expensive, automated cat-and-mouse game remains to be seen. One thing's certain: the criminals just lost their tech advantage.
### How is the SEC using AI to protect investors and catch fraudsters?
The Securities and Exchange Commission (SEC), under the leadership of Chairman Paul S. Atkins, is implementing a strategy to “fight AI with AI.” The initiative is centered on the SEC’s AI Task Force, which was established to give the entire agency access to technological advances and to ensure that the Commission keeps pace with the rapid evolution of the private financial sector.
The Commission is using algorithms to detect market misconduct, including fraud and manipulative trading schemes. These tools can find anomalies in trading volume or price movements with greater speed and precision than traditional methods.
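To make that idea concrete, one standard statistical technique for this kind of screening is a rolling z-score over trading volume. The sketch below is a toy illustration of the general approach, not the SEC's actual tooling; the window and threshold values are hypothetical:

```python
import statistics

# Toy anomaly screen: flag days whose trading volume deviates sharply
# from the trailing mean. Illustrative only -- not the SEC's tooling.
def flag_volume_anomalies(volumes, window=5, threshold=3.0):
    """Return indices whose z-score vs. the prior `window` days
    exceeds `threshold`."""
    flagged = []
    for i in range(window, len(volumes)):
        prior = volumes[i - window:i]
        mean = statistics.mean(prior)
        stdev = statistics.stdev(prior)
        if stdev == 0:
            continue  # flat history: z-score undefined
        z = (volumes[i] - mean) / stdev
        if abs(z) > threshold:
            flagged.append(i)
    return flagged

# A quiet stock that suddenly spikes -- the classic pump signature.
daily_volume = [100, 102, 98, 101, 99, 100, 2500, 2400, 97]
print(flag_volume_anomalies(daily_volume))  # flags the first spike day
```

Real surveillance systems layer many such signals (price, order flow, cross-account correlation) and feed the hits to human analysts rather than acting on any single statistic.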
AI also helps the agency’s staff more efficiently identify material omissions or misleading statements in documents filed by thousands of public companies, allowing the SEC to react to public input and market changes in real time.
Chairman Atkins has noted that the SEC’s objective remains to protect investors regardless of the tools it uses. This time, the agency is specifically looking out for signs of “AI washing.” This term is used to describe companies that make false, exaggerated, or misleading claims about their use of artificial intelligence to boost their stock price or attract investors.
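For illustration only: a crude first-pass screen for AI washing might measure how densely a filing leans on AI buzzwords and flag outliers for human review. The function names and threshold below are hypothetical, not an SEC method; a real review would weigh the claims against disclosed facts rather than count words:

```python
import re

# Hypothetical "AI washing" pre-screen: count AI buzzwords per 100 words
# of filing text. This only surfaces candidates for human review.
AI_TERMS = re.compile(
    r"\b(artificial intelligence|machine learning|neural network|"
    r"deep learning|proprietary ai|ai[- ]driven|ai[- ]powered)\b",
    re.IGNORECASE,
)

def ai_claim_density(text):
    """AI buzzwords per 100 words of text."""
    words = len(text.split())
    hits = len(AI_TERMS.findall(text))
    return 100.0 * hits / words if words else 0.0

def flag_for_review(text, per_100_words=2.0):
    """True if buzzword density exceeds a (hypothetical) threshold."""
    return ai_claim_density(text) >= per_100_words

filing = ("Our AI-driven platform uses machine learning and a proprietary "
          "AI engine built on deep learning to predict market moves.")
print(flag_for_review(filing))  # True
```

The enforcement cases discussed below followed exactly this pattern: sweeping AI claims that, on inspection, had no substance behind them.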
### What are the risks of using AI in government regulation?
One of the primary concerns about AI in government is the potential for black-box decision-making, where an algorithm makes a choice without a clear, human-understandable rationale. Chairman Atkins clarified that human involvement is necessary at every stage of the SEC’s risk assessment program.
“Due process demands it,” Atkins noted during a recent Financial Stability Oversight Council (FSOC) roundtable. An algorithm might identify a suspicious pattern or an anomaly, but it lacks the ability to determine the credibility of a witness or assess the intent of a market participant. Consequently, the final judgment remains with the Commissioners and professional staff.
Leading AI developers, including Google (Gemini), OpenAI, and Anthropic, have previously released reports detailing how malicious entities are exploiting their platforms. For example, OpenAI recently reported on disrupting state-sponsored threat actors who used AI to research vulnerabilities and generate phishing content. Similarly, Google’s Threat Analysis Group has tracked the use of Large Language Models (LLMs) in social engineering attacks designed to steal financial credentials.
The SEC will compel companies to disclose AI-related information if there is a substantial likelihood that a reasonable shareholder would find it important for an investment decision.
In early 2024, the Commission settled charges against two investment advisers for making false and misleading statements about their use of AI. In those cases, the firms claimed to use AI to analyze millions of data points to predict market moves, but the SEC found those claims to be false.