Menlo Security Reveals How GenAI Tools Are Reshaping Work—and Risk

Generative AI is reshaping how people work. In January 2025 alone, Menlo Security recorded 10.53 billion visits to AI-related websites. That’s a 50% jump from just 11 months earlier. And most of that activity—an estimated 80%—is happening right inside the browser.

The browser’s dominance makes sense. It’s universal, always on, and already wired into the way people interact with cloud apps and services. As GenAI tools like ChatGPT, Copilot, and Gemini become more deeply embedded into daily workflows, employees are using them for everything from drafting emails to rewriting code to summarizing documents. These tools are fast becoming the interface for getting work done.

That shift has also made the browser a new front line in enterprise security. What was once a neutral space for web access is now the default gateway to GenAI. And that’s given attackers a much larger—and more trusted—surface to exploit.

Phishing 2.0: AI as a Threat Vector

Menlo Security reports a 130% year-over-year increase in zero-hour phishing attacks, many of them powered by generative AI. These aren’t sloppy, typo-riddled emails anymore. They're sharp, convincing, and often built using real data scraped from public sources.

One tactic gaining steam is the use of fake GenAI sites. In the past year, Menlo identified nearly 600 impersonation domains, many trading on trusted names like ChatGPT, Claude, Copilot, and Gemini. Some of these sites look legitimate enough to fool even seasoned users. Others are weaponized browser extensions that steal credentials or inject malicious code under the guise of boosting productivity.

Deepfakes add another layer of deception. AI-generated video and audio can now mimic real employees with alarming accuracy. In one high-profile incident, a finance employee authorized a $25 million transfer after participating in a video call with what appeared to be company executives. They were all fakes.

“The situation will only worsen as GenAI models improve and attackers start sharing capabilities and establishing Phishing-as-a-Service models to make money,” said Krishna Vishnubhotla, Vice President, Product Strategy at Zimperium. “Organizations need to adopt real-time, AI-driven mobile security to detect and block phishing before users are compromised.”

Shadow AI: A Quiet Insider Risk

Not every risk comes from outside the organization. Some of the most damaging exposures start with well-meaning employees using AI on their own. According to a TELUS Digital survey cited in the Menlo Security report, 68% of enterprise workers use generative AI through personal accounts. More than half—57%—admit they’ve pasted sensitive company data into these tools.

“With Shadow AI, individual workers may decide to use it without telling anyone and may even hide their use from their coworkers,” said Kris Bondi, CEO and Co-Founder of Mimoto. “Its stealth usage adds to the risk associated with it.”

That’s a problem. When users operate outside sanctioned platforms, there’s no visibility, no control, and no guarantee that corporate data won’t be ingested, stored, or exposed. Many popular GenAI tools—especially free-tier versions—retain user input to train their models. What starts as a request to rewrite a memo or debug a snippet of code can quickly turn into a data leak.

Menlo’s telemetry paints a vivid picture of how often this happens. In a single month, customers triggered 155,005 copy attempts and 313,120 paste attempts on GenAI sites. These actions aren’t inherently malicious, but they point to a widespread lack of awareness around data handling and risk.

Fake Sites, Real Damage: The Explosion of AI Domains and Apps

At the same time, the rapid growth of GenAI tools online is creating a different kind of risk, one that preys on user trust and brand familiarity. Menlo Security has tracked over 6,500 AI-related domains and more than 3,000 applications. Some mimic legitimate apps, using names like “ChatGPT App” or “Search Copilot AI Assistant for Chrome” to appear trustworthy. Others show up high in search results or masquerade as browser extensions that promise productivity boosts. Once installed, these fakes can drop ransomware, steal login credentials, or silently exfiltrate sensitive data.

Attackers know people trust AI-branded tools, and they’re using that trust as bait. By exploiting search engine indexing and typosquatting on popular names like Claude, Gemini, and Copilot, they’re turning user curiosity into a vector for compromise.
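
Menlo does not publish its detection logic, but the underlying idea behind spotting typosquats is straightforward. The Python sketch below is a minimal illustration only, not Menlo's method: it compares each label of a visited hostname against a short list of well-known GenAI brand names and flags near-misses using a string-similarity ratio. The brand list, threshold, and example hostnames are assumptions chosen for demonstration.

```python
# Minimal illustration of typosquat detection for GenAI brand names.
# The brand list, similarity threshold, and example hostnames are assumptions
# for demonstration; this is not how any particular vendor does it.
from difflib import SequenceMatcher

KNOWN_BRANDS = ["chatgpt", "openai", "claude", "copilot", "gemini"]
SIMILARITY_THRESHOLD = 0.8  # close enough to a real brand to look suspicious


def looks_like_typosquat(hostname: str) -> bool:
    """Flag hostnames with a label that nearly, but not exactly, matches a known brand."""
    for label in hostname.lower().split("."):
        for brand in KNOWN_BRANDS:
            if label == brand:
                continue  # an exact brand label alone isn't evidence of squatting
            if SequenceMatcher(None, label, brand).ratio() >= SIMILARITY_THRESHOLD:
                return True
    return False


if __name__ == "__main__":
    for host in ["chat-gpt.com", "chatgtp.com", "gemini.google.com", "claudle.ai"]:
        print(host, "->", "suspicious" if looks_like_typosquat(host) else "ok")
```

A string check alone will not catch everything (an exact brand name on a lookalike domain, for instance), so a real control would pair it with signals such as domain age, registrar reputation, and certificate data.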

A Global View: Adoption vs. Regulation

GenAI usage is surging worldwide, but adoption patterns vary sharply by region. According to the report, the Americas lead in overall traffic, with U.S.-based users driving much of the volume. Meanwhile, Asia-Pacific is catching up fast—75% of organizations in China and 73% in India report using GenAI tools, according to recent surveys.

Europe, the Middle East, and Africa (EMEA) show slower uptake. The reason isn’t a lack of interest but regulation. Strict data protection laws and heightened scrutiny around transparency have created friction for organizations trying to integrate AI into everyday operations.

These regional trends offer a clear takeaway: the faster the adoption, the greater the risk of blind spots. In high-growth markets, companies must keep pace not just with deployment, but with policy. In more cautious regions, slower adoption may be a tradeoff for tighter control.

Either way, GenAI is not a one-size-fits-all journey. Enterprises need to align their AI strategies with both local regulations and internal risk tolerance.

Securing the AI-Powered Workspace

As GenAI becomes embedded in everyday workflows, securing the workspace starts at the browser. This is where most AI interactions happen and where many of the risks take root. Traditional network defenses aren’t built to detect AI-specific threats like dynamic phishing sites, malicious extensions, or shadow AI usage. What’s needed now is browser-native security with the ability to recognize and block AI-aware attacks in real time.

Equally important is policy-based access control. Organizations need to decide which GenAI tools are approved and block or limit access to everything else. That includes setting rules around copy/paste functions, upload/download permissions, and account types, especially when personal logins are involved.
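
The report stops short of prescribing a specific enforcement mechanism, but the shape of such a policy is simple to sketch. The following Python snippet is a hypothetical illustration only: it models a deny-by-default allowlist of GenAI tools with per-tool rules for paste, upload, and personal logins. The hostnames, field names, and helper function are assumptions, not any vendor's API.

```python
# Hypothetical sketch of a GenAI access policy: an allowlist of approved tools
# plus per-tool rules for paste, upload, and personal logins. All names and
# hostnames here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass


@dataclass
class ToolPolicy:
    allow_access: bool
    allow_paste: bool = False
    allow_upload: bool = False
    corporate_login_only: bool = True  # block personal accounts by default


POLICIES = {
    "chat.openai.com": ToolPolicy(allow_access=True, allow_paste=True),
    "gemini.google.com": ToolPolicy(allow_access=True),
    # Anything not listed falls through to the deny-by-default rule below.
}

DENY_BY_DEFAULT = ToolPolicy(allow_access=False)


def is_action_allowed(host: str, action: str, personal_account: bool = False) -> bool:
    """Check a user action ('access', 'paste', or 'upload') against the policy."""
    policy = POLICIES.get(host, DENY_BY_DEFAULT)
    if not policy.allow_access:
        return False
    if personal_account and policy.corporate_login_only:
        return False
    if action == "access":
        return True
    if action == "paste":
        return policy.allow_paste
    if action == "upload":
        return policy.allow_upload
    return False  # unknown actions are refused


if __name__ == "__main__":
    print(is_action_allowed("chat.openai.com", "paste"))                         # True
    print(is_action_allowed("chat.openai.com", "paste", personal_account=True))  # False
    print(is_action_allowed("unknown-ai-tool.example", "access"))                # False
```

Deny-by-default is the important design choice here: new or unrecognized AI sites stay blocked until someone explicitly reviews and approves them.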

But technology alone isn’t enough. Users need to understand how their everyday behavior—pasting sensitive data, clicking a fake AI link, logging in with a personal email—can introduce serious risk. Education around safe AI usage should be part of every company’s cybersecurity training.

“As AI systems become embedded into the tools and processes organizations depend on every day, cybersecurity plays a crucial role and is foundational to AI safety,” said Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace. “Organizations must be focused on applying cybersecurity best practices to protect models and invest in safeguards to keep AI systems protected at all stages of the AI lifecycle, to avoid unintended behaviors or potential hijacking of the algorithms.”

AI isn’t going away. As the tools evolve, so must the governance. That means putting guardrails in place now, not after something goes wrong.

Author
  • Michael Ansaldo, Contributing Writer, Security Buzz
    Michael Ansaldo is a veteran technology and business journalist with experience covering cybersecurity and a range of IT topics. His work has appeared in numerous publications including Wired, Enterprise.nxt, PCWorld, Computerworld, TechHive, GreenBiz, Mac|Life, and Executive Travel.