Agentic AI browsers are being sold as the next productivity leap, tools that can summarize content, move across tabs, connect tasks across apps and services, and act on a user’s behalf. But new research from Zenity Labs suggests those same features may create a new opening for abuse, allowing attackers to manipulate how the AI interprets and carries out tasks.
Zenity’s PleaseFix and PerplexedAgent vulnerability chain is not a conventional software bug. It exposes a deeper security problem in how agentic systems interpret instructions and decide what to trust.
Agentic browsers do more than display information. Products such as Perplexity’s Comet are designed to help users complete workflows, which means they are not just rendering content but interpreting language and acting on it. That creates a trust problem that conventional browsers were not built to handle. The system has to distinguish between instructions that come directly from the user and instructions embedded in the content it is reading. When that boundary breaks down, a webpage, email, document, or calendar invite can stop being just content and start functioning like a prompt.
How the Attack Works
According to Zenity’s findings, an attack can begin with a legitimate-looking calendar invite containing hidden instructions aimed at the AI agent rather than the human recipient. Once the invite is accepted, the agent may begin carrying out actions it believes are authorized, including browsing local directories, opening files, reading documents, and transmitting data to attacker-controlled infrastructure.
Zenity said its research also identified a second exploit path, one that abused agent-authorized workflows inside authenticated sessions to access password-manager data, steal credentials, and in some cases enable full account takeover without directly exploiting the password manager itself.
“The AI agent attack surface doesn’t require malware, exploits, or elevated access,” said Ram Varadarajan, CEO at Acalvio. “It just needs content the agent reads, which is exactly what agents are built to do.”
Why Enterprises Should Pay Attention
For enterprises, the risk extends well beyond a single browsing session. As autonomous agents gain access to corporate documents, collaboration tools, shared drives, customer records, and internal applications, a successful prompt-injection attack could expose or exfiltrate sensitive data without the signals defenders usually associate with a breach.
That leaves security teams defending against activity that may resemble normal workflow behavior. Instead of exploiting software in the usual sense, an attacker may only need to shape the content an agent consumes. Defending against that kind of abuse will require stronger trust boundaries, prompt isolation, tighter permissions, and monitoring built for LLM-driven actions.
The problem also appears to extend beyond any single browser vendor. Zenity’s disclosure points to architectural issues that may affect a wider class of AI-driven browsers and assistants, especially those built to ingest external content and take action on a user’s behalf.
“AI-native browsers are introducing system-level behaviors that traditional browsers have intentionally restricted for decades,” said Randolph Barr, Chief Information Security Officer at Cequence Security. “That shift breaks long-standing assumptions about how secure a browser environment is supposed to be.”
The concern may extend beyond managed enterprise deployments. Employees often test new tools on personal devices before those habits carry into the workplace through browser sync, BYOD access, or remote work setups. AI-enabled browsers may also be easier for attackers to identify through distinctive APIs, extensions, network patterns, and other agentic behaviors.
What Organizations Can Do
For organizations considering these tools in sensitive environments, the baseline requirements are coming into focus. Security teams will need visibility into what the browser can access, control over embedded extensions and permissions, and safeguards that treat AI agents as privileged actors rather than convenience features.
Zenity said Perplexity fixed the browser-side issue before disclosure, and that the local-file attack no longer works as shown.
Agentic browsing promises to save time by letting systems interpret intent and carry out the next steps automatically. But that model works only if the system can reliably verify whose intent it is executing. Until then, the same browser meant to help users work faster may also give attackers a quieter way in.