Salesforce Vulnerability Chain Exposes AI Agent Risks

ForcedLeak Salesforce AI agent

The role of AI agents continues to expand rapidly within enterprise workflows. A case in point is the Salesforce Agentforce solution. The AI-powered platform allows enterprises to customize autonomous agents to streamline customer engagements and to support sales, service, marketing, and e-commerce functions.

Last week, Agentforce users received quite a jolt as Noma Labs—the vulnerability research arm for Noma Security—announced the discovery of ForcedLeak. The critical severity (CVSS 9.4) vulnerability chain can potentially allow attackers to exfiltrate customer data through indirect prompt injection.

ForcedLeak specifically impacts Agentforce customers who enable Web-to-Lead functionality, which generates HTML forms to capture website visitor data and uses AI agents to create lead records from external data. In response to ForcedLeak, Salesforce immediately released patches that prevent Agentforce from sending agent outputs to untrusted URLs.

What Is ForcedLeak?

With a score of 9.4, ForcedLeak falls into the CVSS (Common Vulnerability Scoring System) critical severity category. An unprivileged attacker can trigger vulnerabilities in this category to compromise the confidentiality, integrity, and availability of an affected system.

As a sequence of multiple vulnerabilities in an interconnected chain, attackers can use the ForcedLeak indirect prompt injection method to attempt to exploit Web-to-Lead vulnerabilities in succession. Attackers can then gain high-level administrative access to systems and steal data.

“ForcedLeak is a chain of vulnerabilities that leads to disclosure of sensitive customer contact information and sales pipeline data,” says Mayuresh Dani, Security Research Manager, at Qualys Threat Research Unit. “The attack starts with indirect prompt injection, where attackers embed malicious instructions in external data, leading to the circumvention of AI model boundaries and exfiltration of data via a Content Security Policy (CSP) bypass. The disclosure of this vulnerability shows us the vastness of the attack surface that new and emerging technologies have.”

This vulnerability chain also highlights the unique risks posed by AI-powered agents. Unlike traditional prompt-response systems, AI agents process external data autonomously—often with access to knowledge bases, executable tools, and internal memory.

Since the ForcedLeak chain exploits the Salesforce Web-to-Lead functionality, external attackers can craft malicious inputs that trigger indirect prompt injection attacks and exfiltrate sensitive customer data.

How AI Agents Differ from Chatbots

As enterprises adopt AI-driven platforms like Salesforce Agentforce, the Noma Security findings emphasize the urgent need for security teams to rethink trust boundaries. They also need to look closely at their data flows and governance models.

AI agents are not just chatbots with better logic. Compared to traditional Large Language Models (LLMs), semi-autonomous AI agents vastly expand enterprise attack surfaces.

Sales and marketing workflows are particularly vulnerable as they rely on external lead data, which AI agents ingest and process without sufficient guardrails. Key components that get exposed include knowledge bases, tools, memory, and autonomy functions.

Implications for Enterprises

For enterprises that enable the Web-to-Lead function in Agentforce to drive customer acquisition processes, ForcedLeak presents a classic case study in AI agent risk. Large volumes of sensitive data are at stake.

This includes prospect data, sales pipelines, and PPI (personally identifiable Information) belonging to customers. ForcedLeak also exploits the boundary confusion between external data and trusted internal commands. As a result, existing security models fall short. That’s because they focus on protecting against external threats, leaving them vulnerable to AI-driven evasion techniques.

For CISOs and security architects, the lessons to learn from ForcedLeak include realizing the potential business impact of an exploit. Agentforce exposure could lead to compliance violations or competitive intelligence theft of customer records, sales pipeline data, or third-party integration data that reveals business strategies.

The vulnerability can also enable lateral movement via Salesforce integrations with other business systems, and basic employee interactions can trigger time-delayed attacks.

Next Steps for Security Teams

In response to ForcedLeak and to avoid disruptions to sales and marketing workflows, security teams need to review their governance and guardrail strategies while also testing and validating Agentforce agent interactions. It’s also a good idea to work closely with Salesforce to share the responsibility of securing Agentforce.

As a starting point, the team at Noma Labs recommends applying the Salesforce recommended actions to enforce trusted URLs for Agentforce and its Einstein AI platform, along with these proactive measures:

  • Audit lead data for suspicious submissions containing unusual instructions or formatting.
  • Implement strict input validation and prompt injection detection on user-controlled data fields
  • Sanitize data from an untrusted source.

“It's advisable to secure the systems around the AI agents in use, which include APIs, forms, and middleware, so that prompt injection is harder to exploit and less harmful if it succeeds,” says Chrissa Constantine, Senior Cybersecurity Solution Architect at Black Duck. “True prevention is around maintaining configuration and establishing guardrails around the agent design, software supply chain, web application, and API testing, as these are the complementary controls to consider to achieve true scale application security.”

A Wake-Up Call for AI Agent Adoption

As enterprises embrace agentic AI, the emergence of ForcedLeak underscores the importance of proactive defensive measures. Elad Luz, the Head of Research at Oasis Security, recommends that security teams know each agent’s access and avoid toxic combos while also owning your allowlist and verifying ownership. It’s just as critical to sanitize external input before the agent sees it and to track your vendors that use agentic AI as well as their advisories.

In the end, it’s critical to realize that just as AI is a powerful tool for accelerating internal workflows, it’s also a powerful tool for threat actors. Getting to know how it works on both ends and how to manage its uses is vital to fully leveraging the power of AI.

Author
  • Contributing Writer, Security Buzz
    After majoring in journalism at Northeastern University and working for <i>The Boston Globe</i>, Jeff Pike has collaborated with technical experts in the IT industry for more than 30 years. His technology expertise ranges from cybersecurity to networking, the cloud, and user productivity. Major industry players Jeff has written for include Microsoft, Cisco, Dell, AWS, and Google.