Why Enterprise Permissions Are AI's Most Dangerous Inheritance


Broken access control has led the OWASP Top 10 for six straight years, affecting 100% of tested applications in 2025. Yet until now, no one had measured actual permission usage in production to answer a basic question: what happens to permissions once they’re granted?

The advent and growth of agentic AI tools have made this measurement urgent rather than purely academic, prompting Oso, an agent permissions posture management company, to partner with Cyera Research on the Least Privilege Research Report 2026. Drawing on data from 2.4 million workers and 3.6 billion permissions, the research is the first to examine the true scope of the broken permissions problem.

What Fifty Years of “Least Privilege” Actually Looks Like in Practice

According to the report, 96% of permissions go unused over a 90-day window, and even active users exercise only 17% of their granted privileges. Workers touch just 9% of the sensitive data within their reach, yet 31% are empowered to modify or delete data, and 13% hold access to sensitive information such as regulated personally identifiable information (PII), financial data, and health records.

The structural root cause is that more than 80% of software-as-a-service (SaaS) access is managed through broad static profiles, with administrative privileges assigned to nearly 30% of users in some environments. Under pressure to keep teams productive and unblocked, many organizations give little sustained thought to how roles are assigned and managed.

Why It Didn’t Matter Until Now

The implicit bargain of overpermissioning was that, while sloppy, it was survivable: human judgment, accountability, and working hours acted as natural friction against exercising excessive privileges. A person who is granted 100 permissions but uses 17 is a messy desk, not a crisis. There is a ceiling on how much damage any one person can do before they need to rest, and human users generally avoid harmful actions because they understand the consequences for the organization and themselves.

For these reasons, cleaning up permissions was always a lower priority than shipping product, and the economics of inaction held for decades. As long as humans were the primary threat actors, overprivileged accounts were not a pressing problem to solve.

The Agent Inflection Point: When Dormant Access Goes Live

AI agents introduce outsized risk when it comes to overpermissioning, as they lack the judgment and slowness of action that made this system tolerable. They run 24/7 at machine speed, able to take hundreds of actions per second with no concept of the consequences of their behavior. “AI tooling is nondeterministic and fallible by design,” says Serana Warren, Information Security Officer at Nutrient, a Raleigh, North Carolina-based platform for document processing and workflow automation. “This is not a weakness, but central to the inherent value proposition of heuristic computing as a concept.”

Susceptibility to prompt injection and hallucination means agents will confidently execute the wrong action while believing they’re being helpful, unaware of the extensive damage they may be causing. Most organizations simply hand agents a copy of human permission sets by default, overlooking that human judgment and behavioral limits are what kept those permissions from being a threat. Granting excessive privileges to agentic tools activates the dormant 96% of permissions at machine speed, introducing extreme risk.

It’s Already Happening: Incidents from the Field

The dangers of agentic AI with high levels of privilege are not hypothetical, but already borne out in real-world incidents. The AWS Kiro incident of December 2025, where a coding agent tasked with a minor bug fix deleted and recreated a production environment, led to a 13-hour outage and demonstrated the power that these tools have to cause extreme damage.

The risk is also highlighted by Anthropic’s November 2025 disclosure of state-sponsored actors found to be deploying AI agents against 30+ global targets—including banks, tech companies, and government agencies—at thousands of requests per second. The common thread between these incidents is that there is no breach or hacking required, only manipulation and leveraging of the access that was legitimately granted to agentic AI tools.

How Enterprise Systems Quietly Accumulate Excess

This research and incidents in the wild show that permission sprawl is not accidental, but a structural problem built into enterprise configuration. Salesforce deployments illustrate the trap of static profiles, where more than 80% of access is managed in broad bundles that the platform itself recommends against.

This leads to a cycle of accumulation—permissions granted to unblock a project or fix an issue are rarely revoked, compounding the problem over time. Permissions like “View All Data” and “Modify All Data” override normal sharing controls, and these permissions are inherited and used without hesitation by AI agents.
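Auditing for these override permissions is straightforward once the grants are exported. The sketch below is a minimal illustration, not from the report: it assumes permission-set metadata has already been pulled into plain dictionaries, with keys that mirror Salesforce’s “View All Data” and “Modify All Data” names but entirely hypothetical data.

```python
# Minimal sketch: flag permission sets that grant org-wide overrides.
# The input format is hypothetical -- in practice this data would come
# from a metadata export or an API query against the SaaS platform.

OVERRIDE_PERMS = {"ViewAllData", "ModifyAllData"}

def flag_overrides(permission_sets):
    """Return names of permission sets granting broad override permissions."""
    flagged = []
    for ps in permission_sets:
        granted = {perm for perm, on in ps["permissions"].items() if on}
        if granted & OVERRIDE_PERMS:
            flagged.append(ps["name"])
    return flagged

sample = [
    {"name": "Support_Agent",
     "permissions": {"ViewAllData": False, "EditCases": True}},
    {"name": "Legacy_Admin_Clone",
     "permissions": {"ViewAllData": True, "ModifyAllData": True}},
]

print(flag_overrides(sample))  # -> ['Legacy_Admin_Clone']
```

Anything this check flags is exactly the kind of grant an inherited agent identity would exercise without hesitation.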

Industry Response and Emerging Security Architecture

With agentic AI on the rise and defenders becoming more aware of the risks of overpermissioned access, organizations are developing better measures and infrastructure for the problem. Agent permissions posture management is emerging as a new security category distinct from traditional identity and access management. Addressing this challenge requires a governance shift: from static configuration reviewed annually to continuous visibility and control at deployment time.

Software and security leaders explain why access models built for humans do not map cleanly onto AI agents. “The biggest mistake companies are making with AI agents is assuming yesterday's identity model will hold,” according to Mark Hillick, CISO at Brex. On a similar note, 1Password CTO Nancy Wang says, “When agents are handed broad, static permissions, the unused ones don’t just sit there; they quietly expand the attack surface.”

What to Do: A Framework for the Pre-Deployment Window

Accounting for the increased risk introduced by overprivileged AI agents demands mitigating actions early and often. Permission sprawl should be audited before any deployment, and dedicated agent identities should be created with minimal, purpose-built permissions rather than copies of human credentials.
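One way to build purpose-built agent identities is to derive the grant from what a workload actually exercised during an observation window, rather than cloning a human profile. The sketch below is illustrative only; the permission names and log format are hypothetical, not taken from the report.

```python
# Sketch: derive a least-privilege agent grant from observed usage,
# instead of copying a human user's full permission set.
# All permission names and the log format are illustrative.

def minimal_grant(human_grant, usage_log, extras=()):
    """Keep only permissions actually exercised in the observation
    window, plus any explicitly justified extras, and never grant
    anything the source identity did not itself hold."""
    used = {event["permission"] for event in usage_log}
    return (used | set(extras)) & set(human_grant)

human_grant = {"read:tickets", "write:tickets", "delete:tickets",
               "read:billing", "export:billing"}
observed = [{"permission": "read:tickets"},
            {"permission": "write:tickets"}]

agent_grant = minimal_grant(human_grant, observed)
print(sorted(agent_grant))  # -> ['read:tickets', 'write:tickets']
```

In this toy example the agent ends up with 2 of the 5 inherited permissions, echoing the report’s finding that most of a human grant sits unused.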

Start agents in read-only mode and log every action from day one, and configure SIEM rules to catch out-of-scope queries and privilege escalation. It is also crucial to triage incidents by blast radius, expand access incrementally, and run red team exercises. Elevating the conversation to the board level helps secure company-wide buy-in and execution of these measures.
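The read-only-plus-logging step can be enforced with a thin gateway in front of the agent’s tool calls. This is a minimal sketch under assumed names (the action allowlist and gateway class are hypothetical, not an API from the report or any vendor):

```python
# Sketch: a read-only gateway that routes every agent action through a
# policy check and logs it. Names and the allowlist are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

READ_ACTIONS = {"get", "list", "search"}  # hypothetical read-only allowlist

class ReadOnlyGateway:
    """Executes agent actions, blocking writes while in read-only mode."""

    def __init__(self, read_only=True):
        self.read_only = read_only

    def execute(self, action, handler, *args):
        allowed = (not self.read_only) or action in READ_ACTIONS
        # Every action is logged, allowed or not, so SIEM rules can
        # alert on out-of-scope attempts.
        log.info("action=%s allowed=%s", action, allowed)
        if not allowed:
            raise PermissionError(f"blocked write action: {action}")
        return handler(*args)

gw = ReadOnlyGateway()
gw.execute("list", lambda: ["t-1", "t-2"])   # permitted
try:
    gw.execute("delete", lambda: None)        # blocked and logged
except PermissionError as e:
    print(e)  # -> blocked write action: delete
```

Flipping `read_only` to `False` for one action at a time is one way to implement the incremental access expansion described above.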

The Window Is Open, but Closing

The risk revealed in this report is already very real, but there is still time to execute plans for mitigating it. Most AI agents are still in pilot, and the moment to fix permissions is before these tools are deployed at scale, not after an incident occurs. The permissions crisis is not new, but AI agents are the first force powerful enough to weaponize it at enterprise scale. The authorization layer must be treated as a primary accelerant for safe AI adoption, not an afterthought bolted on after the first breach.

Author
  • Contributing Writer, Security Buzz
    PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.