AI Is Reshaping Risk Faster Than Strategy Can Catch Up


Ponemon Institute has released the 2026 Cost of Insider Risks Global Report, sponsored by Dtex, to explore the state of insider threats in the modern landscape. The report reveals that insider risk costs have risen to $19.5M annually, up from $17.4M in 2024. This steady upward trend signals systemic failure that goes far beyond isolated incidents. Insider risk in 2026 and onward is an issue of enterprise resilience, not just a security metric.

Negligence Is the Real Threat Multiplier

According to the report, insider negligence accounts for $10.3M (53%) of total losses, a year-over-year increase of 17%. This means more than half of insider risk arises from users who either do not know best practices or do not take the time to follow them, rather than from malicious or purposeful actions. Through ignorance or carelessness, insiders can inadvertently cause data exposure, enable infiltration, disrupt operations, and more.

The growing use of AI in enterprise environments accelerates data access, sharing, and exposure, further compounding these challenges. As employees adopt generative AI tools without governance guardrails in place, the risk of negligent mistakes leading to preventable harm increases. Together, rising AI usage and employee negligence are driving a shift in insider risk from malicious intent to amplified user behavior.

The Three Faces of Insider Risk

There are three main forms of insider risk outlined in the report. The previously mentioned negligent insiders, ignorant of the dangers or simply failing to take proper precautions, cause damage with everyday actions that can have exponential consequences. Malicious insiders, by contrast, deliberately act to harm the organization from within. These actors account for 27% of incidents and $4.7 million in costs.

The third category is outsmarted insiders, accounting for 20% of incidents and $4.5 million in costs. These unsuspecting users have their credentials stolen and accounts compromised by malicious outside actors. Identity compromise still behaves like an insider problem as attackers use the access and permissions granted by infiltrating legitimate accounts to achieve privilege escalation, lateral movement, and further malicious activity.

AI Has Changed Behavior—But Not Strategy

Among those surveyed for the report, 92% say that generative AI has fundamentally changed data access and sharing, while only 13% have formally integrated AI into business strategy. This points to a significant governance lag: innovation races ahead of policy, creating security gaps wherever employees adopt emerging technology. The absence of official AI policies and guidelines does not stop employees from using AI for business operations.

AI adoption without executive alignment leads to many risks associated with shadow AI and ungoverned, unmonitored technology use. Sensitive enterprise data can be exposed to unsecured AI tools, enabling both accidental leakage and deliberate manipulation by internal and external malicious actors.
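One common guardrail against this kind of leakage is screening prompts before they leave the organization for an external AI tool. The sketch below is purely illustrative and not from the report; the patterns shown are minimal examples, and a real deployment would use a far broader detection set.

```python
import re

# Illustrative guardrail: redact obvious sensitive patterns before a prompt
# is sent to an external AI tool. These two patterns are examples only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."))
# → Summarize the complaint from [EMAIL REDACTED], SSN [SSN REDACTED].
```

Redaction of this kind addresses accidental leakage only; deliberate exfiltration and manipulation require the monitoring and governance controls discussed below.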

AI also amplifies externally motivated risks that leverage insider access, such as credential theft achieved through phishing and other social engineering tactics. “The modern landscape also includes synthetic insiders—AI-powered impersonations that exploit human trust with startling realism,” says Dr. Margaret Cunningham, Vice President of Security & AI Strategy at Darktrace, a global leader in AI for cybersecurity. “With AI-generated voices, deepfake videos, and synthetic personas, outsiders can convincingly impersonate trusted employees.”

AI Agents: The New Insider?

The report shows that 44% believe AI agents will increase the risk of data theft. However, only 19% classify AI agents as equivalent to human insiders, creating a dangerous gap between perceived risk and policy treatment. The growing presence of agentic AI in enterprise systems presents significant dangers that organizations are rarely properly equipped to handle.

Machine identities acting autonomously within enterprise systems introduce new and greater risks that traditional security measures are not designed to protect against. Agentic AI tools are not capable of internalizing the security principles and discernment that human users possess and are trained to exercise. They can be manipulated to cause major damage from within an organization due to extensive permissions and access combined with the shortcomings of machine logic.
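One practical response to over-permissioned machine identities is a deny-by-default gate on agent actions: nothing is permitted unless explicitly granted. The following is a minimal sketch of that idea; the agent names and action names are hypothetical, not drawn from the report or any specific product.

```python
# Illustrative deny-by-default authorization for agent actions.
# Agent IDs and action names below are hypothetical examples.
ALLOWED_ACTIONS: dict[str, set[str]] = {
    "crm-assistant": {"read_contact", "draft_email"},  # narrowly scoped agent
    "report-bot": {"read_metrics"},
}

def authorize(agent_id: str, action: str) -> bool:
    """Permit an agent action only if it is explicitly allowlisted.

    Unregistered identities and unlisted actions are denied by default,
    limiting the blast radius if an agent is manipulated.
    """
    return action in ALLOWED_ACTIONS.get(agent_id, set())

print(authorize("crm-assistant", "draft_email"))     # → True
print(authorize("crm-assistant", "delete_records"))  # → False (never granted)
print(authorize("unknown-agent", "read_metrics"))    # → False (unregistered)
```

Narrow, explicit grants like these do not make an agent trustworthy, but they cap how much damage a manipulated agent can do from within.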

Rethinking Insider Risk in the AI Era

Mitigating insider risk in modern enterprise environments demands comprehensive visibility across human and non-human identities. Traditional insider risk strategies have focused heavily on preventing threats driven by human error and human malice, relying on measures like thorough employee cybersecurity training and identity and access management (IAM). Static role-based controls, however, are ineffective against agentic AI risks, underscoring the need for behavior-based threat detection.
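To make the behavior-based idea concrete, here is a deliberately simplified sketch (not from the report) that baselines an identity's own activity and flags sharp deviations. Real products model many signals; this uses a single z-score on daily event counts purely for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag activity that deviates sharply from an identity's own baseline.

    `history` holds daily event counts (e.g., file accesses) for one
    identity, human or machine; `today` is the current day's count.
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # any deviation from a flat baseline is notable
    return (today - mu) / sigma > threshold  # z-score check

# A service account that normally touches ~40 files suddenly touches 500.
baseline = [38, 42, 40, 41, 39, 43, 40]
print(is_anomalous(baseline, 500))  # → True
print(is_anomalous(baseline, 44))   # → False
```

The key property is that the baseline belongs to the identity itself rather than to a role, so the same check applies equally to an employee account and to an autonomous agent.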

It is crucial for organizations to develop governance frameworks for AI-assisted workflows, ensuring that AI tool usage is managed as securely as any other technology. Agentic AI operates differently from other technological tools or human users, demanding a reevaluation of security efforts. Making the shift from reactive incident investigation to predictive risk management is important in protecting against insider risk in the AI era.

The Insider Redefined

The newest Ponemon Institute research highlights one of the most significant truths in insider risk in 2026 and onward: that the insider is no longer just an employee. The increasing adoption of AI tools and agents has dissolved the boundary between user and tool, creating many non-human identities without adequate visibility and management. Organizations must align strategy, governance, and detection models as the measurable cost of inaction continues to grow.

Author
  • PJ Bradley, Contributing Writer, Security Buzz
    PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.