The Human Factor in Cybersecurity

Cybersecurity headlines tend to focus on the outside world—state-sponsored hackers, ransomware gangs, and criminal networks. But Fortinet’s latest Insider Risk Report, produced in partnership with Cybersecurity Insiders, shifts the focus to what’s happening inside organizations. The study found that 77 percent of organizations experienced insider-related data loss in the past 18 months, revealing that the biggest threats often come from within their own ranks.

The findings mark a shift in how risk is understood. External attacks may grab attention, but internal mistakes and misuse are where much of the real damage occurs. Fortinet’s research shows that many of these incidents aren’t acts of sabotage at all. Rather, they’re the predictable result of everyday employees mishandling data, bypassing policies, or falling for scams.

The Cost of Everyday Mistakes

Insider incidents have become both common and costly. Fortinet’s research found that 41 percent of the most serious insider-related breaches cost organizations between $1 million and $10 million. And those numbers don’t account for the less tangible losses that follow, such as disrupted operations, damaged relationships, and shaken customer confidence.

Most of these breaches weren't driven by malice. Sixty-two percent stemmed from human error or compromised accounts rather than intentional wrongdoing. A rushed employee clicks the wrong link. Someone uploads a sensitive document to the wrong cloud folder. Another forwards files to a personal email account to finish work at home. These small lapses, repeated across thousands of users, can open doors wider than any external hacker could.

“The danger of the insider threat begins with trust,” said Chad Cragle, Chief Information Security Officer at Deepwatch. “A valid login acts as the ultimate skeleton key. An insider doesn’t need to bypass defenses; they are the defense. Their actions blend seamlessly with normal operations, camouflaged in plain sight, making detection extremely difficult.”

These incidents take a toll on the organization. When a breach traces back to an employee’s action—even when unintentional—it unsettles teams. Managers start to question how well they really know their own processes, and coworkers grow more cautious about sharing information. It takes more than fixing systems to restore that trust; it requires reestablishing confidence in the people and workflows that keep the business running.

Blind Spots and Bottlenecks

For many organizations, the problem is a lack of visibility. Seventy-two percent of security leaders in Fortinet’s survey said they can’t fully see how employees interact with sensitive data across endpoints, SaaS apps, and generative AI tools. That’s a massive blind spot at a time when a single misplaced upload can move confidential data beyond company control.

Part of the issue is what Fortinet describes as a false sense of security. Companies invest heavily in traditional Data Loss Prevention (DLP) systems, yet only 47 percent believe those tools actually work. DLP was designed for a perimeter-based world, not one where files live in dozens of clouds and employees toggle between sanctioned and unsanctioned platforms.

Many security teams now find themselves drowning in noise. False positives pile up, analysts chase repetitive alerts, and fatigue sets in. Organizations have outgrown basic monitoring but haven’t built the unified visibility or behavioral context needed to move beyond it. They’re stuck between compliance and clarity, checking boxes but still blind to what’s really happening inside their networks.

GenAI and the New Insider Risk Frontier

Generative AI has added a new dimension to insider risk, one that's easy to overlook. Employees experimenting with tools like ChatGPT or other GenAI assistants often don't realize they may be sharing sensitive data outside the company's control. What feels like a harmless productivity boost—asking an AI to draft a report or summarize customer feedback, for example—can quietly expose proprietary information to third-party systems.

Fortinet found that 56 percent of security leaders worry about this kind of GenAI-related data exposure, yet only 12 percent believe their organizations are ready to manage it. That gap reflects a deeper problem: many companies don’t have clear policies or visibility into how AI tools are used. As a result, data can flow through unsanctioned channels—what’s known as “shadow SaaS”—and even routine browser activity becomes a potential leak path.
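To make the kind of guardrail at stake here concrete, below is a minimal sketch of a pre-submission check that screens text before it leaves for an external GenAI service. Everything in it is an illustrative assumption rather than part of Fortinet's report or any specific product: real deployments rely on far richer detectors, and the patterns shown are placeholders.

```python
import re

# Illustrative patterns only; production systems use much richer detection
# (trained classifiers, exact-match fingerprints, document labels).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def screen_prompt(text):
    """Return (allowed, findings); block when any pattern matches."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(text)]
    return len(findings) == 0, findings

prompt = "Summarize this customer record: SSN 123-45-6789, plan tier gold."
allowed, findings = screen_prompt(prompt)
if not allowed:
    # Prints: Blocked outbound GenAI prompt: matched ['ssn']
    print(f"Blocked outbound GenAI prompt: matched {findings}")
```

Even a filter this simple shows why visibility matters: without a chokepoint on outbound prompts, there is nothing for a policy to inspect in the first place.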

The Rise of Behavior-First Security

Fortinet’s report points to a shift already underway. Instead of treating DLP as a static line of defense, organizations are beginning to use it as a window into how people actually handle information. The goal is to spot deviations before they snowball into full-blown breaches.

That means moving away from reactive enforcement toward proactive detection. Rather than flagging a violation after the fact, behavioral analytics can learn the normal activity patterns for each user and surface anomalies in real time.
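As a concrete illustration of that idea, the sketch below builds a rolling per-user baseline for a single activity metric and flags values that deviate sharply from it. The metric, window size, and z-score threshold are illustrative assumptions; production systems model many signals per user at once.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class UserBaseline:
    """Rolling baseline of one activity metric per user (e.g., MB
    uploaded per hour); flags values far outside the user's norm."""

    def __init__(self, window=30, threshold=3.0):
        self.window = window        # samples retained per user
        self.threshold = threshold  # z-score treated as anomalous
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, user, value):
        """Record an observation; return True if it deviates sharply
        from this user's established baseline."""
        samples = self.history[user]
        anomalous = False
        if len(samples) >= 10:  # score only once the baseline is trustworthy
            mu, sigma = mean(samples), stdev(samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        samples.append(value)
        return anomalous

# A user who normally moves ~5 MB/hour suddenly uploads 500 MB.
baseline = UserBaseline()
for mb in [4, 6, 5, 5, 7, 4, 6, 5, 5, 6]:
    baseline.observe("alice", mb)
print(baseline.observe("alice", 500))  # True: surfaced for review, not auto-blocked
```

The key design choice is that each user is compared against their own history, not a global average, so the same 500 MB transfer that is routine for a backup administrator gets flagged for an account that has never done it before.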

Demand is growing for end-to-end visibility across data channels—tools that provide contextual insight in real time. Paired with AI-aware policy enforcement and governance that bridges security, HR, and compliance teams, these systems create a unified picture of user intent. The goal is a faster, more intelligent response that focuses on understanding behavior instead of simply blocking it.

From Control to Context

Fortinet closes its Insider Risk Report with five best practices for readiness:

  • Establish unified visibility across data channels.
  • Adopt behavior-based analytics.
  • Modernize DLP through integration with insider-risk tools.
  • Implement AI-aware policies that adapt in real time.
  • Strengthen collaboration between security, HR, and compliance teams.

Together, these steps form a roadmap for turning fragmented oversight into coordinated resilience.

The report’s larger message is that insider risk isn’t about blame.

“Human behavior is contextual, emotional, and adaptive,” said Dr. Margaret Cunningham, Vice President of Security & AI Strategy at Darktrace. “Stress, disengagement, or pressure to meet deadlines can push employees to cut corners, use unauthorized tools, or take shortcuts that put data at risk. These actions don’t always stem from malicious intent, but they can have equally damaging consequences.”

The challenge ahead is one of understanding, not control. As generative AI accelerates and data flows through more hands and systems than ever, security leaders need to know not just what happened, but why. The next generation of insider-risk programs will be built on understanding human behavior, not simply policing it.

Author
  • Michael Ansaldo, Contributing Writer, Security Buzz
    Michael Ansaldo is a veteran technology and business journalist with experience covering cybersecurity and a range of IT topics. His work has appeared in numerous publications including Wired, Enterprise.nxt, PCWorld, Computerworld, TechHive, GreenBiz, Mac|Life, and Executive Travel.