Why Overconfidence Is the Biggest Cyber Risk of 2025


Amid rising threats and rapidly advancing technology, 2025 has become a year defined by massive human overconfidence in cybersecurity. There is a staggering disparity between how secure users and IT leaders believe their organizations to be and how susceptible they actually are to attacks. Arctic Wolf’s Human Risk Behavior Snapshot report lays out the data behind this gap, offering detailed insight into the state of cybersecurity overconfidence. Despite continuing vulnerability, three-quarters of IT leaders believe their organizations are secure.

Clicking Toward Catastrophe

Employees and IT leaders alike continue to fall for malicious links as phishing attacks grow more sophisticated, targeted, and technologically advanced. While 76% of IT leaders surveyed in the report say they are “confident their organization won’t fall for a phishing attack,” nearly two-thirds (65%) acknowledge that they personally have clicked on potentially dangerous links. Alarmingly, almost one in five have chosen not to report doing so: 8% failed to report one such incident, and 9% failed to report more than one.

Nonreporting could be due to fear of reprisal or simply embarrassment over falling for a phishing link, but it has severe implications for visibility, incident response, and trust. Organizations cannot effectively protect against the risks of phishing attacks if they are not even informed when these risks go from being a hypothetical threat to a real and present danger.
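As a rough illustration of what lightweight, human-centric tooling can look like, the sketch below applies a few common-sense heuristics to a link before a user follows it. The allowlist, thresholds, and checks are hypothetical examples for illustration only, not Arctic Wolf’s methodology or a substitute for a real secure email gateway.

```python
# Illustrative sketch only: lightweight heuristics for flagging suspicious
# links before a user clicks. Domains and thresholds are hypothetical.
from urllib.parse import urlparse
import ipaddress

TRUSTED_DOMAINS = {"example.com", "sharepoint.com"}  # hypothetical allowlist


def looks_suspicious(url: str) -> list[str]:
    """Return a list of reasons a link deserves extra scrutiny."""
    reasons = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    # Raw IP addresses in place of a hostname are a classic phishing tell.
    try:
        ipaddress.ip_address(host)
        reasons.append("link points to a bare IP address")
    except ValueError:
        pass

    # Punycode (xn--) hosts can hide lookalike Unicode domains.
    if host.startswith("xn--") or ".xn--" in host:
        reasons.append("internationalized (punycode) hostname")

    # Very deep subdomain chains often mimic 'brand.something.evil.tld'.
    if host.count(".") >= 4:
        reasons.append("unusually deep subdomain chain")

    # Credentials embedded in the URL ('user@host') are rarely legitimate.
    if "@" in url.split("//", 1)[-1].split("/", 1)[0]:
        reasons.append("credentials embedded in URL")

    # Plain HTTP to an unknown domain is worth a second look.
    if parsed.scheme == "http" and host not in TRUSTED_DOMAINS:
        reasons.append("unencrypted link to an untrusted domain")

    return reasons


if __name__ == "__main__":
    for link in ("http://192.0.2.10/login", "https://portal.example.com/reset"):
        print(link, "->", looks_suspicious(link) or "no obvious red flags")
```

None of these checks catch a well-crafted lure on its own; the point is that small, visible nudges paired with easy reporting do more for visibility than punishing the click after the fact.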

AI: The New Insider Threat

The most significant technological driver of increased risk and attacks is the ongoing AI explosion. A shocking 60% of IT leaders and 41% of end users somewhat or strongly agree with the statement “I have shared confidential information in [an] LLM, such as ChatGPT.” Using generative AI tools for business purposes, or on enterprise devices and accounts, leads to unintentional data leaks that occur without users’ knowledge and can cause extensive damage that is difficult to detect.

“Shadow AI” use erodes the control that organizations and users have over sensitive information. There are severe gaps in governance and AI data security in the broader landscape as the use of new and evolving technology outpaces the development of secure tools and policies for managing AI. “New AI tools and open-source software are being rapidly created and shared every day, and many are being used without the right guardrails and education,” says Dana Simberkoff, Chief Risk, Privacy, and Information Security Officer at AvePoint. “Developing AI literacy is essential due to the growing list of challenges for humans when AI is used.”
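To make the “guardrails” idea concrete, here is a minimal, hypothetical sketch of a pre-send filter that scans a prompt for obviously sensitive patterns before it reaches an external LLM. The patterns and the send_to_llm gateway call are assumptions for illustration; real data loss prevention for shadow AI relies on far broader classification and policy enforcement.

```python
# Illustrative sketch only: a pre-send filter that blocks prompts containing
# obviously sensitive patterns. Patterns and send_to_llm() are hypothetical.
import re

SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "internal marking": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def send_to_llm(prompt: str) -> str:
    # Stand-in for a call to an approved, governed LLM gateway.
    return "LLM response placeholder"


def safe_submit(prompt: str) -> str:
    findings = scan_prompt(prompt)
    if findings:
        # Block (or route for review) instead of leaking data silently.
        return f"Blocked: prompt appears to contain {', '.join(findings)}."
    return send_to_llm(prompt)


if __name__ == "__main__":
    print(safe_submit("Summarize this CONFIDENTIAL roadmap for me"))
```

A filter like this is only one guardrail among many; the education and governance Simberkoff describes matter at least as much as the tooling.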

Leadership in the Crosshairs

Executives are increasingly targeted by attackers, yet are often the least protected against such attacks. Spear phishing, a variant of phishing that uses in-depth information about high-value targets to compose extremely convincing messages, is the source of two-thirds of successful breaches despite making up less than 0.1% of emails.

The report shows that 39% of leaders have been the initial target of phishing attacks, and 35% have been infected by malware. Senior-level breaches amplify organizational risk by enabling attackers to infiltrate and access highly sensitive data and systems. Threat actors targeting lower-ranking members of an organization often need to invest time and resources into lateral movement and privilege escalation in order to gain access to the information and network areas that they desire. Initially targeting an executive or senior leader often eliminates or significantly decreases this need, granting easier access to higher-value targets.

Culture Clash: Punishment vs. Prevention

Many organizations fall back on punitive responses in the case of a cybersecurity incident, presumably with the logic that fear of punishment will make users think twice before taking risky actions that could lead to security breaches, such as clicking on phishing links. According to the report, 77% of IT leaders state that they have terminated or would terminate an employee for making such a mistake. However, this reaction to a security incident tends to backfire as it is based on fear and shame, discouraging the disclosure of these mistakes after they have occurred.

On the other hand, corrective action in the wake of an employee falling for a phishing scam produces a measurable reduction in risk. Of the IT leaders who have taken corrective action, 88% say that the outcome was effective. This shift from shaming and punishing user error to reinforcing shared responsibility for the organization’s security is a more effective way to prevent and respond to cybersecurity incidents.

Neglected Basics, Global Consequences

While cyberthreats are indeed on the rise and growing more sophisticated, many organizations are also putting themselves at risk by neglecting basic principles and policies to protect against longstanding risks. There is an ongoing failure to prioritize multifactor authentication, with only 54% of IT leaders stating that their organizations enforce MFA for all users—down 5% from last year.
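For a sense of how simple that basic hygiene can be to monitor, the sketch below audits a hypothetical user directory for accounts that would slip past an “MFA for all users” policy. The records and field names are invented for illustration; in practice this data would come from an identity provider’s API rather than a hard-coded list.

```python
# Illustrative sketch only: flag accounts without MFA enrollment so gaps in
# an "MFA for all users" policy are visible. User records are hypothetical.
from dataclasses import dataclass


@dataclass
class User:
    username: str
    mfa_enrolled: bool
    is_admin: bool = False


def mfa_gaps(users: list[User]) -> list[User]:
    """Return users who would bypass an 'MFA for all users' policy."""
    return [u for u in users if not u.mfa_enrolled]


if __name__ == "__main__":
    directory = [
        User("alice", mfa_enrolled=True, is_admin=True),
        User("bob", mfa_enrolled=False),                    # regular user, no MFA
        User("carol", mfa_enrolled=False, is_admin=True),   # highest-risk gap
    ]
    # Surface privileged accounts first, since they are the prime targets.
    for user in sorted(mfa_gaps(directory), key=lambda u: not u.is_admin):
        tag = "ADMIN" if user.is_admin else "user"
        print(f"[{tag}] {user.username} has no MFA enrolled")
```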

Organizations in Australia, New Zealand, the United Kingdom, and Ireland demonstrate steep increases in breaches year-over-year, emphasizing the global scope of modern threats. The simplest protections against attacks can often deliver the greatest returns in security in an increasingly interconnected global economy and digital landscape.

From Overconfidence to Awareness

The epidemic of cybersecurity overconfidence is a significant hindrance to achieving actual cyber resilience. Humility and human-centric design are crucial pillars of maintaining security against evolving and rising threats. Leaders are encouraged to view human users not as liabilities, but as the first line of adaptive defense.

Author
PJ Bradley, Contributing Writer, Security Buzz

PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.