New Report Warns of AppSec Fatigue and AI Overconfidence Threatening Open Source Software Security


The 2024 State of Open Source Security report reveals a troubling new trend: “AppSec fatigue,” where open source development teams are increasingly overwhelmed by the high volume of security vulnerabilities they must address to develop secure, cloud-native applications.

According to Ori Bendet, Vice President of Product at Checkmarx, AppSec fatigue is real. “Teams are exhausted by the number of security findings and ever-changing software architecture that creates new risks for applications,” he said. “Most developers are not measured on security, but on velocity and quality. This creates the reality where teams may inadvertently push vulnerable code into production simply because they have to.”

The report attributes growing AppSec fatigue to two factors: declining investment in security practices during application development and misplaced confidence in AI-generated code. Together, these trends raise questions about the industry’s ability to balance proven security strategies with newer innovations such as AI, leaving future applications more exposed to emerging threats and growing risks.

The State of AppSec Fatigue Today

Why are development teams burned out when it comes to application security? One reason is that investment in security training and tooling is declining across the industry, with proactive security measures dropping significantly year over year.

Today, 52% of new applications fail to meet service level agreements (SLAs) for high-severity vulnerabilities. Additionally, only 35.4% of organizations reported investing in security training in 2024, down from 53.2% in 2023.

The implication is clear: growing AppSec fatigue is limiting organizations’ abilities to identify and defend against evolving threats.

Immature Open Source Supply Chains

Open source software (OSS) is an important cornerstone of modern application development, but its popularity and widespread adoption have exposed concerning gaps in OSS supply chain security practices. As with a physical supply chain, a weakness at any stage of the OSS supply chain can introduce security vulnerabilities into the application.

Looking closer, one critical issue is the underutilization of tools designed to secure the software supply chain. The report found that while 62% of organizations monitor their software bills of materials (SBOMs), far fewer adopt advanced safeguards such as automated vulnerability scans or artifact signing. This leaves software builds and deployment environments exposed to compromised dependencies in the supply chain.

Worse, the report also found a 15% drop in security tooling, even as supply chain attacks grow. High-profile software supply chain cyberattacks – such as the 2020 SolarWinds campaign – demonstrate how these vulnerabilities can go undetected and ultimately damage thousands of organizations, if not more.

Addressing this risk requires the integration of early-stage tooling into all parts of OSS supply chains. Automating SBOM inspections, mandating signed artifacts, and ensuring dependency health are all critical steps to strengthen OSS supply chains and detect vulnerabilities before deployment.
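
By way of illustration, here is a minimal sketch of what an automated SBOM inspection step might look like, assuming a CycloneDX-style JSON SBOM and the public OSV.dev query API; the file path and ecosystem value are placeholders for whatever a given build pipeline actually produces.

```python
"""Minimal sketch of an automated SBOM inspection step (assumptions noted above)."""
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"
ECOSYSTEM = "PyPI"          # adjust to the package ecosystem listed in your SBOM
SBOM_PATH = "sbom.json"     # hypothetical path produced by your SBOM generator


def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Query OSV for advisories affecting a single package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ECOSYSTEM},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]


def main() -> int:
    with open(SBOM_PATH, encoding="utf-8") as handle:
        sbom = json.load(handle)

    findings = 0
    for component in sbom.get("components", []):
        ids = known_vulnerabilities(component["name"], component["version"])
        if ids:
            findings += 1
            print(f"{component['name']}=={component['version']}: {', '.join(ids)}")

    # A non-zero exit code lets a CI job fail the build before deployment.
    return 1 if findings else 0


if __name__ == "__main__":
    raise SystemExit(main())
```

Run as part of the build, a check like this turns SBOM monitoring from a passive inventory into a gate that blocks known-vulnerable dependencies from reaching deployment.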

AI-Generated Code: A Double-Edged Sword

AI-powered coding tools have already transformed software development processes to the point where many developers now trust AI to help them improve code quality and security.

Yet, according to Danny Allan, Chief Technology Officer of Snyk, this view demonstrates a misplaced confidence in AI-generated code. “The report reveals a stark contradiction – 56% of respondents are concerned about vulnerabilities introduced by AI, yet nearly 78% believe AI has improved code security,” he said. “This blind trust in AI underscores a critical gap in understanding its limitations.”

AI-generated code has been found to introduce subtle yet potentially serious vulnerabilities as part of the development process, especially if the underlying AI models were trained on flawed or outdated data.
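
As a hypothetical illustration (the function names and schema below are invented, and sqlite3 simply stands in for whatever database layer a project uses), the following sketch shows how a plausible-looking generated helper can hide an injection flaw that a reviewed version avoids:

```python
import sqlite3

# A query helper in the style an assistant might suggest: it works in normal
# testing, but it interpolates user input directly into the SQL string.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    # Vulnerable: a username such as "' OR '1'='1" rewrites the WHERE clause.
    return conn.execute(query).fetchone()


# The reviewed version binds the value as a parameter, so the input can never
# change the structure of the query.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```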

Bridging this gap requires organizations to treat AI as an assistant to – not a substitute for – secure coding practices. Developers should review AI-generated code for vulnerabilities, test it rigorously, and integrate it into robust security workflows. In addition, IT teams should closely monitor the use of AI tools to make sure they are updated with reliable training data and used in conjunction with established security standards.
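
Continuing the hypothetical example above, one concrete form of that review is a small regression test that would fail if the interpolated variant were ever merged; the user_lookup module name is assumed purely for illustration, and the test runs under a standard pytest-style runner.

```python
import sqlite3

from user_lookup import find_user_safe  # hypothetical module holding the helper above


def test_lookup_rejects_injection():
    # In-memory database with a single known row.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    # A crafted username must not match any row; with string interpolation
    # instead of parameter binding, this assertion fails.
    assert find_user_safe(conn, "' OR '1'='1") is None
```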

Progress in the Open Source Community

The situation may not be as bleak as it seems. The open source community has emerged as a bright spot in addressing security vulnerabilities and often outpaces commercial software solutions in its ability to fix them. Collaborative efforts among developers and security teams have significantly reduced resolution times for critical issues across a wide landscape of projects and programming languages.

By openly sharing knowledge, tools, and fixes, the OSS community sets a powerful example of how collaboration can address security concerns to enhance resiliency and reduce risk. These practices offer valuable lessons for the entire software industry, highlighting the importance of transparency and collective responsibility.

The Path Forward: Striking a Balance

To address evolving challenges in application security, organizations need to adopt strategies that balance comprehensive protection with operational efficiency, including:

  • Reassess security priorities to address AppSec fatigue by focusing on the most critical threats and streamlining security processes.
  • Invest in foundational and advanced supply chain security measures, including automated monitoring, signed artifacts, and dependency management to strengthen the software lifecycle.
  • Treat AI-generated code with rigorous scrutiny to ensure that vulnerabilities introduced by AI tools are caught early through oversight, validation, and consistent review.
  • Focus vulnerability management on meaningful risks with realistic SLAs so that resources are allocated effectively and the most significant threats are prioritized (see the triage sketch after this list).
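
As one illustration of the last point, the sketch below (with invented package names and SLA windows, not figures from the report) orders a backlog of findings by how close each one is to breaching its severity-based SLA:

```python
"""Minimal sketch of SLA-driven triage; severities and windows are illustrative."""
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative remediation windows per severity (days until the SLA is breached).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}


@dataclass
class Finding:
    package: str
    severity: str      # "critical" | "high" | "medium" | "low"
    discovered: date


def days_until_breach(finding: Finding, today: date) -> int:
    """Days remaining before the finding breaches its SLA (negative if already breached)."""
    deadline = finding.discovered + timedelta(days=SLA_DAYS[finding.severity])
    return (deadline - today).days


def triage(findings: list[Finding], today: date) -> list[Finding]:
    """Order findings so breached or nearly breached SLAs are worked first."""
    return sorted(findings, key=lambda f: days_until_breach(f, today))


if __name__ == "__main__":
    today = date(2024, 3, 20)
    backlog = [
        Finding("libexample", "medium", date(2024, 1, 5)),
        Finding("parser-lib", "critical", date(2024, 3, 1)),
        Finding("old-transitive-dep", "high", date(2024, 2, 10)),
    ]
    for f in triage(backlog, today):
        print(f"{f.package}: {f.severity}, {days_until_breach(f, today)} days to SLA")
```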

By following these best practices, organizations can develop more proactive security strategies and empower their teams to overcome AppSec fatigue and respond more effectively to emerging security threats.

Author

Jason Rasmuson, Contributing Writer, is a Massachusetts-based writer with more than 25 years of experience writing for the technology and cybersecurity industries. He is passionate about writing about the interaction between business…