DevSecOps Hits AI-Fueled Reality Check


A new global survey on the use of AI, sponsored by Black Duck Software, finds that DevSecOps teams are deploying code daily or even more often. Yet those same professionals report that security onboarding remains stubbornly manual and that security control coverage is alarmingly incomplete.

As these trends indicate, DevSecOps has achieved breakneck velocity. But much of that speed is built on sand.

The survey of 1,000 software and security professionals also found that the primary cause of delays in security onboarding is sprawl—across static and dynamic application security testing systems, as well as Software Composition Analysis, Infrastructure as Code, and other tools. This sprawl creates a roar of duplicative, low-signal alerts: 7 in 10 survey respondents say all that “noise” consumes their time and budget and erodes their trust in the results.

Another signal coming through in all this static is that DevSecOps teams don’t want yet another tool. Instead, they want security embedded natively where their work happens—in IDEs and CI/CD pipelines. That is what lets them ship code quickly and securely.
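As a rough illustration of what “embedded in the pipeline” can mean in practice, a CI step might parse scanner output and fail the build when findings cross a severity threshold. The sketch below is hypothetical: the finding format, field names, and threshold are illustrative, not any particular tool’s schema.

```python
# Illustrative CI gate: block the pipeline when scanner findings meet or
# exceed a severity threshold. Field names and format are hypothetical.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
THRESHOLD = "high"  # policy knob; in a real pipeline this comes from config

def blocking_findings(findings, threshold=THRESHOLD):
    """Return the findings severe enough to block the build."""
    floor = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK.get(f["severity"], 0) >= floor]

# Example scanner output (would normally be parsed from a tool's JSON report).
sample = [
    {"file": "app/db.py", "rule": "sql-injection", "severity": "critical"},
    {"file": "app/ui.py", "rule": "missing-csp", "severity": "low"},
]

blockers = blocking_findings(sample)
for f in blockers:
    print(f"BLOCKING: {f['severity']} {f['rule']} in {f['file']}")
# A real CI step would exit nonzero here to stop the deploy:
# sys.exit(1 if blockers else 0)
```

The point of the sketch is the placement, not the logic: the check runs inside the pipeline developers already use, rather than in a separate security console.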

Commenting on the Black Duck report, Trey Ford, Chief Strategy and Trust Officer at Bugcrowd, said, “The findings on tool sprawl are of interest as it certainly does impact security teams. This comes down to where integration and operational efficiency are placed ahead of best-of-breed technology selections.”

The DevSecOps Trust Gap

At the same time, artificial intelligence (AI) adds jet fuel and turbulence to DevSecOps processes. Most practitioners believe coding assistants simultaneously make code more secure while also introducing novel, harder-to-detect risks.

And while daily and multi-daily software releases are the norm, security onboarding and coverage don’t keep pace. As the Black Duck report found, 46% of companies rely on manual processes to move new code into the security testing queue.

This automation gap means many businesses remain unaware of their vulnerabilities: 62% of organizations test less than 60% of their applications. This can lead to critical consequences as development velocity without vulnerability verification compounds security debt during every sprint.

AI Acts as a Multiplier and a Minefield

Many DevSecOps teams hold contradictory ideas about AI—simultaneously thinking of AI as a threat and an ally:

  • 57% say AI coding assistants introduce new security risks: the scale and speed of vulnerabilities, compliance nightmares, and proprietary code ending up in someone else’s training model.
  • 63% say AI tangibly improves their ability to write more secure code—with faster vulnerability identification, better coding consistency, and quicker fixes.

In addition, organizations have adopted AI so quickly that governance can’t keep up. 11% of respondents use AI coding assistants without official permission or in an unverified and unmonitored way. This comes as no surprise since developers grab whatever tools they can, exposing their companies to unmanaged security and compliance risk.

The use of open-source AI models is even more widespread. Nearly 97% of organizations use these models in the software they build, further escalating governance challenges.

Many survey respondents admit their toolchains rely on manual processes, have low coverage, and produce an incredible amount of noise. Yet when asked how confident they were in their ability to manage the risks introduced by AI coding assistants, 89% said they can handle new and complex security issues introduced by Copilot, Claude Code, ChatGPT, and other AI coding assistants.

In addition, 93% feel confident they can manage the open-source license risks from AI-generated code—a problem many security tools can’t see. This confidence in managing AI risks is high—perhaps too high—given the prevalence of shadow AI and near-universal use of open-source models.

What DevSecOps Teams Really Want

The return on AI isn’t in adding more scanners; it’s in correlating, prioritizing, and automating workflows that developers will actually adopt. DevSecOps teams want to shift from fearing AI to using it as a force multiplier. This means applying AI to issue summaries, fix suggestions, and unit-test hardening—under guardrails.

But the #1 improvement priority, according to the survey, is deeper integration of security into development workflows. This requires four key attributes:

  • Real-time IDE feedback
  • Policy-aware CI gates
  • Deduped cross-tool risk graph
  • Time-to-remediate as a north-star KPI
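Two of those attributes—a deduped cross-tool risk graph and time-to-remediate as a KPI—can be sketched in a few lines. The tool names, finding fields, and dates below are hypothetical; a real implementation would ingest actual SAST/SCA exports.

```python
from datetime import datetime

# Hypothetical findings from two different scanners. Note the first two
# report the same underlying flaw (same file, same CWE).
findings = [
    {"tool": "sast-a", "file": "app/auth.py", "cwe": "CWE-89",
     "opened": "2025-03-01", "fixed": "2025-03-04"},
    {"tool": "sca-b", "file": "app/auth.py", "cwe": "CWE-89",
     "opened": "2025-03-02", "fixed": "2025-03-04"},  # cross-tool duplicate
    {"tool": "sast-a", "file": "app/views.py", "cwe": "CWE-79",
     "opened": "2025-03-01", "fixed": "2025-03-06"},
]

# Dedupe across tools: group by (file, CWE) so one flaw reported by two
# scanners counts once, keeping the earliest "opened" date per group.
unique = {}
for f in findings:
    key = (f["file"], f["cwe"])
    if key not in unique or f["opened"] < unique[key]["opened"]:
        unique[key] = f

def days_between(a, b):
    """Whole days between two ISO dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).days

# Mean time-to-remediate over deduped findings: the north-star KPI.
ttrs = [days_between(f["opened"], f["fixed"]) for f in unique.values()]
mean_ttr = sum(ttrs) / len(ttrs)
print(f"{len(findings)} raw findings -> {len(unique)} unique; "
      f"mean TTR = {mean_ttr:.1f} days")
```

Deduplicating before measuring matters: computing the KPI over raw per-tool alerts would double-count shared flaws and distort the trend the metric is supposed to track.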

For governing the AI pipeline, the minimal viable program should include a usage policy, a model registry, prompt data hygiene, and license and software bill of materials (SBOM) tracking for model and code artifacts.
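A minimal sketch of what such a registry plus policy check might look like follows. Every field name, model name, and the allow-list itself are illustrative assumptions, not a real standard or product schema.

```python
# Illustrative model-registry records and a simple policy gate over them.
ALLOWED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}  # example policy

model_registry = [
    {"name": "example-code-model", "version": "1.2.0",
     "license": "apache-2.0", "approved": True,
     "sbom": ["numpy==1.26.0", "tokenizers==0.15.0"]},
    {"name": "unvetted-model", "version": "0.1.0",
     "license": "unknown", "approved": False, "sbom": []},
]

def check(entry):
    """Return a list of policy violations for one registry entry."""
    problems = []
    if not entry["approved"]:
        problems.append("not approved for use")
    if entry["license"].lower() not in ALLOWED_LICENSES:
        problems.append(f"license '{entry['license']}' not on allow-list")
    if not entry["sbom"]:
        problems.append("missing SBOM for model artifacts")
    return problems

for m in model_registry:
    issues = check(m)
    status = "OK" if not issues else "; ".join(issues)
    print(f"{m['name']}@{m['version']}: {status}")
```

Even this toy version makes shadow AI visible: a model that was never registered simply has no entry to pass the gate, which is the failure mode the survey’s 11% figure describes.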

The Way Forward

“The Black Duck findings highlight a pattern we’re seeing across the industry,” said Randolph Barr, Chief Information Security Officer at Cequence Security. “Organizations have achieved impressive development velocity, but their security practices and automation maturity haven’t caught up. Many are still struggling to embed security seamlessly into the development process, and AI is now magnifying that gap.”

To overcome the challenges identified by Barr, and for the use of AI in DevSecOps workflows to succeed, three shifts need to occur:

  1. Consolidate security signals
  2. Embed models into developer workflows
  3. Treat AI as both governed input and a secure-coding accelerator

When these shifts occur, DevSecOps teams will experience fewer false-positive security alerts. They will also achieve faster releases with less rework and realize measurable trust gains among team members across the software development lifecycle.

“While the goal of DevSecOps has always been about balancing security and productivity, this report highlights that shipping fast without mature security is still the default for many,” said James Maude, Field CTO at BeyondTrust. “This is a common challenge for many organizations; as the saying goes: Good, Fast, Cheap…pick two.”

Author
  • Contributing Writer, Security Buzz
    After majoring in journalism at Northeastern University and working for The Boston Globe, Jeff Pike has collaborated with technical experts in the IT industry for more than 30 years. His technology expertise ranges from cybersecurity to networking, the cloud, and user productivity. Major industry players Jeff has written for include Microsoft, Cisco, Dell, AWS, and Google.