
The embedded software industry is in the middle of a major reset. According to Black Duck’s State of Embedded Software Quality and Safety 2025 report, AI tools are changing how code is written, tested, and shipped, while demands for supply chain transparency are rewriting the rules for how that code reaches the market. The industry is moving quickly, but the safeguards to keep that progress in check aren’t keeping pace. That gap leaves organizations exposed not just to defects and vulnerabilities, but to the deeper risk of losing customer trust.
AI Everywhere, But Without Guardrails
AI has become inseparable from embedded development. Nearly nine in 10 companies now use AI-powered coding assistants, and 96% have embedded open-source AI models directly into their products. That speed and convenience come at a cost, though. More than one in five organizations admit they aren’t confident they can stop AI-generated code from introducing new vulnerabilities.
“AI systems, and especially agentic tools, are fragile to manipulation because their behaviors can be drastically altered by malicious or poorly formed prompts,” said Diana Kelley, Chief Information Security Officer at Noma Security. “A single malformed prompt can reasonably result in wiped systems.”
The risks don’t stop there. Eighteen percent of companies know their developers are using AI tools against policy, and this “shadow AI” operates outside formal oversight. This is more than a compliance issue. AI models are trained on massive swaths of open-source code, and without proper controls, they can spit out snippets tied to restrictive licenses. That creates the possibility of “license laundering,” where borrowed code sneaks into proprietary products and exposes companies to unexpected IP liability.
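As a rough illustration of the kind of control that can catch license laundering early, the sketch below scans a source tree for machine-readable SPDX license tags and flags any that appear on an organization's restricted list. This is an assumption-laden sketch, not any vendor's tooling: the restricted-license set, the `.c` file glob, and the `flag_restrictive` helper are all hypothetical, and a tag scan only catches snippets that carry their license header intact.

```python
import re
from pathlib import Path

# Hypothetical set of licenses this organization treats as restrictive.
RESTRICTIVE = {"GPL-3.0-only", "GPL-3.0-or-later", "AGPL-3.0-only", "SSPL-1.0"}

# SPDX convention: source files may carry a machine-readable license tag.
SPDX_TAG = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def flag_restrictive(root: str) -> list[tuple[str, str]]:
    """Return (path, license) pairs for files tagged with a restrictive license."""
    hits = []
    for path in sorted(Path(root).rglob("*.c")):
        for line in path.read_text(errors="ignore").splitlines():
            m = SPDX_TAG.search(line)
            if m and m.group(1) in RESTRICTIVE:
                hits.append((str(path), m.group(1)))
                break  # one finding per file is enough to flag it
    return hits
```

In practice a tag scan like this is only a first pass; AI-generated snippets usually arrive with the license header stripped, so real auditing also needs fingerprint matching against known open-source corpora.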
Transparency as a Competitive Imperative
While AI is reshaping how code is written, supply chain demands are reshaping how it’s delivered. Seventy-one percent of organizations now say they must produce a software bill of materials (SBOM). And it isn’t regulators leading the charge. Customers and partners are the loudest voices asking for SBOMs, outpacing industry rules as the primary driver. If you can’t show what’s inside your code, you’re at a disadvantage in winning business.
“SBOMs bring visibility into which components are being used in a project,” said Mayuresh Dani, Security Research Manager at Qualys Threat Research Unit. “This can help in a post-compromise scenario where triaging for affected systems is necessary.”
But generating an SBOM once at release isn’t enough. As products ship and vulnerabilities emerge, those inventories need to be kept up to date. Continuous monitoring after deployment is quickly becoming the standard, turning SBOMs into living documents rather than static compliance artifacts.
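What a "living" SBOM looks like in practice can be sketched in a few lines: take the component inventory produced at release and recheck it, on a schedule, against newly published advisories. The sketch below assumes a CycloneDX-style JSON component list; the advisory table, component versions, and the `recheck_sbom` helper are illustrative stand-ins for a real vulnerability feed such as a CVE database.

```python
import json

# Hypothetical advisory feed: (component, version) pairs with known issues.
# A real implementation would query a live vulnerability database instead.
ADVISORIES = {
    ("zlib", "1.2.11"): "CVE-2018-25032",
    ("openssl", "1.1.1k"): "CVE-2022-0778",
}

def recheck_sbom(sbom_json: str) -> list[dict]:
    """Match a CycloneDX-style component list against current advisories.

    Run periodically after release, this is what turns an SBOM into a
    living document rather than a one-time compliance artifact.
    """
    sbom = json.loads(sbom_json)
    findings = []
    for comp in sbom.get("components", []):
        key = (comp.get("name"), comp.get("version"))
        if key in ADVISORIES:
            findings.append({
                "component": comp["name"],
                "version": comp["version"],
                "advisory": ADVISORIES[key],
            })
    return findings
```

The point of the exercise is the schedule, not the matching logic: an inventory that was clean at release can produce findings a month later without a single line of product code changing.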
Developers Under Pressure
The job of the embedded developer is also changing fast. According to the report, four out of five companies now use memory-safe languages like Rust, Go, or Python. Python in particular is gaining ground on C++, especially in projects tied to AI and data processing. The shift signals a broader rethink of what “safe” and “modern” code looks like in embedded contexts.
That doesn’t make the day-to-day any easier. Developers point to system complexity (18.7%) and tight release timelines (18.1%) as their biggest obstacles. Managers may think things are going smoothly—86% of CTOs and VPs rate projects as successful—but only 56% of engineers agree. For them, success often comes with compromises, technical debt, and late-night patching. The gap between the view from the corner office and the view from the keyboard is widening, and it’s a pressure point the industry can’t afford to ignore.
Why “Shift Left” Isn’t Enough
For years, security leaders have pushed the idea of “shifting left,” meaning moving testing and security checks earlier in the development cycle so problems are caught before release. But the data shows that’s no longer enough. Risks enter the pipeline at every stage: AI-generated code at the start, rushed testing in the middle, and newly discovered vulnerabilities long after products ship.
That reality demands a broader mindset. Governance and monitoring can’t just sit at the front end of the process. They need to extend across the entire life cycle—what some now call a “shift everywhere” approach. From the developer’s desktop to post-deployment patching, embedded software demands constant visibility and control.
Recommendations for Industry Stakeholders
Different roles in the industry face different responsibilities, but the message is consistent: act now.
For technical leaders, the advice is blunt—treat AI like a smart but unreliable intern. It can draft code, but every line needs review and validation before it’s trusted. That means scaling up testing, integrating tools into the IDE and CI/CD pipeline, and never assuming AI output is production-ready.
Managers can’t sit back either. With nearly one in five developers using shadow AI tools against policy, organizations need clear rules. Formalize governance, define what’s allowed, and track usage. Just as important, invest in training. Teams need skills in memory-safe languages and secure AI use if they’re going to keep pace.
For security and compliance teams, the to-do list is even longer. They must audit existing tools for their ability to handle AI-generated code and embedded open-source models. Expand threat models to account for shadow AI, license laundering, and prompt-driven manipulation. And don’t treat SBOMs as paperwork; turn them into business assets. A living SBOM that customers can trust can be a differentiator in tight markets.
“Effective AI governance requires deep cross-functional collaboration,” said Nicole Carignan, Senior Vice President, Security & AI Strategy, and Field CISO at Darktrace. “Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation.”
Outlook: Winners and Laggards
The next year will draw a sharper line between leaders and laggards. By 2026, expect a surge of AI governance tools as vendors and enterprises scramble to close today’s gaps. At the same time, SBOMs will shift from a best practice to a non-negotiable line item in B2B contracts. Buyers will want proof of what’s inside the code before they sign.
The companies that move early—building real AI oversight into their workflows and treating transparency as a selling point—will have the advantage. Those that delay will find themselves fighting uphill against eroded trust, legal exposure, and a market that has already moved on.