GitLab’s AI Vulnerability Highlights the Dark Side of Prompt Injection

GitLab prompt injection

GitLab recently released new versions (18.5.2, 18.4.4, 18.3.6) of GitLab Community Edition (CE) and Enterprise Edition (EE) as an emergency patch for several new vulnerabilities. One of these vulnerabilities can enable attacks taking advantage of GitLab’s AI capabilities, highlighting the risks that accompany the increasing adoption of AI copilots and assistants in software pipelines. Vulnerabilities related to AI tools are fundamentally different from traditional exploits, as they can allow attackers to leverage generative and agentic functions to carry out sophisticated attacks beyond the scope of traditional technology.

Anatomy of the Vulnerability

Attackers use prompt injection techniques to embed hidden malicious instructions in ostensibly innocuous requests, causing target systems to behave abnormally. The GitLab Duo flaw noted in the patch, documented as CVE-2025-6945, is a prompt injection vulnerability that could allow authenticated users to inject hidden prompts into merge request comments. This can deceive the AI into enabling the exposure of sensitive information from confidential issues.

The GitLab Duo vulnerability affected a variety of GitLab EE versions from 17.9 onward, the versions in which the affected code was incorporated. The exploit could allow attackers to take advantage of Duo’s AI in order to leak confidential information and secrets without using explicit malware or requiring direct access to sensitive resources. This type of vulnerability can have a major impact on data exposure, allowing attackers to access confidential information via simple exploitation of AI functionality and logic.

The Broader Implication for AI-Driven DevSecOps

This vulnerability emphasizes some of the most pressing issues in AI security and operations. Integrating LLMs into development workflows expands the attack surface by enabling a wider range of malicious activity, including behaviors and actions that developers and users cannot foresee or protect against. It is difficult to detect and mitigate “semantic” attacks against AI systems that are achieved by tweaking prompts and taking advantage of the adaptive and responsive nature of AI.

Vulnerabilities like this highlight the need for input validation, sandboxing, and contextual segmentation, as well as other AI-targeted measures. “Threat modelling must explicitly include LLM-specific risks like prompt injection, data exfiltration via generated responses, and AI-driven authorization bypass, alongside more familiar issues such as XSS, incorrect authorization, and information disclosure that were also addressed in this GitLab patch set,” says Christopher Jess, Senior R&D Manager at Black Duck, a Burlington, Massachusetts-based provider of application security solutions.

Expert Commentary

Security experts note that the GitLab Duo vulnerability is representative of the significance of AI-related flaws. “From an application security perspective, this incident highlights that AI-based attack vectors are a major real-world concern,” according to Jess. “Organizations relying on similar AI assistants should treat all AI input channels - comments, descriptions, commit messages, even markdown links - as untrusted and subject to abuse.”

The GitLab flaw is not an isolated incident, but one example in a trend of prompt-injection attacks across many platforms, with multiple recent incidents involving ChatGPT exposing secrets like Windows product keys and sensitive user data. These flaws underscore the vital importance of vigilance regarding open-source code and tracking of software dependencies.

Building AI Security into the Pipeline

With AI usage and risks continuing to rise, it is more important than ever to ensure AI security throughout the lifecycle. Certain best practices emerging in the industry include red teaming LLMs, auditing model inputs and outputs, and limiting token context. Organizations can prepare for regulatory frameworks around AI safety by implementing robust AI security policies and tools now, including measures for governance and accountability. It is crucial for organizations to secure AI instances before scaling.

Author
  • Contributing Writer, Security Buzz
    PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.