OpenAI Bets on AI Security With Promptfoo Acquisition

OpenAI Promptfoo

The AI explosion of the past several years, marked by the booming popularity of generative AI for personal and professional functions, has been largely focused on ensuring performance and quality of these tools. Agentic AI tools, given the power to perform tasks autonomously within enterprise systems, present even more severe security and compliance risks. In response to the ongoing trend of growth in AI usage and increasing awareness of the security risks that AI introduces in enterprise environments, many organizations are now shifting their focus to prioritize trust, governance, and operational safety. OpenAI’s recent announcement of the acquisition of AI security platform Promptfoo is a step toward securing widely used AI infrastructure.

What Promptfoo Brings to OpenAI

Founded in 2024, Promptfoo is an open-source platform designed for AI developers to conduct security tests on their applications. Promptfoo offers automated evaluation frameworks for LLMs and AI agents, red-teaming capabilities designed to simulate adversarial attacks, and testing for risks like prompt injection, jailbreaks, data leaks, and unsafe automation. By helping developers test the security of their applications, the platform enables improved safety and compliance in a sphere rife with risk.

Promptfoo has seen significant growth since its founding, with widespread adoption among developers and enterprise security teams. The team has developed a suite of tools that more than one-fourth of Fortune 500 companies rely on. The platform’s functions play a significant role in ensuring the security of AI applications during development.

Why AI Agents Create a New Security Problem

The initial stages of the AI boom were mainly focused on generative tools like chatbots, but there has been a more recent shift toward reliance on agentic systems that have the power to take autonomous action in the real world. These tools in enterprise environments interact with a wide range of systems and applications, including APIs, SaaS platforms, and workflows. They often have far-reaching access, enabling their agentic activity that can provide productivity and operational benefits in enterprise systems.

However, the use of AI agents with outsized privilege and reach also creates additional risks that many organizations are not equipped to handle. They can be subject to dangers from both malicious and inadvertent activity, including prompt injection attacks, data exfiltration, misaligned agent behavior, and unauthorized automation. Due to the extensive permissions and access that agentic AI tools have, these risks can cause severe damage to critical systems.

Security Becomes a Core Feature of AI Platforms

The Promptfoo acquisition by OpenAI is a major indication of increasing investment in security as a critical part of AI tools and platforms. OpenAI will be integrating Promptfoo’s technology into its Frontier platform for the creation and operation of AI agents. This integration underscores the increasing industry focus on AI security assurance at every stage of the lifecycle.

The addition of Promptfoo’s functionality embeds evaluation and security testing directly into the AI development lifecycle, highlighting the importance of AI security at every stage. Promptfoo’s technology enables continuous monitoring and auditing of AI behavior to ensure the ongoing safety of tools that have the potential to cause significant damage if not secured properly. This acquisition by a major player in the AI landscape demonstrates that security is a prerequisite for enterprise adoption.

The Rise of “AI DevSecOps”

Promptfoo’s functionality is indicative of an ongoing shift, moving security earlier in the AI development pipeline. The growth of AI DevSecOps involves the automated testing of prompts, workflows, and model outputs, as well as continuous evaluation similar to CI/CD pipelines in software development. This is one indication of the AI safety conversation evolving from policy discussion to an engineering discipline.

This is a significant trend in AI security, as it stresses the importance of accounting for security concerns early in the tool lifecycle. Experts stress that creating resilient, secure AI tools still demands robust security considerations. “AI-assisted code scanning can certainly improve developer productivity and help catch certain vulnerabilities earlier in the development process, but it cannot replace the broader visibility, governance, and risk management required to secure modern software ecosystems,” says Julian Totzek-Hallhuber, Manager Solution Architects EMEA South and APAC, Veracode.

A Signal to the AI Industry

This acquisition demonstrates a trend in the AI industry that is likely to continue gaining traction as the integration of Promptfoo into OpenAI moves forward. Security startups like Promptfoo are increasingly becoming strategic targets for acquisition by major industry leaders, signaling a focus on securing development to protect against the dangers endemic to AI usage, especially in enterprise environments.

The AI industry is showing a growing demand for AI governance, evaluation, and red-teaming tools to secure AI tools in the face of rising risk. More and more organizations are recognizing the dangers of unsecured and poorly governed AI tools and agents and taking steps to mitigate risk. Enterprises are now demanding provable safety and compliance before deploying AI, rather than implementing AI tools before security and governance can catch up.

The Bigger Picture

The use of AI agents is shifting, from experimental use in certain areas of organizations to deep integration into operational infrastructure. The determining factors for which platforms dominate the enterprise AI market in the coming years will be trust and security. As AI deployment and risk both continue to evolve, the future of AI may hinge on security engineering just as much as model innovation.

Author
  • Contributing Writer, Security Buzz
    PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.