The growth of AI in recent years has led to its widespread use not only in legitimate applications but in cybercriminal operations as well. Threat actors have increasingly turned to AI-enhanced tools for a variety of purposes, but a recently spotted campaign has taken AI usage to the next level, marking a watershed moment: AI has moved from hacking assistant to hacking agent.
AI safety and research company Anthropic warned in early October that AI’s cyber capabilities had doubled in only six months, pointing toward an inflection point in AI security. The finding indicates that AI models have become genuinely useful to both defenders and attackers. This dual-use technology has created a landscape in which the line between human-directed and AI-driven operations is collapsing.
Anatomy of an AI-Orchestrated Attack
The lifecycle of the autonomous campaign began with human attackers selecting targets and developing a framework to carry out the attacks. The operation leveraged Claude Code, which the attackers first had to jailbreak to circumvent its built-in guardrails against malicious activity. They achieved this by breaking their objective into smaller, seemingly innocuous tasks that the tool would not flag as harmful.
The attack proceeded through reconnaissance of target systems, exploit-code creation, credential harvesting, and exfiltration of sensitive data. Its autonomous nature let the campaign operate with extreme speed and scale, making thousands of requests, often several per second, while continuously learning and documenting its own progress.
The New Era of “Agentic” AI
Agentic AI systems, which are capable of chaining reasoning, acting autonomously, and using external tools, are on the rise. With the ability to autonomously carry out operations across systems, agentic AI architecture enables attackers to launch long-running campaigns that sustain themselves with minimal supervision. When compared with prior “human-in-the-loop” attacks, this campaign largely removed the loop, empowering large-scale attacks without frequent human involvement.
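The agent-loop pattern described above can be sketched in a few lines. Everything below is an illustrative stub: the planner and tool results are hard-coded, and a real agentic system would call an LLM API to plan each step and dispatch real tools. The structure, though (decompose a goal, act, record observations, repeat), is the core of agentic architecture.

```python
# Illustrative sketch of an agentic loop. The planner and tools are
# hard-coded stubs; a real agent would call an LLM to plan each step
# and dispatch real tools. All names and results here are invented.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)  # self-documentation log

    def plan(self) -> list[str]:
        # Stub planner: decompose the goal into smaller tasks.
        return ["inventory_assets", "scan_for_weak_configs", "summarize_findings"]

    def act(self, step: str) -> str:
        # Stub tool dispatch: map each task to a simulated observation.
        tools = {
            "inventory_assets": "3 hosts found",
            "scan_for_weak_configs": "host-2 allows password auth",
            "summarize_findings": "recommend disabling password auth on host-2",
        }
        observation = tools[step]
        self.history.append((step, observation))  # record for the next cycle
        return observation

agent = Agent(goal="audit our own test network")
for step in agent.plan():
    agent.act(step)

print(agent.history[-1][1])  # -> "recommend disabling password auth on host-2"
```

Because each cycle feeds its observations back into the log, the loop can run for long stretches with no human in it, which is exactly what makes the architecture powerful for attackers and defenders alike.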
This shift in tactics is one defenders must account for, both now and in the future. “This highlights how agentic AI significantly lowers the bar for sophisticated, targeted attacks, effectively giving a single entity the capabilities of a full hacking team,” says Noelle Murata, Sr. Security Engineer at Xcape, Inc., adding, “The time for predictive AI defense is over; the future of cybersecurity is a real-time, autonomous AI war.”
Implications for Cyber Defense and Policy
This attack marks the start of an era in which advanced cyber operations demand little time or resources from threat actors. The barrier to entry has collapsed, creating a stark asymmetry between the investment cybercriminals must make and the damage they can cause: one compromised API key can unleash an autonomous actor operating faster than any human team.
Anthropic’s response to the campaign involves a number of adjustments to previous security measures to better account for the era of autonomous AI tools. This includes the development of improved classifiers and expanded detection capabilities, as well as a commitment to transparency through public reporting. The disclosure of this campaign also has implications for policy and industry trends, possibly influencing more guardrails for AI APIs, shifting standards for detecting misuse, and increasing the sharing of threat intelligence.
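Anthropic has not published the internals of its classifiers, but the underlying idea, scoring each request against misuse indicators and gating on a threshold, can be illustrated with a toy heuristic. Every indicator term and weight below is invented for illustration; production safety classifiers are trained ML models, not keyword lists.

```python
# Toy misuse-risk scorer. Real safety classifiers are ML models; this
# sketch only illustrates the gating idea. All indicator terms and
# weights are invented for illustration.

INDICATORS = {
    "credential dump": 0.6,
    "bypass authentication": 0.5,
    "exfiltrate": 0.7,
    "port scan": 0.3,
}

def misuse_score(prompt: str) -> float:
    """Sum the weights of any indicator phrases present, capped at 1.0."""
    text = prompt.lower()
    return min(1.0, sum(w for term, w in INDICATORS.items() if term in text))

def should_flag(prompt: str, threshold: float = 0.5) -> bool:
    return misuse_score(prompt) >= threshold

print(should_flag("write code to exfiltrate the credential dump"))  # True
print(should_flag("summarize this article"))                        # False
```

Notably, the task decomposition used in this campaign defeats exactly this kind of per-request check, since each innocuous subtask scores low on its own. That is why expanded detection has to look at session-level behavior, not just individual prompts.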
Turning the Same Tools to Defense
Anthropic’s analysis of the campaign emphasizes a paradoxical insight: the same capabilities that made Claude dangerous also made it essential to stopping the attack. This is a defining aspect of the AI era: AI-empowered cyberthreats require AI-empowered cybersecurity tools to detect, block, and disrupt them.
Implemented and governed securely, AI can supercharge SOC automation, vulnerability management, and real-time threat analysis by taking on complex, time-consuming operations. Many organizations are moving toward autonomous resilience, deploying AI that can counter AI attackers at machine speed.
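One concrete piece of machine-speed defense is rate-based anomaly detection: an autonomous agent issuing requests far faster than any human stands out in a sliding-window rate check. The window size and threshold below are illustrative, not drawn from any specific product.

```python
# Sketch of machine-speed detection: flag clients whose request rate in
# a sliding window exceeds a threshold no human operator would reach.
# Window size and threshold are illustrative values.

from collections import deque

class RateMonitor:
    def __init__(self, window_seconds: float = 10.0, max_rate: float = 5.0):
        self.window = window_seconds
        self.max_rate = max_rate        # requests/sec considered suspicious
        self.events: deque = deque()    # timestamps inside the window

    def record(self, timestamp: float) -> bool:
        """Record one request; return True if the client should be flagged."""
        self.events.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        rate = len(self.events) / self.window
        return rate > self.max_rate

monitor = RateMonitor()
# A human-paced client, one request per second: never flagged.
print(any(monitor.record(float(t)) for t in range(10)))   # False
# An autonomous agent bursting 200 requests in one second: flagged.
print(any(monitor.record(10 + i / 200) for i in range(200)))  # True
```

A check like this is cheap enough to run inline on every API call, which is what allows a defender to respond at the same speed the attacking agent operates.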
The Arms Race Accelerates
AI’s advance, and its adoption by both cybercriminals and defenders, shows no sign of slowing; AI-augmented conflict is inevitable. Organizations must embrace the ongoing shift in cybersecurity strategy from building walls to building adaptive, intelligent ecosystems. If AI can now plan and execute espionage, the next frontier isn’t detection; it’s alignment.