
The AI arms race has begun. In a new forecast, Gartner predicts that by 2027, AI agents will cut the time it takes to exploit exposed account credentials in half. It’s a stark warning that in the not-so-distant future, cybercriminals will be able to compress exploitation timelines from days or hours to just minutes, forever changing breach detection and response.
What makes AI agents so disruptive is their ability to operate at speed and scale with shocking precision. Unlike the manual tools threat actors have long relied on, AI-powered systems can sift through mountains of leaked credentials, probe for active vulnerabilities, and launch highly targeted attacks – all without human intervention.
It adds up to a scenario where defenders will have an extremely tight window to detect and stop threats before real damage is done. Facing these new threats, the good guys must react quickly to keep pace with, and ideally outmatch, attackers’ capabilities.
Andrew Bolster, Senior R&D Manager at Black Duck, agreed. “Cybersecurity has always been an arms race,” he said. “Attackers attempt to apply the latest technologies to exploit new victims in innovative and interesting ways, while defenders try to stay ahead of these new threats before and as they emerge, often using the same technology. AI is the latest example, and now, both sides are attempting to use it as attackers look to identify, target, and execute threats and defenders try to detect and defend them.”
From Login to Lockout: The Rise of Automated Account Takeovers (ATO)
Account takeover (ATO) attacks have shifted from largely manual efforts of the past to AI-driven, fully automated operations today. What once required hands-on human involvement now relies on algorithms and bots to breach accounts at astonishing speed and scale.
AI is now accelerating every phase of the ATO chain. Deep-learning bots can test stolen credentials across all of an organization’s applications, systems, and platforms in minutes, adapting on the fly to avoid detection. More troubling, according to Nicole Carignan, Senior Vice President, Security and AI Strategy, and Field CISO at Darktrace, AI is also capable of using additional tools to improve the likelihood of success.
“The ability of attackers to use generative AI to produce deepfake videos, imagery, voice cloning, and other media is a rising concern since attackers are using these options to create sophisticated social engineering campaigns,” she said. “AI can now be used to reduce both the skill barrier to entry as well as the time to production for high-quality impersonation attacks.”
For example, in one of the earliest and most widely cited cases of deepfake audio fraud, cybercriminals used AI-generated voice cloning in 2019 to impersonate the CEO of a German energy company, reportedly convincing a senior executive to wire $243,000 to a fraudulent account. The attackers mimicked the CEO’s voice with striking accuracy – including tone, accent, and speech patterns – during a phone call that appeared routine. Believing he was following legitimate instructions, the executive transferred the funds, only to discover the deception later.
At the same time, the overall attack surface is expanding. No longer limited to web logins, ATO threats now target apps, APIs, and voice channels such as call centers and smart assistants. As organizations open more access points, attackers are finding new ways to exploit them, making identity security a far bigger challenge than simply protecting passwords.
Carignan believes this evolving scenario points to the need for defenders to embrace the same mindset. “It is now imperative to turn to AI-augmented tools for detection since humans can’t be the last line of defense,” she said.
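To make that concrete, here is a minimal sketch of one building block such detection relies on: flagging the high-velocity credential testing described above by counting failed logins per source over a sliding window. The event fields, class names, and thresholds are illustrative assumptions, not any vendor’s API.

```python
from collections import defaultdict, deque
from dataclasses import dataclass

# Hypothetical log event; field names are illustrative assumptions.
@dataclass
class LoginEvent:
    source_ip: str
    username: str
    success: bool
    timestamp: float  # seconds since epoch

class CredentialStuffingDetector:
    """Flags sources whose failed-login velocity looks automated."""

    def __init__(self, window_seconds: float = 60.0, max_failures: int = 20):
        self.window_seconds = window_seconds
        self.max_failures = max_failures
        self._failures: dict[str, deque] = defaultdict(deque)

    def observe(self, event: LoginEvent) -> bool:
        """Returns True if the event's source should be flagged."""
        if event.success:
            return False
        window = self._failures[event.source_ip]
        window.append(event.timestamp)
        # Drop failures that have aged out of the sliding window.
        while window and event.timestamp - window[0] > self.window_seconds:
            window.popleft()
        # A human mistypes a password a few times; a bot testing
        # leaked credential pairs fails far faster than that.
        return len(window) > self.max_failures

detector = CredentialStuffingDetector()
suspicious = detector.observe(
    LoginEvent(source_ip="203.0.113.7", username="alice",
               success=False, timestamp=1_700_000_000.0)
)
```

A fixed threshold like this is exactly what adaptive bots learn to stay under, which is why vendors layer statistical and machine-learned baselines on top of simple velocity rules.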
How CISOs Can Prepare for AI-Powered Cyber Risks
As AI-powered threats continue to escalate, traditional defenses such as identity access management (IAM) and multi-factor authentication (MFA) may not be enough. Static credentials and one-time codes are hard-pressed to keep pace with adaptive AI agents capable of mimicking users, bypassing controls, and launching widespread attacks across multiple channels.
For CISOs, this means a strategic shift is required: an approach that moves beyond point-in-time authentication to real-time behavioral monitoring and identity threat detection. “As attackers become more sophisticated, the need for stronger, more dynamic identity verification methods – such as MFA and biometrics – will be vital to defend against these progressively nuanced threats,” said Darren Guccione, CEO and co-founder at Keeper Security.
Security vendors are quickly developing AI-native tools to detect behavioral anomalies, flag unusual access patterns, and respond autonomously across all attack surfaces – everything from web and mobile logins to APIs, voice interfaces, and third-party integrations. These solutions are designed to catch malicious activity even when attackers use valid credentials or impersonate an executive’s voice or likeness.
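Under the hood, most of these tools rest on some form of behavioral baselining. As a rough, vendor-neutral sketch of the idea (assuming scikit-learn and invented per-login features such as hour of day and device familiarity), the example below fits an IsolationForest to a user’s historical sessions and flags outliers:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-login feature vectors: [hour_of_day, known_device (0/1),
# geo_distance_km_from_usual, api_calls_in_first_minute].
historical_logins = np.array([
    [9, 1, 5, 12],
    [10, 1, 2, 8],
    [14, 1, 0, 15],
    [9, 1, 3, 10],
    [17, 1, 7, 9],
])

# Fit a baseline of "normal" behavior for this user or peer group.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(historical_logins)

# A 3 a.m. login from an unknown device, far from usual locations,
# immediately hammering APIs: predict() returns -1 for outliers.
new_session = np.array([[3, 0, 4200, 300]])
if model.predict(new_session)[0] == -1:
    print("Flag session for step-up verification or review")
```

The point is that anomaly scoring works even when the attacker presents valid credentials, because it judges how the session behaves rather than what it knows.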
To stay ahead, security teams must now adopt a layered approach. This includes investing in AI-driven platforms, deploying identity threat detection and response (ITDR) solutions, and securing executive communication channels likely to be targeted by impersonators. Additionally, training employees to spot deepfake fraud, whether through voice calls or suspicious messages, is crucial.
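In practice, those layers often converge on risk-based step-up: each signal contributes to a score that decides whether to allow a session, challenge it, or block it. The sketch below is a simplified illustration with assumed signal names and weights, not any ITDR product’s actual logic.

```python
from typing import NamedTuple

class SessionSignals(NamedTuple):
    # All signal names here are illustrative assumptions.
    anomaly_score: float      # 0.0 (normal) to 1.0 (highly anomalous)
    new_device: bool
    impossible_travel: bool   # geography inconsistent with last session
    sensitive_action: bool    # e.g., wire transfer, credential change

def decide(signals: SessionSignals) -> str:
    """Combine layered signals into allow / step-up / block."""
    risk = signals.anomaly_score
    if signals.new_device:
        risk += 0.2
    if signals.impossible_travel:
        risk += 0.4
    if signals.sensitive_action:
        risk += 0.2
    if risk >= 0.9:
        return "block"    # terminate the session and alert the SOC
    if risk >= 0.5:
        return "step-up"  # require phishing-resistant MFA or biometrics
    return "allow"

print(decide(SessionSignals(0.3, True, True, False)))  # -> "block"
```

Because no single layer decides alone, a deepfaked voice or a stolen password raises the score without automatically granting access.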
AI Threats Are Already Here
In an AI-powered threat landscape, speed, context, and cross-channel visibility are essential for effective defense. AI and AI agents have quickly become force multipliers for cybercriminals, enabling faster, smarter, and more convincing attacks. From real-time credential abuse to deepfake-driven deception, these threats are evolving too fast for traditional defenses to keep up.
Organizations must move just as quickly by investing in behavior-based detection, real-time response tools, and AI-powered platforms built to fight fire with fire. The future of cybersecurity belongs to those who act before threats strike, not after.