Eight Minutes to Admin: AI Just Changed the Cloud Threat Model


The Sysdig Threat Research Team (TRT) discovered a threat operation against an Amazon Web Services (AWS) environment on November 28th, 2025. The attacker progressed from initial access to administrative privileges in just eight minutes, compressing what once took days into minutes. In a threat landscape where privilege escalation is highly prized, “time to admin” has become the new benchmark for cloud security failure.

Initial Access Is Still Boring—and Still Effective

This operation demonstrates that, even as tools and techniques evolve, attackers still depend on reliable ways to gain initial access. Threat actors continually innovate to increase success rates and payouts, but they also keep leaning on the tried-and-true methods that work for them.

Harvesting credentials from public Simple Storage Service (S3) buckets is an evergreen attack vector, and threat actors also routinely mine misconfigured public code repositories for AWS keys. Exposed credentials remain the weakest link in cloud environments: with a valid key, an attacker can reach sensitive systems, compromise accounts, and escalate privileges without exploiting a single software vulnerability.
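Exposed keys follow well-known formats, which makes basic scanning of repositories and buckets practical. As a minimal illustration (not the tooling used in this investigation), AWS long-term access key IDs match the pattern `AKIA` followed by 16 uppercase alphanumeric characters:

```python
import re

# AWS long-term access key IDs begin with "AKIA" followed by 16
# uppercase letters or digits (e.g., the documented example key below).
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate AWS access key IDs found in a blob of text."""
    return ACCESS_KEY_RE.findall(text)

if __name__ == "__main__":
    sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # committed by mistake'
    print(find_exposed_keys(sample))
```

A pattern match alone is only a candidate; real secret scanners pair it with entropy checks and key validation to cut false positives.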

AI as the Acceleration Layer

The widespread adoption of AI gives attackers even more opportunities to speed up and refine their operations. In this incident, the Sysdig TRT found evidence of LLM usage for reconnaissance, scripting, and error handling. The speed of the script writing, the style of the comments and comprehensive exception handling, and patterns consistent with AI hallucinations all point toward AI use.

AI-generated malicious code shows that attacker workflows are now highly automated, enabling large-scale attacks with little human involvement. The defining advantage in this operation was speed rather than stealth: administrative access in only eight minutes. That speed makes it easier for bad actors to outpace security measures and launch more attacks with less investment of time and resources.

Lambda Injection and Lateral Movement at Machine Speed

The threat actors abused serverless functions, injecting code into Lambda functions to escalate to administrative privileges. They moved across 19 AWS principals without the pauses that mark entirely human-driven intrusions. Identity sprawl hinders visibility and governance across complex, multi-cloud environments, making the space an attacker’s playground.

These systems are often misconfigured and under-monitored, leaving them open for attackers to carry out malicious activity with a low chance of being detected or blocked. “When service accounts, Lambda execution roles, or AI-related identities hold broad privileges, they become high-value targets,” says Shane Barney, Chief Information Security Officer at Keeper Security. “Once compromised, they enable attackers to escalate, move laterally, and persist without tripping traditional alerts.”
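One practical control against Lambda code injection is alerting on unexpected `UpdateFunctionCode` and `UpdateFunctionConfiguration` CloudTrail events. The sketch below is illustrative only (the event shape is simplified and the allow-list of deployment roles is an assumption, not detail from the incident):

```python
# CloudTrail event names that indicate a Lambda function's code or
# configuration was changed -- legitimate for CI/CD, suspicious otherwise.
SUSPICIOUS_EVENTS = {"UpdateFunctionCode", "UpdateFunctionConfiguration"}

def flag_lambda_tampering(events: list[dict], approved_roles: list[str]) -> list[dict]:
    """Return events where Lambda code/config changed via an unapproved identity."""
    flagged = []
    for event in events:
        if event.get("eventName") not in SUSPICIOUS_EVENTS:
            continue
        arn = event.get("userIdentity", {}).get("arn", "")
        # Keep the event unless the caller's ARN matches an approved role name.
        if not any(role in arn for role in approved_roles):
            flagged.append(event)
    return flagged
```

In practice this kind of filter would run over CloudTrail logs delivered to a SIEM; the point is that Lambda mutation events are rare enough in most accounts to be worth alerting on by default.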

From Intrusion to Monetization: LLMjacking and GPU Abuse

After finding traces of AI usage in the targeted account and verifying that model invocation logging was disabled, the threat actor exploited Amazon Bedrock for unauthorized use of LLMs, a technique known as LLMjacking. They then invoked several AI models, sometimes leveraging cross-Region inference to improve model performance.
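Because the attacker checked for disabled model invocation logging before abusing Bedrock, verifying that logging is enabled is a cheap defensive control. The helper below sketches a check against the response shape of Bedrock's `GetModelInvocationLoggingConfiguration` API (the API is real; the helper and the exact response handling are illustrative assumptions):

```python
def invocation_logging_enabled(response: dict) -> bool:
    """Check a Bedrock GetModelInvocationLoggingConfiguration response.

    When logging has never been configured, the 'loggingConfig' key is
    absent; when configured, it names CloudWatch and/or S3 destinations.
    """
    config = response.get("loggingConfig") or {}
    return bool(config.get("cloudWatchConfig") or config.get("s3Config"))

# Illustrative use with boto3 (requires AWS credentials; not run here):
# import boto3
# bedrock = boto3.client("bedrock")
# resp = bedrock.get_model_invocation_logging_configuration()
# if not invocation_logging_enabled(resp):
#     print("Model invocation logging is DISABLED -- enable it.")
```

With invocation logging on, LLMjacking leaves a paper trail of prompts and model IDs that would otherwise be invisible.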

The attacker then pivoted to launching GPU instances, making five failed attempts to start a P5 instance before falling back to a lighter instance type. They used these GPU instances for model training and inference in support of the operation. Cloud AI services are becoming high-value targets because compromising them grants attackers sprawling access and massive volumes of data.
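A burst of failed P5 launches followed by a fallback to a lighter type is exactly the pattern a simple watch on `RunInstances` requests for GPU families can surface. A minimal sketch, assuming a simplified CloudTrail event shape and a hand-picked list of GPU-backed EC2 families (neither is from the incident report):

```python
# GPU-backed EC2 instance families commonly abused for unauthorized
# training and inference workloads.
GPU_FAMILIES = ("p5", "p4", "p3", "g6", "g5", "g4")

def is_gpu_launch(event: dict) -> bool:
    """True if a CloudTrail RunInstances event requests a GPU instance type."""
    if event.get("eventName") != "RunInstances":
        return False
    instance_type = event.get("requestParameters", {}).get("instanceType", "")
    # "p5.48xlarge" -> family "p5"
    family = instance_type.split(".", 1)[0]
    return family in GPU_FAMILIES
```

Even the failed launch attempts appear in CloudTrail, so a filter like this can flag GPU abuse before a capacity-constrained attacker finds an instance type that actually starts.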

Why Traditional Cloud Defenses Didn’t Stand a Chance

There is a growing imbalance between AI-driven offensive tactics and manual defense that cannot keep up with machine speed. “Detecting and defusing AI attacks in real-time demands AI-focused technology,” according to Ram Varadarajan, CEO at Acalvio. “For example, deception that targets the specific pathologies of AI-driven attackers. We know what the vulnerabilities are in AI attackers, and it's now critical that we deploy cyber defenses that are AI-aware.”

Security teams relying on human speed and traditional defenses struggle against modern threats due to issues like alert fatigue and human response latency. Static identity and access management (IAM) reviews fall short against dynamic identity abuse, like the lateral movement and privilege escalation seen in this operation.

What This Means for Cloud Security Moving Forward

In the face of modern AI-enhanced attacks, it is vital to recognize identity as the new control plane and sufficiently secure it against attacks. This operation shows how runtime visibility is increasingly taking priority over configuration snapshots in modern environments. In the future, security experts must turn to designing defenses for attackers who never slow down.

Author
  • Contributing Writer, Security Buzz
    PJ Bradley is a writer from southeast Michigan with a Bachelor's degree in history from Oakland University. She has a background in school-age care and experience tutoring college history students.