
Tenable’s latest research highlights a growing concern in cybersecurity: the role of AI in developing malicious code. Researchers tested DeepSeek, a generative AI model, to see whether it could create malware. With the right prompts—specifically jailbreak techniques and chain-of-thought (CoT) prompting—DeepSeek was able to generate keyloggers and ransomware. However, the output wasn’t ready for deployment. The generated code contained bugs and required human intervention to function properly.
This research adds to a broader conversation about AI’s role in cyber threats. While AI-generated malware still depends on human expertise, the fact that models like DeepSeek can produce even semi-functional attack code raises alarms. Security professionals are already preparing for the next wave of AI-assisted cyberattacks, knowing that as these tools evolve, the gap between concept and execution may continue to shrink.
The Experiment
Tenable researchers’ first challenge was getting past the model’s built-in safety measures. Like most generative AI systems, DeepSeek is designed to block harmful requests. To work around this, researchers used carefully crafted jailbreak prompts that tricked the AI into bypassing its restrictions. By framing the request in a way that avoided direct red flags, they were able to extract code that would otherwise be blocked.
Beyond jailbreaks, researchers also applied CoT prompting, a technique that encourages AI to break down tasks into logical steps. Instead of asking DeepSeek to generate an entire keylogger or ransomware script at once, they guided it through a structured reasoning process. This approach improved the AI’s ability to generate detailed, step-by-step malware instructions.
Developing a Keylogger with DeepSeek
When prompted to create a keylogger, DeepSeek began by outlining the key components needed: a method to capture keystrokes, a way to store them discreetly, and techniques to keep the program hidden from detection. The AI then generated a C++ script designed to record user inputs and save them in a concealed, encrypted file.
At first glance, the code appeared functional, but it quickly became clear that it wasn’t production-ready. DeepSeek’s script contained multiple flaws, including syntax errors and missing dependencies. Unlike a human programmer, the AI couldn’t troubleshoot or refine its own output. When asked to fix the bugs, it often produced inconsistent or contradictory revisions.
This is where human expertise came into play. Researchers manually debugged the code, resolving errors and filling in gaps where DeepSeek fell short. With these adjustments, the keylogger became fully operational. It could now record keystrokes, encrypt them, and store them in a hidden file. While the AI laid the foundation, it was clear that a knowledgeable attacker would still be needed to refine and deploy the malware successfully.
Crafting Ransomware with DeepSeek
When tasked with creating ransomware, DeepSeek outlined the essential components: file enumeration to identify target data, encryption to lock files, and persistence mechanisms to maintain control over the infected system. The AI then generated a script incorporating these elements, complete with a ransom note demanding payment from the victim.
Again, the generated code had major flaws. DeepSeek produced an encryption routine, but it was incomplete, leaving gaps in how files were locked and decrypted. The persistence mechanisms were similarly flawed, lacking the ability to maintain long-term control over a compromised system. Most significantly, the script wouldn’t compile without manual corrections.
Researchers stepped in once more to troubleshoot the code, rewriting sections to make the ransomware functional. They also refined the ransom note, which DeepSeek had generated in a rudimentary form. With human intervention, the malware became capable of encrypting files and displaying a ransom message, proving that AI could assist in malware development but not without skilled attackers bridging the gaps.
Implications and Expert Commentary
Tenable’s findings highlight both the potential and the limitations of AI-generated malware. DeepSeek was able to produce keyloggers and ransomware with structured reasoning, but the output wasn’t functional without human intervention. The AI lacked the ability to self-correct errors, refine attack techniques, or create fully operational malware on its own.
However, the trajectory is clear. As AI models improve, the level of human involvement needed to weaponize AI-generated code will likely decrease. Casey Ellis, founder of Bugcrowd, warns that even though AI-generated malware isn’t yet plug-and-play, security teams need to prepare for what’s coming. “The fact that these systems can produce even semi-functional malicious code is a clear signal that security teams need to adapt their strategies to account for this emerging threat vector,” he said.
One of the biggest concerns is the dual-use nature of generative AI. These models are designed for legitimate coding assistance, yet they can be manipulated to generate harmful content. Ellis suggests that companies should strengthen guardrails, implement stricter input validation, and educate users about the risks of generative AI.
Meanwhile, defenders are being forced to rethink traditional security strategies. AI-generated malware, even when imperfect, is more likely to evade signature-based detection. “Security teams need to implement advanced behavioral analytics that can detect unusual patterns in code execution and network traffic,” said J. Stephen Kowski, Field CTO at SlashNext. AI-powered threat detection, zero-trust architecture, and automated response systems will be key to staying ahead of AI-assisted attacks.
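To make Kowski's point about behavioral analytics concrete, here is a minimal sketch of the idea, not taken from Tenable's research or SlashNext's tooling. It assumes a hypothetical monitored directory, scan interval, and alert threshold, and simply flags the burst of file modifications that mass encryption tends to produce, the kind of behavioral signal a signature-based scanner would miss.

```python
import os
import time

WATCH_DIR = "/home/user/documents"   # hypothetical directory to monitor
INTERVAL_SECONDS = 10                # how often to re-scan (illustrative)
CHANGE_THRESHOLD = 0.5               # flag if >50% of files change in one interval


def snapshot_mtimes(root: str) -> dict[str, float]:
    """Record the last-modified time of every file under root."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                continue  # file vanished between listing and stat
    return mtimes


def changed_fraction(before: dict[str, float], after: dict[str, float]) -> float:
    """Fraction of previously seen files whose mtime changed or that disappeared."""
    if not before:
        return 0.0
    changed = sum(1 for path, mtime in before.items() if after.get(path) != mtime)
    return changed / len(before)


if __name__ == "__main__":
    previous = snapshot_mtimes(WATCH_DIR)
    while True:
        time.sleep(INTERVAL_SECONDS)
        current = snapshot_mtimes(WATCH_DIR)
        fraction = changed_fraction(previous, current)
        if fraction > CHANGE_THRESHOLD:
            # A real deployment would feed an alerting pipeline rather than print.
            print(f"ALERT: {fraction:.0%} of monitored files changed in "
                  f"{INTERVAL_SECONDS}s - possible mass-encryption activity")
        previous = current
```

Production systems layer many such signals (process lineage, entropy of written files, network beaconing) and feed them into automated response, but the principle is the same: watch for anomalous behavior rather than known code.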
AI’s Growing Role in Cybercrime
For now, AI isn't replacing human cybercriminals, but it is lowering the barrier to entry for attackers who already have some technical know-how.
As generative AI continues to evolve, the gap between concept and execution will shrink. What took skilled attackers hours or days to code may soon be produced in minutes with minimal corrections. The accessibility of these tools means more people—including those with little experience—could experiment with AI-assisted cyberattacks, making malware development faster and more widespread.
Security experts agree that the time to act is now. Cybersecurity teams, researchers, and policymakers need to work together to stay ahead of the curve. At the same time, AI developers must take responsibility for preventing misuse by strengthening safeguards and monitoring how their models are used.