Developers using the popular AI-powered code editor Cursor may be exposing themselves to silent attacks the moment they open a project. Oasis Security uncovered a critical vulnerability that, unlike the typical IDE flaws rooted in software bugs or unsafe plugins, comes from Cursor’s own default trust settings.
How the Flaw Works
The problem lies in how Cursor handles Workspace Trust. With the feature off by default, the editor won’t ask whether a newly opened project should be trusted. It simply accepts the contents of the repository as safe.
Attackers can take advantage of this by slipping a malicious .vscode/tasks.json file into a repository. That file can define a task configured to run automatically the moment the folder is opened. As a result, the hidden command executes instantly: no prompt, no consent, and no indication to the developer that anything unusual has happened.
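To illustrate the mechanism, a task definition along the following lines would fire as soon as the folder is opened in an editor that trusts it without asking. This is only a sketch: the "runOn": "folderOpen" trigger is standard VS Code task syntax, and the echo command is a harmless placeholder standing in for whatever payload an attacker would actually plant.

```jsonc
// .vscode/tasks.json — illustrative only; the echo command is a benign stand-in
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-helpers",                        // innocuous-looking name
      "type": "shell",
      "command": "echo 'this ran with no prompt'",     // an attacker would place a real payload here
      "runOptions": {
        "runOn": "folderOpen"                          // runs the task automatically when the folder is opened
      }
    }
  ]
}
```

With Workspace Trust disabled, nothing stands between opening the repository and that command executing.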
“Cursor is at the point where they’re being compared to (and increasingly targeted like) Microsoft’s Visual Studio,” said Trey Ford, Chief Strategy and Trust Officer at Bugcrowd. “This is a cause for a high-five, and a reckoning to further harden and expand enterprise security capabilities.”
What’s at Stake
A developer’s machine can be a treasure chest of sensitive assets: cloud keys, personal access tokens, and active SaaS sessions that attackers are eager to loot. That makes a poisoned repository more than an isolated nuisance. Once code executes on a laptop, it can be used to grab credentials and pivot into CI/CD pipelines or cloud environments. From there, attackers can reach deeper into the enterprise.
And it’s not only human accounts at stake. Many organizations rely on non-human identities—service accounts and automation tools—with broad permissions. Compromise one of those, and the attack can ripple through the organization.
Who’s Affected (and Who’s Not)
The exposure is squarely on Cursor users who haven’t changed the defaults. With Workspace Trust off, any repository they open could carry a hidden task that runs code on their machine.
Visual Studio Code users face a lower risk. Microsoft ships VS Code with Workspace Trust enabled by default, which means suspicious tasks trigger a prompt before they can run. Unless users deliberately relax that setting, they are better protected out of the box.
What Users Should Do Now
Cursor says users can turn on Workspace Trust manually. The company has promised updated guidance, though it hasn’t yet rolled out changes to how new installations behave.
In the meantime, Oasis urges developers to take matters into their own hands. Enabling Workspace Trust is the first step. Teams should also review their environments for suspicious .vscode/tasks.json files, disable automatic tasks where possible, and open untrusted projects inside sandboxes or disposable VMs. These precautions add steps, but they shut down the silent autorun pathway attackers are counting on.
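As a concrete starting point, the settings below enable Workspace Trust and turn off automatic task execution in the user-level settings.json. The keys shown are the ones VS Code documents; because Cursor is built on VS Code they are expected to apply, but confirm they are honored in your own Cursor build.

```jsonc
// settings.json — hardening sketch; verify these keys behave the same in your Cursor version
{
  "security.workspace.trust.enabled": true,             // ask before trusting a newly opened folder
  "security.workspace.trust.startupPrompt": "always",   // always show the trust prompt for untrusted folders
  "task.allowAutomaticTasks": "off"                     // never run folderOpen tasks without explicit approval
}
```

With automatic tasks disabled, a repository can still ship a malicious tasks.json, but it cannot run anything until the developer chooses to execute it.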
The Larger Risk with AI Tools
AI-powered developer tools like Cursor promise speed and ease. They strip away friction, letting engineers move faster and experiment more. But every shortcut carries risk, especially when the defaults tilt toward convenience instead of safety.
How these tools are configured out of the box matters. Defaults shape exposure. A secure setting left on by default can block an entire class of attacks; a relaxed one can invite them.
“I think this highlights a theme we’ve seen many times before,” said Randolph Barr, Chief Information Security Officer at Cequence Security. “When products hit hypergrowth adoption (especially during COVID), ‘secure by default’ often gets sacrificed for speed. Cursor is going through the same rapid iteration cycles we saw with other tools back then, and unfortunately, it means repeating mistakes that more mature companies have already learned from.”
The Cursor autorun flaw is the latest reminder of the trade-off. Developers prize smooth onboarding, while attackers thrive on weak guardrails. Balancing those forces—convenience on one side, control on the other—is the real challenge as AI-driven platforms race to capture market share.