Ransomware has long been a costly headache for individuals and businesses alike. But a new study from New York University’s Tandon School of Engineering suggests the threat may be entering a dangerous new phase, one where artificial intelligence does all the dirty work.
Researchers at Tandon have demonstrated that large language models (LLMs), the same type of AI that powers popular chatbots, can be harnessed to execute ransomware attacks autonomously. Their proof-of-concept, called “Ransomware 3.0,” shows how criminals could one day bypass the need for human hackers and let AI run attacks from start to finish.
How the AI-driven ransomware works
Unlike traditional ransomware, which depends on pre-written malicious code, this prototype carries only embedded natural-language prompts. When activated, the malware connects to an AI model that generates customized attack scripts for each victim’s device. Because the code is produced fresh every time, every attack looks different, making it much harder for security systems to detect.
The team tested its system across multiple environments, from personal laptops to industrial control systems. In each case, the AI successfully mapped networks, identified sensitive files, and generated ransom notes designed to pressure victims by referencing their own data. The researchers noted that the scripts worked across Windows and Linux systems, including embedded devices such as Raspberry Pi boards.
Discovery and mistaken alarm
The prototype first made headlines when cybersecurity firm ESET stumbled across it on VirusTotal, a platform used to scan and analyze suspicious files. Believing they had found the world’s first real AI-powered ransomware, analysts quickly raised alarms. In reality, the file was a research upload from the NYU team, not a live threat.
Still, the confusion underscores how convincing the AI-generated malware already appears. According to lead author Md Raz, the mix-up highlights “just how seriously we must take AI-enabled threats.” Even trained experts initially mistook the controlled experiment for an in-the-wild cyberattack.
Why this matters for cybersecurity
The economics of ransomware could shift dramatically if AI-powered attacks become mainstream. Traditional operations demand skilled programmers and infrastructure investment. By contrast, the NYU team’s prototype required about 23,000 AI tokens to execute an entire attack—roughly $0.70 using commercial AI services. And with open-source models, that cost drops to nearly zero.
Lowering the barrier to entry could open the door for less experienced criminals to run advanced campaigns. On top of that, AI can personalize extortion messages, increasing the psychological pressure on victims far beyond the generic ransom notes of old.
Experts suggest this adaptability makes AI-driven ransomware especially dangerous. Because each attack generates unique code, traditional defenses that rely on spotting known signatures may prove ineffective.
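To see why signature matching struggles here, consider a minimal sketch: two functionally identical scripts that differ by even a single character produce entirely different hashes, so a blocklist keyed on known file signatures never matches the next variant. The snippets below are harmless placeholders, not code from the NYU prototype:

```python
import hashlib

# Two functionally equivalent "payloads" that differ only trivially,
# standing in for AI-generated scripts that vary from victim to victim.
variant_a = b"print('enumerate files')  # variant A"
variant_b = b"print('enumerate files')  # variant B"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a)
print(sig_b)
print("Signatures match:", sig_a == sig_b)  # False: a hash blocklist misses the new variant
```

Defenses would instead need to key on behavior, such as unusual patterns of file access or unexpected outbound connections, rather than on what the malicious code happens to look like.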
What comes next
The NYU researchers emphasize that their work followed strict ethical guidelines and was confined to a controlled lab environment. Their findings, published on arXiv, aim to give cybersecurity professionals an early warning before malicious actors adopt similar techniques.
Among their recommendations: closely monitor sensitive file access patterns, restrict unnecessary AI service connections, and build new detection tools designed specifically for AI-generated threats.
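As a minimal sketch of what the second recommendation might look like in practice, the snippet below flags outbound connections to AI service endpoints that a given machine has no approved reason to reach. The hostnames, allow-list, and check are illustrative assumptions, not a tool or policy from the paper:

```python
# Illustrative egress check: alert on connections to AI service endpoints
# that are not explicitly approved for this host. Hostnames and policy
# are assumptions for the sake of the example.

SUSPECT_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

ALLOWED_AI_HOSTS = {"api.openai.com"}  # endpoints approved for this machine

def flag_connection(hostname: str) -> bool:
    """Return True if the destination is an AI endpoint this host shouldn't use."""
    return hostname in SUSPECT_AI_HOSTS and hostname not in ALLOWED_AI_HOSTS

for dest in ["api.anthropic.com", "api.openai.com", "example.com"]:
    if flag_connection(dest):
        print(f"ALERT: unexpected AI service connection to {dest}")
```

In a real deployment this kind of rule would live in a firewall, proxy, or endpoint agent rather than a script, but the principle is the same: if the ransomware cannot reach a model, it cannot generate its attack code.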
As the line between legitimate AI use and criminal exploitation blurs, one thing is clear—the race between attackers and defenders just got a lot more complicated.