Cybercrime & LLMs

    Ransomware has always been the cybercriminal’s version of a home business: low overhead, high returns, and no pesky taxes. But thanks to generative AI, it’s apparently leveling up from “basement script kiddie” to “Silicon Valley startup with venture funding.”

    Anthropic just dropped a report showing that criminals are using its own AI, Claude, not just to draft scarier ransom notes but to actually help build and distribute malware. Yes, the same AI designed to politely refuse your request for NSFW fanfiction is now moonlighting as a cybercrime consultant. To its credit, Anthropic says it has banned the account tied to one UK-based ransomware peddler (tracked as GTG-5004) and is rolling out YARA rules to stop its AI from spitting out weaponized code. In other words: “We put a lock on the front door, but don’t mind the wide-open windows.”
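    For the uninitiated, YARA rules are the security world’s pattern-matching signatures: small declarative rules that fire when content looks malicious. Here’s a minimal sketch of the idea in Python using the yara-python package; the rule name and trigger phrases are invented for illustration and are emphatically not Anthropic’s actual rules.

        # Hypothetical rule: flag model output that reads like a ransom note.
        # (Illustrative only; not one of Anthropic's real YARA rules.)
        import yara  # pip install yara-python

        RANSOM_NOTE_RULE = r"""
        rule Suspected_Ransom_Note
        {
            meta:
                description = "Flags text that reads like a ransom note"
            strings:
                $enc = "your files have been encrypted" nocase
                $btc = "bitcoin wallet" nocase
                $key = "decryption key" nocase
            condition:
                2 of them  // any two phrases together trigger the rule
        }
        """

        rules = yara.compile(source=RANSOM_NOTE_RULE)

        def looks_like_ransom_note(model_output: str) -> bool:
            """Return True if the output matches the rule (match() returns a list)."""
            return bool(rules.match(data=model_output))

        print(looks_like_ransom_note(
            "All your files have been encrypted. Send 0.5 BTC to our "
            "bitcoin wallet to receive the decryption key."
        ))  # True

    The catch, of course, is that a rule only catches patterns someone already thought to write down. Hence the wide-open windows.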

    Meanwhile, ESET researchers unveiled PromptLock, a proof-of-concept ransomware strain that uses a local AI model to write malicious Lua scripts on the fly. Imagine ChatGPT, but instead of suggesting dinner recipes, it encrypts your family photos and demands $500 in Bitcoin. Charming. PromptLock hasn’t been unleashed in the wild yet, but it’s proof that cybercriminals are experimenting with local models, because nothing says innovation like cutting out the cloud middleman in your extortion scheme.
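    Mechanically, there’s no dark magic here: ESET says PromptLock drives a locally hosted model through Ollama’s HTTP API and runs whatever Lua comes back. Below is a deliberately harmless sketch of that pattern; the model name and prompt are illustrative placeholders, and nothing in it touches your files.

        # Benign sketch of "local model writes a script at runtime."
        # Assumes an Ollama server on its default port with some model pulled;
        # the model name and prompt are placeholders, not PromptLock's.
        import json
        import urllib.request

        OLLAMA_URL = "http://localhost:11434/api/generate"

        payload = {
            "model": "llama3",  # any locally available model
            "prompt": "Write a short Lua script that prints a friendly greeting.",
            "stream": False,    # return one JSON object instead of a token stream
        }

        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

        with urllib.request.urlopen(req) as resp:
            lua_script = json.loads(resp.read())["response"]

        print(lua_script)  # freshly generated Lua, different on every run

    The detail worth losing sleep over: because the model improvises, no two runs produce identical scripts, which is exactly the kind of thing that gives signature-based detection a migraine.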

    The most unnerving part? AI isn’t just giving attackers better grammar; it’s making non-technical criminals dangerous. GTG-5004, for example, apparently couldn’t implement basic encryption on their own—Claude held their hand the whole way. It’s essentially the Duolingo of cybercrime: “Today you learned how to exfiltrate sensitive data! Congratulations, you’re 10 XP closer to becoming a ransomware kingpin.”

    Allan Liska from Recorded Future summed it up: most ransomware crews aren’t going full AI yet, but they’re happily using it to get in the door. Think of it as the LinkedIn recruiter phase of the ransomware process: AI writes the phishing email that gets them access, then the humans take it from there.

    The bottom line: ransomware has always been profitable, but AI is making it scalable. Criminals can automate the boring parts—finding targets, writing ransom notes, developing basic malware—and save their human ingenuity for more important things, like arguing on cybercrime forums about whether $400 or $1,200 is a fair price for your neighborhood’s encryption-as-a-service package.

    So yes, we’re officially in the era where the biggest danger of generative AI isn’t that it makes teenagers cheat on their essays—it’s that it’s helping criminals run more efficient startups than half of Silicon Valley.