Nearly a decade after the WannaCry attacks disrupted businesses worldwide, ransomware is still a major enterprise threat. Yet the fundamental mechanics behind most breaches remain largely unchanged, despite the rise of AI-driven tools.
Most of the discussion this year has been about LLMs on the offensive side. Some of it is warranted. A lot of it is overstated. The broader reality is that large language models have changed the production cost and polish of attacks more than they’ve changed the attack surface. That distinction matters when you’re deciding what to prioritise.
The 2026 threat landscape in numbers
Disclosed ransomware incidents rose from roughly 4,900 in 2024 to over 7,200 in 2025. Average ransom payments fell over the same period, which is the pattern you’d expect when extortion shifts from “pay to decrypt” to “pay to suppress.” Pure encryption is no longer the primary objective. Exfiltration appears in roughly three-quarters of cases, often before the encryption stage runs at all, and in a growing share of incidents the file-locking step is skipped entirely.
Ransomware-as-a-Service still drives most of the volume. Qilin, Akira and LockBit5 are the names showing up most consistently in the public incident-response assessments. The notable structural shift is geographic: a meaningful share of new affiliates and operators are based outside the traditional CIS footprint, which has implications for both takedown strategy and sanctions exposure when ransoms are paid.
Offensive use of LLMs
It helps to be specific here because the discourse drifts toward science fiction quickly. Four concrete shifts are worth flagging.
Social engineering quality. This is the most consistent and most important change. Phishing in non-English languages used to be a clear indicator. It isn’t anymore. Models produce fluent, idiomatic lures across most major business languages, mimic an executive’s writing style from public sources, and reference real internal projects scraped from LinkedIn, GitHub or vendor case studies. Detection programmes that still rely on linguistic anomalies are becoming less effective.
Reconnaissance throughput. Target profiling that previously took an affiliate several days now runs as a batch job. Job boards, code repos, vendor pages, breach dumps and exposed admin interfaces all get cross-referenced automatically. The output isn’t qualitatively new, but the volume and speed are.
Code quality at the low end. Samples like Hive0163’s Slopoly loader carry visible LLM artefacts: consistent naming, structured error handling, and inline comments that no operator writing under time pressure would bother with. The takeaway isn’t that elite malware is improving. The takeaway is that the floor has risen, which expands the pool of operators capable of fielding functional tooling.
Runtime-generated logic. Public research from NYU Tandon (Ransomware 3.0, August 2025) and reported samples like MalTerminal, PromptLock and LameHug all share a similar architecture: the binary calls out to a model to generate behaviour at runtime rather than carrying it statically. This complicates signature-based detection in obvious ways. It also creates new detection opportunities, since the egress traffic to model endpoints is itself a signal if you’re looking for it.
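That egress signal can be hunted for with very little machinery. The sketch below flags outbound connections to hosted-model API endpoints from processes that have no obvious reason to make them. The domain list, the process allow-list and the event format are all illustrative assumptions, not a vendor feed or a complete inventory of model endpoints.

```python
# Hypothetical short list of hosted-model API domains to watch for.
MODEL_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Processes expected to reach model endpoints in this hypothetical estate.
ALLOWED_PROCESSES = {"outlook.exe", "chrome.exe"}

def flag_model_egress(events):
    """events: dicts with 'process' and 'dest_domain' keys.
    Returns events where an unexpected process contacts a model API."""
    return [
        e for e in events
        if e["dest_domain"] in MODEL_API_DOMAINS
        and e["process"] not in ALLOWED_PROCESSES
    ]

alerts = flag_model_egress([
    {"process": "chrome.exe", "dest_domain": "api.openai.com"},
    {"process": "svchost.exe", "dest_domain": "api.anthropic.com"},
])
# A system process reaching a model endpoint is the anomaly worth triaging.
```

In practice the same logic would sit over DNS or proxy logs rather than an in-memory list, but the point stands: a binary that generates its logic at runtime has to phone a model to do it, and that call is observable.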
The strategic consequence of all four is what some analysts have started calling tailored extortion. An LLM-driven attack can survey the filesystem, identify the most sensitive content (regulatory exposure, sealed legal matters, internal HR material) and price the demand against that specific leverage. The flat-number ransom note is being replaced by something more uncomfortable to negotiate against.
Defensive use of LLMs
Defenders have a structurally better position here. Production-grade models, hosted by providers with reasonable safety tuning, are available without any jailbreaking required. The integrations that have actually moved metrics in the SOCs we’ve seen:
Alert triage. LLM-assisted summarisation of alert chains, initial investigation drafting, and routing decisions. CrowdStrike’s 2026 data puts AI-assisted teams at roughly 10x faster time-to-investigation, with measurably fewer fatigue-driven misses on shift handovers. The directional finding (significant speedup, modest accuracy gain) matches what we’re hearing anecdotally from teams running their own pilots.
Reverse engineering acceleration. Walking obfuscated code with a model in the loop is genuinely faster than working it alone, particularly for analysts who are competent but not specialists in malware analysis. IOC extraction that used to take half a day is sometimes down to under an hour. The caveat is that the model will confabulate plausible-sounding behaviour for code it doesn’t understand, so verification discipline matters.
Behavioural phishing detection. Since linguistic tells are disappearing, detection has had to move to behavioural and relational signals: unusual sender-recipient pairs, off-pattern request structures, link reputation deltas, and timing anomalies. Most of the mature email security vendors have shifted in this direction, and the in-house detections we’re seeing built on top of them have followed.
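A toy version of that behavioural scoring, assuming a store of previously seen sender–recipient pairs, might look like the following. The features, weights and field names are illustrative, not any vendor’s model.

```python
from datetime import datetime

def behavioural_score(msg, seen_pairs, business_hours=(8, 18)):
    """msg: dict with 'sender', 'recipient', 'sent_at' (datetime) and
    'is_payment_request' (bool). Higher score = more suspicious."""
    score = 0
    if (msg["sender"], msg["recipient"]) not in seen_pairs:
        score += 2  # first-ever contact between this pair
    if not (business_hours[0] <= msg["sent_at"].hour < business_hours[1]):
        score += 1  # sent outside normal working hours
    if msg["is_payment_request"]:
        score += 2  # high-risk request type
    return score

seen = {("cfo@example.com", "ap@example.com")}
suspicious = behavioural_score(
    {"sender": "cfo@exarnple.com",  # lookalike domain, so an unknown pair
     "recipient": "ap@example.com",
     "sent_at": datetime(2026, 5, 12, 23, 41),
     "is_payment_request": True},
    seen,
)
# → 5: unknown pair (+2), off-hours (+1), payment request (+2)
```

None of these signals depends on the text of the message, which is exactly the property you want once the text itself is model-polished.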
Tabletops and awareness content. Less glamorous, but probably the highest-leverage use for under-resourced teams. Generating realistic scenarios, regulator walkthroughs, and audience-specific awareness material at the quality bar of a dedicated training function, for the cost of a few hours of analyst time, is a real shift in what smaller security programmes can deliver.
“The biggest lesson from recent ransomware trends is that the fundamentals still matter most. Stolen credentials, unpatched edge devices, and misconfigured accounts continue to drive the majority of successful breaches. While AI is accelerating phishing and post-breach movement, it is not replacing the core weaknesses attackers have exploited for years. What remains especially concerning is the sharp rise in edge device exploitation, at a time when organisations are still taking weeks to apply critical patches,” said Kunal Mahar, Head – Security Operations at 5Tattva.
Priorities for security teams
If Anti-Ransomware Day is a prompt to revisit something, the candidates are unsurprising but worth restating:
- MFA coverage across remote access and privileged accounts, with phishing-resistant factors where the workload supports them.
- Edge device patch cadence. The 30-day median is too long. Track it as a metric, not an aspiration.
- Backup integrity and restore testing. Untested backups are not a recovery plan. Assume exfiltration occurred even when backups look clean, because in roughly three-quarters of cases it did.
- Tabletop scenarios refreshed for the current threat picture. An LLM-fluent phishing chain, a voice-cloned finance request, and a tailored extortion demand belong in the current rotation.
- Your own AI surface. Internal LLM deployments and agentic systems are now in scope as targets, with prompt injection, retrieval poisoning and over-permissioned tool access being the dominant patterns. If you haven’t inventoried what your internal models can read and act on, that’s a good Anti-Ransomware Day exercise.
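Treating patch cadence as a metric, as the list above suggests, means actually computing it per estate. A minimal sketch, assuming you can export (advisory published, patched on) date pairs per edge device:

```python
from statistics import median
from datetime import date

def median_patch_latency(records):
    """records: (advisory_published, patched_on) date pairs per device.
    Returns the median days from advisory publication to patch applied."""
    return median((patched - published).days for published, patched in records)

latency = median_patch_latency([
    (date(2026, 1, 5), date(2026, 1, 12)),  # 7 days
    (date(2026, 1, 5), date(2026, 2, 9)),   # 35 days
    (date(2026, 2, 1), date(2026, 2, 19)),  # 18 days
])
# → 18 days: better than the 30-day median, but track the trend, not the snapshot
```

The number itself matters less than whether it is measured continuously and reviewed; a median hides the long tail of devices that never get patched at all, so the p90 is worth tracking alongside it.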
Outlook
The honest summary is that 2026 looks like 2025 with better production values on the attacker side and meaningful efficiency gains on the defender side. The fundamentals that decided ransomware outcomes a decade ago are still deciding them now. LLMs change the velocity and polish of attacks more than they change the underlying playbook, which is both reassuring (the controls that worked still work) and a problem (the gaps that existed are still being exploited).
The teams that come out of this year in good shape will be the ones that resist treating AI as either a panacea or a crisis, and instead use it where it earns its keep, which is mostly in triage, analysis acceleration, and content generation for the work nobody had time for before. The underlying job hasn’t changed much. The tooling for doing it has.
Ultimately, organisations that combine operational discipline with faster response capabilities will be better positioned to manage ransomware risks in the AI era.