New malware uses AI to adapt during attacks, report finds
Summary
Researchers have observed a new wave of malware that uses large language models (LLMs) during execution to change behaviour dynamically and evade detection. Google reported strains including PROMPTFLUX, an experimental dropper that prompts an LLM to rewrite its own source code, and PROMPTSTEAL, used by the Russia-linked group APT28 to generate commands on the fly during live operations.
The research characterises this as a significant step towards more autonomous, adaptive malware. While PROMPTFLUX appears to be in a testing phase and has been disrupted by Google, PROMPTSTEAL shows that threat actors are beginning to query LLMs mid-attack rather than relying on hard-coded commands. The report also notes a growing underground marketplace offering purpose-built AI tools for criminal use, which lowers the skill barrier for less capable actors.
Key Points
- State-backed actors were observed using malware that queries LLMs during execution to generate or alter malicious code and commands.
- PROMPTFLUX is an experimental dropper that prompts an LLM to rewrite its source to evade detection; it is reportedly in a testing phase and has been disrupted.
- PROMPTSTEAL was used by APT28 to generate commands via an LLM in live operations — Google’s first observation of this in an active intrusion.
- Researchers warn this trend signals a move towards more autonomous and adaptive malware capable of altering tactics mid-attack.
- A growing underground market for AI tools is putting advanced capabilities in the hands of less-skilled criminals, expanding the overall attack surface.
Context and Relevance
This development matters because it changes the attack model defenders must anticipate. Malware that can query LLMs at runtime can vary its payloads, evade signature-based detection, and adapt to defensive measures. It raises questions about how to monitor and defend LLM endpoints, how to detect AI-driven code mutations, and how to secure supply chains and tooling used by both red and blue teams.
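One pragmatic starting point is to hunt for the parts of such malware that change least: when the payload rewrites itself on every run, the prompt text and the API plumbing used to reach a hosted model are often the most stable artifacts left behind. Below is a minimal static-triage sketch in Python; the indicator strings are illustrative assumptions for the sake of the example, not signatures drawn from the report.

```python
"""Minimal sketch: static triage for scripts that embed LLM-API indicators.

With self-rewriting malware, the payload mutates but the prompt and API
plumbing often stay recognisable. The indicator strings below are
illustrative assumptions, not signatures from Google's report.
"""

import pathlib
import sys

# Strings that suggest a script talks to a hosted LLM (illustrative only).
INDICATORS = (
    "generativelanguage.googleapis.com",  # Gemini REST endpoint
    "api.openai.com",
    "chat/completions",
    "rewrite the following code",         # prompt-like phrasing
)


def scan(root: str) -> None:
    """Walk a directory tree and report files containing any indicator."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip
        hits = [s for s in INDICATORS if s in text]
        if hits:
            print(f"{path}: {', '.join(hits)}")


if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

String matching this crude will miss obfuscated samples, but it illustrates the inversion: signature-based detection of the mutating code fails, so the hunt shifts to the comparatively fixed LLM call itself.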
For organisations and security teams, the trend emphasises the need to update detection strategies, combine behavioural analytics with AI-aware telemetry, and consider policies around access to LLMs and monitoring of suspicious queries. It also highlights an emerging criminal economy selling AI-powered tooling that lowers the barrier to entry for complex attacks.
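On the network side, a first approximation of "monitoring suspicious queries" is simply watching which processes contact hosted LLM endpoints at all. The sketch below assumes a flattened egress log with one `timestamp host process domain` record per line; the log format, domain list, and process allowlist are all hypothetical placeholders, not details from the report.

```python
"""Minimal sketch: flag processes contacting generative-AI API endpoints.

Assumes an egress log with one whitespace-separated record per line:
    <timestamp> <source_host> <process_name> <destination_domain>
The domain list and process allowlist below are illustrative assumptions;
adapt them to your own telemetry and policy.
"""

import sys

# Public API hostnames for popular hosted LLMs (illustrative, incomplete).
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
    "api-inference.huggingface.co",       # Hugging Face hosted inference
}

# Processes expected to talk to LLM APIs in this environment (assumption).
ALLOWED_PROCESSES = {"chrome.exe", "code.exe", "approved-ai-agent"}


def suspicious(record: str) -> str | None:
    """Return an alert string if the record shows an unexpected LLM call."""
    fields = record.split()
    if len(fields) != 4:
        return None  # skip malformed lines
    timestamp, host, process, domain = fields
    if domain in LLM_API_DOMAINS and process not in ALLOWED_PROCESSES:
        return f"{timestamp} {host}: unexpected process '{process}' -> {domain}"
    return None


def main(log_path: str) -> None:
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            alert = suspicious(line)
            if alert:
                print("ALERT:", alert)


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "egress.log")
```

In practice this logic would live in a SIEM rule over proxy or DNS telemetry rather than a standalone script, and exact-match domains would give way to SNI- or certificate-based matching, since adversaries can route model traffic through intermediary services.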
Why should I read this?
Short and plain: if you look after security, networks, or data, this is exactly the kind of twist you need to know about. Malware that talks to LLMs mid-attack changes how attacks evolve — and how you need to spot them. Read it so you can start asking the right questions about detection, logging and locking down AI endpoints before someone else tests it on your kit.
Author style
Punchy. This story is more than a tech curiosity — it’s a warning. If you care about defensive strategy or risk management, the details here matter. Skimming won’t cut it: the shift from AI-as-tool to AI-in-runtime could materially change incident response and detection playbooks.
Source
Source: https://therecord.media/new-malware-uses-ai-to-adapt
