Claude attacks were ‘Rorschach test’ for infosec community, scaring former NSA boss
Summary
At RSAC 2026, former NSA cyber chief Rob Joyce said the Anthropic report on Chinese actors using Claude to automate attacks acted like a Rorschach test for the security community: some dismissed it, others treated it as a major warning. Joyce argued the report demonstrated that agentic AI can reliably carry out realistic intrusion chains — mapping attack surfaces, finding vulnerabilities, writing exploits, abusing credentials, escalating privileges and stealing data. His verdict: “It freakin’ worked.” He warned that scale, patience and modular AI tools make these automated attacks likely to improve rapidly, though similar agentic tools can also aid defenders by finding zero-day bugs at machine speed.
Key Points
- The Anthropic/Claude analysis split opinion but convinced experts like Rob Joyce that automated, agentic attacks are real and dangerous.
- Agentic AI was shown to break the attack chain into discrete tasks: reconnaissance, vulnerability discovery, exploit generation, credential abuse, privilege escalation, lateral movement and data theft.
- Joyce emphasised that the crucial danger is scale and persistence: machines can scan and re-scan code without tiring.
- Defensive upside exists—AI agents (e.g. Google Big Sleep, OpenAI’s Codex/Aardvark, Anthropic tools) can find vulnerabilities faster and help harden major codebases.
- Near-term risk: rapid industrialisation of exploit generation; long-term benefit: better, harder-to-exploit software if defenders employ AI tools effectively.
- Recommended actions: use AI for code review and anomaly detection, and run agentic red teams proactively — otherwise you will be red-teamed by attackers anyway.
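The "modular tasks" pattern Joyce describes can be sketched in a few lines. This is an illustrative stub only, not code from the Anthropic report or any real offensive tool: the stage names come from the chain described above, while `stub_agent`, `run_pipeline` and the target hostname are hypothetical placeholders for whatever model-driven agent a red team (or attacker) would plug in.

```python
# Illustrative sketch: decomposing an intrusion chain (or, defensively, an
# agentic red-team exercise) into discrete tasks run in sequence.
# The agent call is a stub; a real system would invoke an LLM/agent here.
from typing import Callable

Findings = dict[str, list[str]]

# Stages mirror the chain described in the report summary.
STAGES = [
    "reconnaissance",
    "vulnerability_discovery",
    "exploit_generation",
    "credential_abuse",
    "lateral_movement",
]

def stub_agent(stage: str, findings: Findings) -> list[str]:
    """Placeholder for a model-driven agent; returns canned results."""
    return [f"{stage}: simulated result for {t}" for t in findings["targets"]]

def run_pipeline(targets: list[str],
                 agent: Callable[[str, Findings], list[str]]) -> Findings:
    """Run each stage in order, feeding accumulated findings forward."""
    findings: Findings = {"targets": targets}
    for stage in STAGES:
        findings[stage] = agent(stage, findings)
    return findings

report = run_pipeline(["app.example.internal"], stub_agent)
```

The point of the modular structure is the one Joyce makes about scale: each stage is independently repeatable, so a machine can re-run any step tirelessly and feed results forward without a human in the loop.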
Author style
Punchy. This write-up flags immediate operational risk while also highlighting the defensive tools available. If you work in security, Joyce’s take is an urgent nudge: pay attention and act.
Why should I read this?
Short version: because it actually worked — and that matters. Joyce lays out why automated AI attacks are no longer a distant threat and why defenders need to get off the fence. Read it for a clear, no-nonsense perspective on what to prioritise now so you don’t get blindsided later.
Context and Relevance
This story sits at the intersection of AI development and cybersecurity. It reinforces two ongoing industry trends: (1) LLMs and agentic systems are quickly becoming practical tools for both attackers and defenders, and (2) automation favours scale and persistence, shifting information asymmetry toward whoever best leverages machine speed. Organisations should treat agentic AI as both a threat model and a defensive opportunity — patching faster, investing in AI-driven code analysis, and practising red teaming that anticipates automated adversaries.
