Overconfidence is the new zero-day as teams stumble through cyber simulations
Summary
New data from Immersive’s Cyber Workforce Benchmark shows a worrying gap between belief and ability across cybersecurity teams. Although 94% of organisations say they can detect, respond to and recover from a major incident, real-world simulation performance is poor and has been flat since 2023: median completion times for critical threat intelligence labs remain at 17 days, teams average just 22% accuracy in tabletop and crisis drills, and simulated containment often takes more than a day.
Key Points
- 94% of organisations report they can effectively handle a major incident, yet exercises reveal only 22% accuracy under pressure.
- Median completion time for critical cyber threat intelligence labs is 17 days, unchanged since 2023.
- In immersive crisis simulations, participants averaged 60% confidence but took a median of 29 hours to contain an infection.
- 60% of training focuses on vulnerabilities more than two years old, leaving teams over-prepared for yesterday’s threats and under-prepared for novel or AI-enabled attacks.
- Only 41% of organisations include non-technical roles (legal, HR, comms, executives) in simulations, despite 90% believing cross-functional communication is effective.
- Organisations over-rely on completion rates as a proxy for readiness; fewer than half use resilience scores or measure simulation frequency.
- Experienced practitioners do well on familiar scenarios (≈80% accuracy) but struggle with novel threats; senior participation in AI scenario labs has dropped year-on-year.
Context and relevance
This report matters because it exposes a systemic illusion: confidence without evidence. Boards, regulators and cyber insurers demand resilience, and many organisations have invested heavily — yet core readiness metrics are flat or worse. As attackers adopt AI and novel techniques, rehearsing only historical threats creates a dangerous lag in defences. The findings underline a broader industry shift: measuring activity (completions) instead of capability (proven outcomes) produces a false sense of security.
For security leaders and risk teams, the takeaway is clear: broaden simulations to include business functions, update scenarios to reflect current attacker TTPs, measure readiness with resilience scores rather than completion rates, and prioritise regular high-pressure exercises that test coordination as much as technical skill.
Why should I read this?
Look — if you think your org’s ready because a bunch of staff clicked through training modules, this is the slap in the face you need. The piece cuts to the chase: too much old-school practice, too little real proof. Read it if you want to stop pretending and start actually getting better when it matters.
Author take
Punchy and plain: confidence isn’t competence. Immersive’s data is a wake-up call — not because teams lack effort, but because they’re rehearsing the wrong fights and using the wrong metrics. If you care about real incident readiness, this report should force a rethink of how you train, who you involve and how you measure success.
