Deepfake Awareness High at Orgs, But Cyber Defences Badly Lag

Summary

Organisations are increasingly encountering AI-augmented deepfake attacks, from AI-crafted phishing emails to audio and video impersonations, yet many have not invested in technical defences. Recent research cited in the article (including reports from OpenAI, Ironscales and CrowdStrike) shows wide exposure: around 85% of mid-sized firms reported deepfake or AI-voice fraud attempts, and more than half suffered financial losses. While training and awareness have improved, investment in detection tools remains inadequate, leaving companies vulnerable as attackers scale up AI-enhanced campaigns.

Key Points

  • 85% of mid-sized organisations have seen deepfake or AI-voice fraud attempts; 55% reported financial loss (Ironscales).
  • Affected organisations lost about $167,000 on average (median-adjusted) over the past 12 months.
  • Static deepfake images and AI-augmented business email compromise (BEC) attacks are the most common techniques; audio deepfakes are on track to double in 2025 (CrowdStrike).
  • The share of organisations running awareness training rose from 68% in 2024 to 88% in the past year, but two-thirds of organisations have not invested in defences specific to AI-augmented threats.
  • Attackers use LLMs, human digital twins and voice-cloning to create highly convincing, targeted lures that are hard for humans to spot.
  • Experts recommend layered defences: stronger processes (multi-step authorisations), employee training, and AI-augmented detection tools that catch threats before they reach staff (see the sketch after this list).
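
To make the "stronger processes" point concrete, here is a minimal Python sketch of a multi-step authorisation gate for high-value transfers. It is an illustrative assumption, not a method from the article: the threshold, channel names and approver count are all hypothetical, and a real workflow would live in a payments or ticketing system rather than a script.

```python
from dataclasses import dataclass, field

# Hypothetical policy values; the article does not specify any numbers.
APPROVAL_THRESHOLD_USD = 10_000
REQUIRED_APPROVERS = 2

@dataclass
class TransferRequest:
    amount_usd: float
    origin_channel: str  # channel the request arrived on, e.g. "email", "voice"
    approvals: list = field(default_factory=list)  # (approver, channel) pairs

def record_approval(req: TransferRequest, approver: str, channel: str) -> None:
    req.approvals.append((approver, channel))

def is_authorised(req: TransferRequest) -> bool:
    """Small transfers pass; large ones need out-of-band approvals.

    A deepfaked email or cloned voice only controls its own channel, so
    requiring distinct approvers on *different* channels blunts impersonation.
    """
    if req.amount_usd < APPROVAL_THRESHOLD_USD:
        return True
    out_of_band = {
        approver
        for approver, channel in req.approvals
        if channel != req.origin_channel
    }
    return len(out_of_band) >= REQUIRED_APPROVERS

# Usage: an AI-voice request for $250k stays blocked until two people
# confirm over channels other than the (possibly cloned) voice call.
req = TransferRequest(amount_usd=250_000, origin_channel="voice")
assert not is_authorised(req)
record_approval(req, "alice", "in_person")
record_approval(req, "bob", "ticketing_system")
assert is_authorised(req)
```

The design point is that no single channel, however convincing it looks or sounds, can authorise a large transfer on its own; the attacker would need to compromise multiple independent channels at once.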

Context and Relevance

This article sits at the intersection of two accelerating trends: widespread availability of generative AI that lowers the skill barrier for attackers, and a lagging defensive posture among many organisations. For CISOs, risk managers and IT leaders, the piece underlines a growing asymmetry — attackers rapidly adopt AI to scale and perfect fraud, while many defenders still rely on awareness alone rather than investing in detection and automation. The findings are relevant to any organisation handling wire transfers, payroll, or sensitive communications, and feed into broader concerns about trust, identity and fraud prevention in the age of generative AI.

Why should I read this?

Short version: attackers are using AI to make scams look and sound real, and a lot of organisations are overconfident. If you manage risk, run finance, or look after security, this is the sort of gap you need to know about now, so you can avoid being the organisation that learns the hard way. Reading this saves you time by flagging the practical gaps (better training, policy fixes, and where to spend on detection) without wading through the full reports yourself.

Source

Source: https://www.darkreading.com/cybersecurity-operations/deepfake-awareness-high-cyber-defenses-lag