Ex-CISA head thinks AI might fix code so fast we won’t need security teams
Summary
Former CISA director Jen Easterly told an AuditBoard conference that many breaches are symptoms of poor software quality rather than an unavoidable cybersecurity problem. She argued that vendors prioritise speed and cost over safety, leaving widespread, long-standing vulnerabilities in shipped software. Easterly suggested AI could be a turning point: better at finding and fixing flaws, reducing technical debt and making breaches rare anomalies rather than an accepted cost of doing business. She also backed secure-by-design principles and the White House AI Action Plan’s emphasis on security.
Key Points
- Jen Easterly says the root issue is software quality, not an inherently unsolvable cybersecurity crisis.
- Attackers exploit old, common vulnerability classes (cross-site scripting, SQL injection, memory-safety bugs) rather than exotic tools.
- AI is making attackers more capable (stealthier malware, hyper-personalised phishing) but can also help defenders identify and fix code flaws faster.
- CISA has an AI action plan; the White House AI Action Plan emphasises secure-by-design for AI systems and software.
- Easterly believes properly governed AI could shift the balance to defenders and greatly reduce breaches.
- She urges demanding more security from software vendors to materially drive down risk across the supply chain.
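Easterly's point about decades-old vulnerability classes can be made concrete with SQL injection, the canonical example: the flaw exists only because untrusted input is spliced into a query string instead of being passed as data. A minimal sketch using Python's built-in `sqlite3` module (illustrative only; the table and input here are invented, not from the article):

```python
import sqlite3

# Tiny in-memory database to demonstrate the flaw.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.commit()

# VULNERABLE: attacker-controlled input is spliced into the SQL string,
# so the quote-breaking payload rewrites the query's logic.
user_input = "x' OR '1'='1"
query = f"SELECT * FROM users WHERE name = '{user_input}'"
leaked = conn.execute(query).fetchall()  # matches every row

# SAFE: a parameterised query treats the input purely as data.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()  # matches nothing

print(len(leaked), len(safe))  # → 1 0
```

The fix has been known since the 1990s and is a one-line change, which is exactly the kind of avoidable technical debt Easterly argues vendors keep shipping.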
Content summary
Speaking at AuditBoard’s user conference, Easterly argued that the explosion of devices, platforms and data expanded the attack surface, but that most successful attacks rely on well-known software flaws. She criticised industry incentives that prioritise rapid delivery and low cost over secure code, leaving a “rickety mess” of patched, fragile infrastructure.
Easterly acknowledged AI’s dual role: empowering attackers while also enabling defenders to detect vulnerabilities and remediate technical debt at scale. She suggested that if AI is built, deployed and governed securely, it could make security breaches exceptional rather than routine. She also urged clearer framing of attacker capabilities (demystifying jargon such as APT, advanced persistent threat) and pressing vendors to accept more responsibility for the security of the software they ship.
Context and relevance
This piece matters because it frames cybersecurity as a software-quality issue and places AI at the centre of the potential solution. For security teams, vendors and procurement leads, the implications are practical: push for secure-by-design, treat vendor risk as front-line risk, and plan for AI-driven tools to change how vulnerability discovery and remediation are done.
The article sits amid growing policy attention to AI governance and multiple government pushes for memory-safe languages and secure development practices. Easterly’s comments reinforce a trend where risk management will increasingly focus on development standards, supplier obligations and AI-enabled code hygiene.
Why should I read this?
Because it’s Jen Easterly — ex-head of CISA — saying the problem isn’t mysterious hackers but sloppy software, and that AI might actually fix it. If you work with software, buy it, defend it or regulate it, this is the kind of big-picture take that should change how you budget, hire and hold suppliers to account. Short version: less drama about glam hacker names, more pressure on vendors and a very real chance AI will change how we stop breaches. Worth five minutes.
