Rapid AI-driven development makes security unattainable, warns Veracode

Summary

Veracode’s 2026 State of Software Security report, analysing 1.6 million applications tested on its cloud platform, finds that more vulnerabilities are being created than fixed. Security debt — known vulnerabilities unresolved for over a year — affects 82% of organisations (up from 74% last year), while high‑risk vulnerabilities rose from 8.3% to 11.3%. The study combines static, dynamic and software composition analysis plus manual penetration testing.

Some positives: apps with open‑source vulnerabilities fell from 70% to 62% and overall flaw prevalence edged down from 80% to 78%. Veracode cites wider use of testing tools (which uncovers more issues) but points to accelerating release velocity and growing complexity — including AI‑generated code — as drivers widening the remediation gap. The report says AI can both help and hinder: it can identify vulnerabilities and automate fixes, yet also create false positives, introduce new flaws and enable attackers through techniques like prompt injection. Veracode concludes that the velocity of AI‑era development makes comprehensive security unattainable without transformational change.

Key Points

  1. Analysis covers 1.6 million applications using static analysis, dynamic analysis, software composition analysis and manual penetration testing.
  2. Security debt now affects 82% of organisations, up from 74% a year earlier.
  3. High‑risk vulnerabilities increased from 8.3% to 11.3% year‑on‑year.
  4. Open‑source vulnerabilities decreased from 70% to 62%; overall flaw prevalence fell slightly (80% → 78%).
  5. Rapid release cycles and AI‑generated code add complexity and widen the remediation gap.
  6. AI is double‑edged: it can find and sometimes fix issues but also produces false positives and new attack surfaces.
  7. Veracode calls for transformational change, arguing that incremental improvements are insufficient to address the problem.

Context and Relevance

This report is important for developers, security teams and engineering leaders. It quantifies a growing mismatch: detection is improving (so more issues are found) while fast, AI‑assisted development pumps out new code faster than teams can remediate. The result is rising technical and security debt, higher operational risk and a heavier burden on human reviewers who must triage noisy AI outputs. The findings tie into broader industry trends — rapid AI adoption, automated code generation, and the need to rethink governance, testing and remediation workflows rather than relying on incremental tooling additions.

Why should I read this?

Look — if your team is using AI to speed up coding but hasn’t got a plan to fix the mess it creates, this report is a wake‑up call. It shows you where the holes are, why they’re getting bigger, and why slapping on another scanner won’t cut it. Read it to find leverage points for real change (and to start arguing for the resources you actually need).

Source

Source: https://www.theregister.com/2026/02/26/veracode_security_ai/