Meet the Gods of AI Warfare
Summary

This is an excerpt from Katrina Manson’s book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. It traces the evolution of Project Maven from a controversial Pentagon experiment into the Maven Smart System — a Palantir-built platform that fuses geospatial and sensor data with AI detections and has been widely adopted across US agencies, combatant commands and allied partners.

The piece follows key figures (notably Marine Colonel Drew Cukor and Vice Admiral Frank “Trey” Whitworth), charts internal Pentagon debate and public protest (including Google worker objections), and shows how Maven sped up targeting cycles, scaled to thousands of detections and became operationally central in regions from the Middle East to Europe and the Indo‑Pacific. The excerpt also canvasses ethical, legal and practical worries: record‑keeping and accountability, training shortfalls, AI hallucinations, hacking and the risk of normalising automated lethal decision‑making.

Key Points

  • Project Maven grew into Maven Smart System, a Palantir platform that fuses imagery, sensor feeds and AI detections for targeting and intelligence.
  • Internal Pentagon scepticism (cost overruns, weak accountability and procedural shortcuts) gave way to strong institutional uptake, led by NGA under Vice Admiral Whitworth.
  • Maven has been used operationally in multiple theatres (CENTCOM, Ukraine, Middle East) to identify and prioritise targets, accelerating the kill chain from hours to minutes.
  • Contracts and budgets ballooned: Palantir secured large Army and multi‑service deals, NATO and UK interest followed, and programme ceilings rose into the hundreds of millions and beyond.
  • AI integration extended across commands and functions — from missile and ship detection to border and narcotics policing — raising concerns about mission creep and domestic use.
  • Training and doctrine lag behind technology: many users of Maven receive little standardised instruction despite the platform’s integration with weapons pairing and effects selection.
  • Technical risks persist: model hallucinations, adversarial data manipulation, bandwidth and electronic footprint vulnerabilities were flagged by officials and technologists.
  • NGA moved to improve oversight with model assessment cards and accreditation, but policy language on human judgement remains vague about what counts as adequate human oversight.
  • Maven’s spread has cultural impact: proponents portray it as indispensable, while critics warn it normalises automation in lethal decisions and could bring overseas warfighting methods home.
  • The excerpt dramatises personal dynamics — Cukor as a crusading founder, Whitworth as a sceptic turned evangelist — illustrating how individuals shaped the programme’s direction.

Context and Relevance

For readers following defence policy, AI ethics or tech governance, this excerpt is a timely account of how a single platform can reshape operational practice across intelligence and military systems. It shows the intersection of private tech (Palantir), government agencies (NGA, CENTCOM, NORAD) and political change — including shifts under the second Trump administration — which accelerated adoption and broadened use cases beyond traditional battlefields.

The piece is relevant to ongoing debates about autonomy, accountability and the legal frameworks that govern lethal force. It highlights pressing practical questions for policymakers and military planners: how to certify AI models, how to train personnel, how to preserve audit trails, and how to prevent mission creep into domestic policing. It also illustrates commercial and geopolitical consequences as allies acquire similar systems.

Why should I read this?

Because this is where theory meets the mess of real war. If you care about whether machines should be trusted with life‑and‑death choices, or how Silicon Valley code ends up steering military strikes, you want the inside story. It’s vivid, a bit scary, and explains why debates about AI in war aren’t abstract any more — they’re changing how decisions are made on the ground.

Author style

Punchy — Manson blends on‑the‑ground reporting, insider interviews and institutional detail into a brisk, consequential account. If this topic matters to you, the excerpt rewards close reading: it names who built the tools, who pushed them, and what went wrong or could yet go wrong.

Source

Source: https://www.wired.com/story/project-maven-katrina-manson-book-excerpt/