All your bots are belong to US if you don’t play ball, DoD tells Anthropic
Summary
The Pentagon has put Anthropic in a corner: bend, or be threatened and blacklisted. The US Department of Defense (DoD) met with Anthropic's CEO and is reportedly prepared to use the Defense Production Act (DPA), label the company a supply-chain risk, or even cancel a multi-million-dollar contract to force wider military use of Anthropic's AI.
The dispute centres on Anthropic’s red lines: no autonomous weapon targeting decisions and no domestic surveillance of US citizens. Coinciding with the meeting, Anthropic issued Responsible Scaling Policy v3, dropping a longstanding pledge to halt training models it could not guarantee were safe, citing competitive pressures.
Key Points
- The DoD met with Anthropic leadership and pressed the company to relax safeguards on military use of its AI.
- The Pentagon warned it could compel compliance via the Defense Production Act if Anthropic refuses to agree by a stated deadline.
- The DoD may declare Anthropic a supply-chain risk, forcing government contractors to remove Anthropic software — a potentially severe commercial penalty.
- The agency could also terminate an existing contract (reported up to $200m) if terms are not accepted.
- Anthropic’s new Responsible Scaling Policy v3 removes a key pledge to stop training models it cannot guarantee as safe, citing the need to remain competitive.
- Anthropic says its red lines remain: no final targeting decisions by AI and no domestic surveillance of Americans, even if lawful.
- The Pentagon maintains that its intended uses are lawful and that responsibility for legal use rests with the end user, i.e. the DoD itself.
Context and relevance
Why this matters: The clash highlights a broader tension between commercial AI firms’ safety commitments and national security imperatives. Governments around the world are increasingly assertive about ensuring access to advanced AI capabilities for defence and intelligence purposes. The DoD’s willingness to invoke the DPA or blacklist a supplier signals a hardening stance that could reshape vendor behaviour, procurement risk assessments, and industry safety norms.
For organisations tracking AI governance, supply-chain risk, or defence procurement, this story is a bellwether. It also speaks to competitive dynamics in AI: Anthropic says it dropped its training moratorium because competitors are moving faster — a reminder that safety pledges can come under commercial pressure.
Why should I read this?
Short version: because this could change who gets to decide how AI is used in war and surveillance — and who gets paid. If you’re in tech, defence, procurement or policy, this is where safety promises meet real-world leverage. It’s also a neat example of how governments can force tech companies’ hands without firing a shot.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/02/25/pentagon_threatens_anthropic/
