Anthropic sues US government after unprecedented national security designation

Summary

Anthropic has launched a court challenge after the US Department of Defense, restyled by the administration as the ‘Department of War’, officially designated the company a supply-chain risk to national security on 4 March 2026. It is the first time a US-based AI firm has received that classification, which blocks Anthropic from obtaining military contracts.

CEO Dario Amodei says the decision is ‘not legally sound’ and that Anthropic has ‘no choice but to challenge it in court’. The dispute centres on Anthropic’s refusal to permit the government to remove the firm’s safety guardrails — specifically exceptions banning fully autonomous weapons and mass domestic surveillance. President Trump publicly criticised Anthropic and ordered federal departments to stop using its products.

The move follows a public rift in which OpenAI struck a separate deal with the Department of War and disagreed with the designation, arguing that its own contract offers enforceable safeguards. The Department did not comment.

Key Points

  • The Department of Defense designated Anthropic a supply chain national security risk on 4 March 2026, the first such designation for a US AI company.
  • The designation effectively bars Anthropic from obtaining military contracts; the company is now challenging it in court.
  • Anthropic refused requests to remove safety constraints that would permit fully autonomous weapons and mass domestic surveillance.
  • CEO Dario Amodei called the designation legally unsound and said the company wants to protect its safety red lines while still collaborating with government where possible.
  • President Trump publicly attacked Anthropic and ordered federal agencies to stop using its products, escalating the dispute.
  • OpenAI struck a deal with the Department and publicly disagreed with the designation, citing enforceability and stronger safeguards in its own agreement.

Why should I read this?

Because this isn’t business-as-usual — it’s the first time a US AI firm has been blacklisted on national security grounds, and it kicks off a legal and policy fight that could reshape how government and AI labs cooperate. If you care about AI safety, procurement, or who gets to decide how powerful models are used, this matters right now.

Context and relevance

The case sits at the intersection of AI ethics, national security and procurement. It highlights a growing rift between companies seeking to enforce safety commitments and a government push for operational access. The outcome will influence future contracts, industry self-regulation, and whether safety guardrails can be contractual limits rather than overridden by state actors.

For the AI industry, this could set a precedent on whether firms can legally refuse requests that would make their models usable for fully autonomous weapons or mass surveillance. For policymakers, it raises questions about how to balance national security needs with private-sector safety commitments.

Author’s note

This is a significant development: expect litigation, political theatre and ripple effects across AI procurement and safety debates.

Source

Source: https://go.theregister.com/feed/www.theregister.com/2026/03/06/anthropic_left_with_no_other/