Altman said no to military AI abuses – then signed Pentagon deal anyway
Summary
OpenAI CEO Sam Altman publicly defended limits on military uses of AI—saying OpenAI would not support mass surveillance or autonomous lethal weapons—but within hours the company signed a reported $200m deal with the US Department of Defense that contains no equivalent contractual protections. Sources suggest the contract allows the military broad rights to use OpenAI’s technology for any lawful purpose, including bulk data analysis. The move has provoked community backlash, fuelled claims of hypocrisy, and handed a PR and strategic win to rival Anthropic, which has seen demand and revenue surge amid the dispute.
Key Points
- Altman publicly stated red lines against mass surveillance and autonomous lethal weapons, then, the same day, signed a DoD deal that lacks those protections.
- The contract is reported to give the Pentagon broad authority to use OpenAI’s models for any lawful purpose, including bulk analysis of Americans’ data.
- OpenAI reportedly received about $200m for the deal amid heavy spending and funding pressures.
- Community and employee outrage followed, with calls to cancel OpenAI services and criticism on social platforms.
- Anthropic has benefited commercially from the controversy, seeing a large boost in enterprise revenue and market share.
- Altman publicly says he is working to improve the agreement, but has reportedly told staff that operational control rests with the government, a stance that may damage OpenAI’s reputation.
Context and relevance
This story sits at the intersection of AI ethics, national security contracting and market competition. It matters because private AI firms increasingly make decisions that shape how powerful models are used by governments — from surveillance to military targeting. The deal highlights tensions between ethical commitments from AI companies and commercial or strategic incentives tied to large government contracts. It also signals potential long-term reputational and commercial consequences for OpenAI as rivals like Anthropic capitalise on perceived principled stands.
Why should I read this?
Because it’s drama with consequences: big money, broken promises and real-world risks. If you care about AI ethics, where models get deployed, or who wins in the enterprise LLM market, this one’s worth five minutes of your time.
Author style
Punchy: the piece pulls no punches, framing the deal as a startling reversal and arguing that the short-term cash may cost OpenAI long-term trust and market position. If this affects your strategy or risk view on providers, read the detail.
Source
Source: https://go.theregister.com/feed/www.theregister.com/2026/03/06/openai_dod_deal/
