Sam Altman is asking the public, his employees, and the government to take him at his word: OpenAI’s new Pentagon deal, he insists, upholds the same ethical principles his company shares with the freshly blacklisted Anthropic. Whether that word can be trusted under sustained government pressure is the central question the AI industry now faces.
Anthropic’s blacklisting came after a prolonged effort to negotiate government AI contracts that excluded two specific uses — autonomous weapons and mass surveillance. Pentagon officials viewed these exclusions as unacceptable restrictions and pushed for their removal. When Anthropic refused, President Trump issued a sweeping ban on all federal use of the company’s products.
The ban, delivered via Truth Social, accused Anthropic of attempting to override constitutional principles with corporate ethics policies — a framing designed to cast ethical AI governance as politically suspect. The speed and severity of the response were intended as a warning to the entire industry.
Altman responded with a deal and a reassurance. He stated publicly and privately that OpenAI would not allow its technology to be used for mass surveillance or autonomous killing, and that these commitments are written into the Pentagon contract. He also called for the government to standardize these terms, a move that implicitly aligned OpenAI with Anthropic’s position even as the company took the contract Anthropic lost.
Before Altman’s announcement, hundreds of AI workers at OpenAI and Google had already signed a letter calling out the government’s divisive tactics. Their message, that the industry should stand together, now hangs over OpenAI’s deal as a test of the company’s actual commitments in the months and years ahead.

