Microsoft has put a compelling national security case for Anthropic before federal judges in San Francisco, arguing in a court brief that the Pentagon’s supply-chain risk designation against the AI company actually undermines the defense capabilities it claims to protect. The brief, which calls for a temporary restraining order, was joined by a separate filing from Amazon, Google, Apple, and OpenAI. The case is rapidly becoming a landmark confrontation over the governance of AI in national security contexts.
Anthropic’s dispute with the Pentagon began during negotiations over a $200 million contract, when the company refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth then labeled the company a supply-chain risk, triggering the cancellation of Anthropic’s government contracts. Anthropic responded by filing simultaneous lawsuits in California and Washington, DC, arguing that the designation was unconstitutional and without precedent.
Microsoft’s brief is informed by its direct use of Anthropic’s AI in federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. The company also holds additional agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft publicly argued that the government and the technology sector must work together to ensure that AI serves national security without crossing ethical lines related to surveillance or autonomous warfare.
Anthropic’s court filings argued that the supply-chain risk designation was an act of unconstitutional ideological punishment for its publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous decision-making, which it said was the genuine basis for its contract demands. The Pentagon’s technology chief has publicly stated that there is no chance of renegotiation.
Congressional Democrats have separately written to the Pentagon demanding information about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school, including questions about AI targeting tools and human oversight. Their formal inquiries are adding legislative pressure to an already intense legal confrontation. Together, these parallel developments are creating an extraordinary moment of accountability for the use of artificial intelligence in American military operations.