Microsoft backs Anthropic in legal fight with the Pentagon


Microsoft has backed Anthropic’s lawsuit against the Pentagon, warning that the “drastic” and “unprecedented” moves against the AI start-up would have “broad negative ramifications” for the US tech industry.

In a filing on Tuesday, the software giant called for a temporary restraining order blocking enforcement of the defence department’s decision to brand Anthropic a supply chain risk while the court considers the start-up’s legal challenge.

Microsoft is the first Big Tech company to take sides in Anthropic’s feud with the Pentagon over the terms governing military use of its AI models.

The clash has split Silicon Valley, which until recently had carefully avoided openly challenging the Trump administration since the president’s return to office.

Microsoft argued a restraining order would allow time for a negotiated settlement and “reasoned discussion” about the use of AI in military and intelligence operations.

Negotiations between Anthropic and the Pentagon collapsed late last month after the $380bn start-up rejected a contract for military deployment of its technology. Chief executive Dario Amodei insisted on “red lines” prohibiting its use for lethal autonomous weapons and mass surveillance of US citizens.

Defence secretary Pete Hegseth has since moved to cut Anthropic from the Pentagon supply chain, a measure normally reserved for companies from China or Russia.

The administration has also demanded that all federal agencies stop using Anthropic’s Claude chatbot as part of a campaign against what it calls “woke” AI.

Microsoft said its “position is that AI should be focused on lawful and appropriately guarded use cases”.

AI, it said, “should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war”.

The Redmond-based company, which has large military contracts, added that rapidly cutting off Anthropic could “hamper US warfighters at a critical point in time”.

Claude is currently the only AI tool used in classified military settings, although OpenAI recently signed a deal with the Pentagon.

Microsoft is not a party to the case and submitted its amicus brief to courts in California and the District of Columbia.

On Monday, a group of more than 30 researchers at Google and OpenAI, including DeepMind’s chief scientist Jeff Dean, threw their personal support behind Anthropic in a separate letter.

The White House and the Pentagon did not immediately respond to requests for comment.

Despite owning 27 per cent of arch-rival OpenAI, Microsoft has fostered close ties with the Claude chatbot maker and signed a $30bn cloud-computing deal with it in November.

This week the company said it was integrating Anthropic’s popular coding models into its business software, which is used across the US government.

Google, Amazon and Microsoft have previously said their lawyers had determined they could continue to use Anthropic for non-military work.

Microsoft said the supply chain risk designation “forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a US company”.

“This is not the time to put at risk the very AI ecosystem that the administration has helped to champion,” it added.
