
Google and OpenAI staff support lawsuit
Another brief supporting Anthropic was filed by a group of technical, engineering, and research employees of Google and OpenAI. (Google is an investor in Anthropic.) The employees wrote that “mass domestic surveillance powered by AI poses profound risks to democratic governance—even in responsible hands.” On autonomous weapon systems, they wrote that “current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails.”
The Google and OpenAI employees argued that by using the supply chain risk designation “in response to Anthropic’s contract negotiations, [the Pentagon] introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology’s deployment.”
Anthropic CEO Dario Amodei explained the company’s objections to certain AI uses in a February 26 post. “We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” he wrote.
Current law allows the government to “purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” and “AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale,” Amodei wrote.
CEO: Autonomous weapons too risky
Amodei expressed support for partially autonomous weapons like those used in Ukraine, but not for fully autonomous weapon systems “that take humans out of the loop entirely and automate selecting and engaging targets.” He said that fully autonomous weapons “may prove critical for our national defense” eventually, but that AI is not yet reliable enough to power them.
