On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he’d found a unique way to keep those same limits in OpenAI’s contract.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added, using the Trump Administration’s preferred name for the Defense Department, the Department of War.
Across social media and the AI industry, people immediately began to challenge Altman’s claim. Why, they asked, would the Pentagon suddenly agree to red lines it had said, in no uncertain terms, it would never accept?
The answer, sources told The Verge, is that the Pentagon didn’t budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.
One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. Read line by line, the source said, every provision of OpenAI’s terms boils down to this: if it’s technically legal, the US military can use OpenAI’s technology to carry it out. And over recent decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs and more.
OpenAI’s former head of policy research, Miles Brundage, said on X that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”
In a statement to The Verge, OpenAI spokesperson Kate Waters said the Pentagon had not asked for mass surveillance powers and denied that the agreement allowed for the crossing of certain lines. “The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way,” Waters said.
AI systems could help the military (or other departments) conduct widespread surveillance operations with unprecedented levels of detail. AI’s great strength is finding patterns, and human behavior is nothing if not a set of patterns: imagine an AI system layering, for any one individual, geolocation data, web browsing information, personal financial data, CCTV footage, voter registration records, and more, some publicly available and some purchased from data brokers. “Using these systems for mass domestic surveillance is incompatible with democratic values,” Anthropic CEO Dario Amodei wrote in a statement. “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”
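To make the mechanism Amodei describes concrete, here is a minimal sketch; every data source, field name, and join key below is hypothetical, and real systems would be far more elaborate:

```python
from collections import defaultdict

# Hypothetical feeds: each yields (person_id, attribute, value) tuples.
# In practice these could be geolocation pings, browsing logs, purchased
# broker data, CCTV matches, voter rolls, and so on.
def geolocation_feed():
    yield ("p-1041", "location", "37.77,-122.41 @ 2026-02-06T18:04Z")

def voter_roll_feed():
    yield ("p-1041", "party_registration", "independent")

def broker_feed():
    yield ("p-1041", "recent_purchase", "one-way ticket SFO->IAD")

def build_profiles(*feeds):
    """Fuse individually innocuous records into per-person dossiers."""
    profiles = defaultdict(lambda: defaultdict(list))
    for feed in feeds:
        for person_id, attribute, value in feed():
            profiles[person_id][attribute].append(value)
    return profiles

profiles = build_profiles(geolocation_feed, voter_roll_feed, broker_feed)
print(dict(profiles["p-1041"]))
```

Each record on its own is mundane; it’s the join on a shared identifier, repeated across millions of people, that turns scattered data into the “comprehensive picture” Amodei warns about.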
While Anthropic says it pushed for a contract that specifically proscribes the practice, OpenAI appears to rely heavily on existing legal limits. It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”
But this isn’t reassuring. In the years after 9/11, US intelligence agencies ramped up surveillance programs that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones). In 2013, National Security Agency contractor Edward Snowden revealed the extent of some of these programs, such as reportedly collecting telephone records of Verizon customers on an “ongoing, daily” basis, and gathering bulk data on individuals from tech companies like Microsoft, Google, and Apple via a secretive program called PRISM. Despite promises of reform from intelligence agencies and attempts at legal changes, few significant limits to these powers were enacted. Mike Masnick, founder of Techdirt, said online that OpenAI’s deal “absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.”
“The intelligence law section of this is very persuasive if you don’t realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities,” Palisade Research’s Dave Kasten wrote of OpenAI’s agreement.
The Pentagon “has not asked us to support that type of collection or analysis, and our agreement does not permit it,” Waters said. “Our agreement does not permit uses of our models for unconstrained monitoring of U.S. persons’ private information, and all intelligence activities must comply with existing US law.” In practical terms, she reiterated, this means the system cannot be used to collect or analyze Americans’ data in a “bulk, open-ended, or generalized way.”
Anthropic’s Amodei has publicly said that the law has not yet caught up with AI’s ability to conduct surveillance on a massive scale. And Altman takes pains in his statement to say that the DoW “reflects [OpenAI’s red lines] in law and policy,” meaning OpenAI is simply abiding by existing laws and existing Pentagon policies, the latter of which can change at any time. (OpenAI attempts to address that issue in a Q&A, where it says the contract “explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.”)
Sarah Shoker, a senior research scholar at the University of California, Berkeley, and former lead of OpenAI’s geopolitics team, told The Verge that “I think there are a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave.” The language is too vague, Shoker added, to make clear what exactly is prohibited. “The use of the word ‘unconstrained,’ the use of the word ‘generalized,’ ‘open-ended’ manner — that’s not a complete prohibition. That is language that’s designed to allow optionality for the leadership … It allows leaders also not to lie to their employees in the event that the Pentagon does use the LLM in a legal manner without OpenAI leadership’s knowledge.”
Based on what we’ve seen of OpenAI’s contract, and given the Pentagon’s current legal constraints, the Pentagon could legally use OpenAI’s technology to search foreign intelligence databases for information on Americans at a large scale. It could also buy bulk location data from data brokers and use OpenAI’s tech to map out Americans’ typical movement patterns, or to quickly and seamlessly build profiles of many American citizens from publicly available data, including surveillance footage, social media posts, online news, voter registration records, and more, potentially layered onto data it has already purchased.
OpenAI’s “red line” on lethal autonomous weapons is similarly weak. Its contract with the Pentagon, excerpts of which the company released on Saturday, states that OpenAI’s technology “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” That would put it in compliance with a 2023 Department of Defense directive. There appear to be no additional contractually obligated bans or restrictions, which is ostensibly why OpenAI was able to sign an agreement with the Pentagon. Anthropic, meanwhile, sought a ban on unsupervised lethal autonomous weapons, at least until it deemed the technology ready.
The source said that most of OpenAI’s agreement was nothing new: other AI companies in Pentagon deals had already seen its elements, either floated in negotiations or as practices they were already following.
After a Trump administration official confirmed that OpenAI’s agreement “flows from the touchstone of ‘all lawful use,’” Altman cited other parts of the agreement to make the case that OpenAI was maintaining its red lines. He said some OpenAI employees would receive security clearances to check in on the systems, for example, and that OpenAI would introduce classifiers (or small models that can monitor and tag large models, potentially blocking them from performing certain actions). In OpenAI’s blog post about the agreement, the company writes that its deployment architecture “will enable us to independently verify that these red lines are not crossed, including running and updating classifiers.”
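OpenAI hasn’t published how its classifiers work. As a rough sketch of the general technique (the labels, trigger logic, and function names here are invented, not OpenAI’s), a classifier gate looks something like this:

```python
# Sketch of a small classifier gating a larger model's traffic.
# Hypothetical labels and logic; OpenAI has not disclosed its setup.
BLOCKED_LABELS = {"bulk_surveillance", "autonomous_weapons_control"}

def classify(text: str) -> set[str]:
    """Stand-in for a small model that tags text with policy labels."""
    labels = set()
    if "track every" in text.lower():
        labels.add("bulk_surveillance")
    return labels

def guarded_completion(prompt: str, generate) -> str:
    # Screen the request before it reaches the large model...
    if classify(prompt) & BLOCKED_LABELS:
        return "[request blocked by policy classifier]"
    output = generate(prompt)
    # ...and screen the output on the way back.
    if classify(output) & BLOCKED_LABELS:
        return "[response blocked by policy classifier]"
    return output

print(guarded_completion("Track every Verizon customer's location",
                         generate=lambda p: "..."))
```

Note that a gate like this judges each request in isolation, one prompt and one response at a time.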
But that’s not necessarily true, according to the source, who said AI companies involved with the Pentagon already use these safeguards, and their impact is limited. Classifiers, for instance, can’t confirm whether a human reviewed an AI system’s decision to attack a target before the kill strike. Nor can they tell whether a query to summarize an American’s social media posts is a one-off request or part of a mass surveillance program. And if the government determines an action is legal, OpenAI’s classifiers wouldn’t be allowed to block the technology from carrying it out, the source said.
Altman said OpenAI’s deal includes “human responsibility for the use of force, including for autonomous weapon systems.” That’s different from Anthropic’s demand: not deploying these systems “without proper [human] oversight.” Comparison is difficult when the contracts’ definitions of these terms aren’t publicly available, but human responsibility could easily mean someone being held accountable for a system’s decisions after the fact, while Anthropic’s oversight requirement would have put humans in the loop before or during an AI system’s decision to kill a target.
As with mass surveillance, OpenAI argues technical safeguards would help maintain its red line for killer robots. The company wrote that it was “not providing the DoW with ‘guardrails off’ or non-safety trained models,” and its technology would be deployed only in the cloud, not on edge devices (or devices that process data locally, such as a military drone) — where it said “there could be a possibility of usage for autonomous lethal weapons.”
But the source said that deploying OpenAI’s technology only in the cloud means little for either of OpenAI’s stated limits. Mass domestic surveillance, the source said, requires such a large volume of data that it’s virtually impossible not to carry it out using the cloud. And even if most kill decisions are carried out on a local machine, most of the decisions leading up to that — the “autonomous kill chain” — involve running powerful algorithms in the cloud first, the source said. Even if OpenAI’s tech isn’t directly involved in pulling the trigger, it could very well be powering everything leading up to that point, with no guarantee a human oversees the final step.
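To see why, consider a simplified, entirely hypothetical picture of the “autonomous kill chain” the source describes, in which the compute-heavy steps run in the cloud and only the last step lives on the device (all function names below are invented for illustration):

```python
# Hypothetical pipeline illustrating the source's point: even if the
# final "engage" step runs on a local device, every upstream decision
# can be driven by cloud-hosted models.

def cloud_detect(sensor_feed):          # runs in the cloud
    """Heavy model: find candidate targets in raw sensor data."""
    return [{"id": "t-07", "score": 0.98}]

def cloud_prioritize(candidates):       # runs in the cloud
    """Heavy model: rank candidates and propose an engagement."""
    return max(candidates, key=lambda c: c["score"])

def edge_engage(target):                # runs on the device
    """Tiny final step; may or may not involve human review."""
    print(f"engaging {target['id']}")

target = cloud_prioritize(cloud_detect(sensor_feed="..."))
edge_engage(target)  # a cloud-only restriction governs only this line
```

Restricting OpenAI’s models to the cloud rules out only that last line; the decisions that shape it can all happen upstream.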
And, again: OpenAI’s agreement says it will allow anything the US government determines is legal. Even its assurances that it will only follow current laws and policies, not ones that are changed or reissued, may not offer meaningful safeguards. In the past, agencies have reinterpreted existing laws in ways that effectively allow them new powers. And the Trump administration has claimed laws like the International Emergency Economic Powers Act justify unprecedented presidential powers like imposing global tariffs. These powers have, in fact, sometimes been declared illegal — but only after months of legal battles, during which OpenAI would have to either follow the administration’s orders or make an independent judgment call about the law. Altman has publicly stated that, unlike Anthropic, OpenAI is “generally quite comfortable with the laws of the US.”
Defense Secretary Pete Hegseth and President Trump, in a barrage of social media posts, crowed that they would never allow a private tech company to influence how the US military uses technology for war. “The Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives,” Hegseth wrote, adding that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.”
Even Jeremy Lewin, an undersecretary in the Trump administration, said that the Pentagon’s deal with OpenAI (and another agreement with xAI) was a “compromise that Anthropic was offered, and rejected,” meaning the terms did not align with Anthropic’s own red lines. Lewin said the deals included certain mutually agreed-upon safety mechanisms, presumably the technical safeguards Altman mentioned.
In his Friday announcement, Altman said OpenAI had asked the Pentagon to “offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.” It read as a dig at Anthropic, which, according to Lewin, had already been offered the same deal and turned it down.
Refusing that “compromise” has had major consequences for Anthropic. On Friday, after negotiations between the company and the Pentagon broke down, the department announced it would label Anthropic a supply-chain risk, a classification usually reserved for foreign companies with cybersecurity concerns and virtually never made public or applied to an American company. Anthropic said it was willing to challenge the designation in court. Trump ordered federal agencies to drop Anthropic’s AI, and it wasn’t immediately clear to what extent the Pentagon would blacklist companies that use Claude for services unrelated to national security.
Tech workers across the industry have backed Anthropic’s decision to stand firm and wondered why their own companies weren’t adopting the same red lines. The company’s stance has been lauded online, and on Saturday Claude surpassed ChatGPT to become the most-downloaded app on Apple’s App Store. Public figures, celebrities, and AI leaders have expressed their support, including pop star Katy Perry, who signed up for a Claude Pro subscription.
It’s worth repeating, however, that despite being largely painted as a hero here, Amodei is not against lethal autonomous weapons in the future; Anthropic has made clear it’s fully ready to support them. In his public statements, Amodei has even offered to partner with the DoD on “R&D to improve the reliability of these systems,” which could accelerate the military’s use of lethal autonomous weapons under Anthropic’s terms. All Amodei has said is that the technology is not reliable enough “today” to kill human targets unsupervised.
“Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense,” Amodei said. “But today, frontier AI systems are simply not reliable enough to power [them].”