Hello and welcome to Regulator, the newsletter for Verge subscribers that goes inside Washington’s increasingly existential clashes between tech and politics. If this was forwarded to you, can I interest you in a full-fledged subscription to The Verge for only $40 a year? You’ll get so much more than doomer scenarios. We cover non-existential fun stuff like Legos, too.
Do you work somewhere involving government, technology, and existential threats? Send all tips to tina.nguyen+tips@theverge.com, or to my Signal account @tina.nguyen19.
This was, to put it mildly, not a chill weekend.
For a few hours on Saturday, I thought that the Anthropic-Pentagon contract dispute, which seemed to have concluded on Friday night when Defense Secretary Pete Hegseth declared the company a supply-chain risk, would take a backseat in the news cycle. You know, because right around 1AM Saturday morning, the US launched 100 fighter jets toward Iran. I’d been texting sources late into the night about OpenAI’s new contract with the Pentagon, asking whether Sam Altman really did get those red lines on mass surveillance and autonomous lethal weapons, but by the time I woke up, the United States had assassinated Ayatollah Ali Khamenei and several other Iranian leaders in an aerial strike on Tehran, openly and unapologetically in broad daylight.
Soon it became apparent, though, that Anthropic was part of the story, too. On Sunday, The Wall Street Journal reported that Claude-powered intelligence tools had been used by several military command centers during the strike, citing sources familiar with the matter. It’s unknown how the Pentagon used Claude in this specific operation; that information would be classified and known only to the people directly involved. But the Journal wrote that the Pentagon had already deeply embedded Claude, which until last week was the only AI system with the security clearance to handle classified information, into technology that performed “intelligence assessments, target identification and simulating battle scenarios.” That technology was, apparently, used in the Iran strike.
A few observations can be pulled from this. First, the conflict was never really about Anthropic posing an actual national security risk (though the public could already kind of see that). Second, while AI may not yet have reached the “fully autonomous lethal weapon” stage, it’s sophisticated enough to help conduct an impressively precise (though uncomfortably extralegal) strike on a foreign leader. That’s all the more impressive considering that Iran had been under a near-total, government-imposed internet blackout for several months, with virtually no digital connection to the outside world.
I hit up Hamza Chaudhry, the AI and National Security lead at the nonpartisan Future of Life Institute, for his long view on Operation Epic Fury. He noted that both sides of the conflict were already using artificial intelligence in their warfare — Iran has deployed AI-assisted missiles in recent months — and while the US had clearly prevailed in this scenario, it was the prelude to what he described as a “dyadic automated warfare problem: two AI systems effectively talking to each other through the medium of kinetic action, each optimizing and responding faster than human decision-makers can follow.”
Chaudhry’s nightmare scenario, however, is the end of nuclear deterrence as a tool for global stability:
“Recent analyses of the 2025 India-Pakistan and Iran-Israel conflicts found that AI renders second-strike forces more transparent and thus more vulnerable, and that while nuclear arsenals still impose a ceiling on all-out war, AI lowers the floor for sub-threshold aggression and compresses political reaction time. If an adversary believes its nuclear deterrent is becoming visible (e.g., submarines trackable, mobile launchers locatable, command infrastructure mappable) the rational response is to expand the arsenal or shift to a launch-on-warning posture.
“Experts have described this as threatening ‘arms race stability’: the risk that one side might seek a breakout advantage in advanced technology, triggering complementary efforts by the other. This is not a hypothetical future problem. The technologies that made Operation Epic Fury possible are the same technologies that are slowly making nuclear deterrence more fragile. We have no international governance framework that addresses this adequately.”
So what exactly is in the magical, red-line-respecting contract that Altman was bragging about? So far, we don’t know anything beyond what OpenAI wrote about the contract on its company blog. Though it was essentially a press release, the post did contain excerpts from what the company claims is the contract itself, including the line that “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities,” citing several preexisting security laws. But even that didn’t pass the legal sniff test. As my colleague Hayden Field reported yesterday:
OpenAI appears to rely heavily on existing legal limits. It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”
But this isn’t reassuring. In the years after 9/11, US intelligence agencies ramped up a surveillance system that they determined fell within the legal limits OpenAI cites, including multiple mass domestic spying operations (along with apparently highly invasive international ones).
Here’s one more piece of imaginary legalese, pointed out by a reader: “unconstrained monitoring” isn’t even a real legal term, much less one that appears anywhere in the authorities OpenAI is pointing to.
Did Hegseth jump the gun?
At first glance, President Donald Trump’s 3:47PM Friday post on Truth Social seemed like a final decision on Anthropic. But a careful reading suggests Trump may have actually been open to negotiations. Nowhere in the post does he threaten to punish other companies for being Anthropic customers, and the single sentence below contains his only real legal threat to Anthropic. The crucial operative word is “or”:
“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”
White House watchers immediately read this as a loud de-escalation signal (an action could be taken, but hadn’t been yet) meant to buy the Pentagon and Anthropic several months of time. The tactic only makes sense in the context of how Trump has used social media during his time as president as both carrot and stick. He’ll publicly post an aggressive threat, like declaring a new tariff, launching an investigation into a company, or promising nuclear holocaust on North Korea. Then an agreement is struck behind the scenes, and within weeks Trump is extolling the virtues of his foes on Truth Social, sometimes even conceding to them, and he faces absolutely no political backlash, because Trump just does this all the time.
That understanding lasted about an hour and a half, until Hegseth posted his decision to officially designate Anthropic a supply-chain risk, threatening to punish defense contractors that did “any commercial business” with the company and declaring his decision “final.” Generally speaking, the Defense Secretary has the power to make this designation unilaterally, without announcing it publicly or getting sign-off from the President. But the combination of those two phrases threw the entire tech industry into a spiral.
As of today, no one I’ve talked to (in industry, policy, or otherwise) has any idea what “any commercial business” actually means, or exactly what sort of punishment they’d incur if they continued to contract with Anthropic for non-defense purposes. Anthropic, meanwhile, stated on Friday that the laws on supply-chain risks apply only to Claude’s use at the Defense Department and do not extend outside those boundaries.
If anyone has any idea exactly how “any commercial business” with Anthropic could be reasonably (and legally, if possible) restricted vis-à-vis defense contractors, please get in touch.
Several people reiterated to me last week that, setting egos and the entire “illegal punishment” thing aside, there was a high-level intellectual argument for the Pentagon’s position: a private company should not be able to dictate what the United States government, an entity elected by the American people, does with its technology. But I for one cannot believe that the person who made this argument most compellingly (on social media, at least) was Jeremy Lewin, an Under Secretary at the State Department, which is an entirely different government entity from the Department of Defense.
Meanwhile, Emil Michael, the Uber corporate cautionary tale turned Pentagon CTO who is leading negotiations with both Anthropic and OpenAI, has been posting nonstop ad hominem attacks on X, calling Dario Amodei a “liar [with] a God-complex,” among other things, often at midnight. (Fun fact: Michael has written more X posts about Anthropic in the past few days than about the Iran strike.)
Last week, before any of this insanity happened, I attended a sold-out taping of The Hopkins Forum’s Open to Debate series on the topic: “Will AI Make Work Obsolete?” I was mostly there for the guest panelists — in what world do you ever see Andrew Yang going up against Facebook cofounder Chris Hughes? — but the debate itself was pretty compelling.
The doomer position was argued by Yang and MIT professor and Nobel-winning economist Simon Johnson: AI is about to cause widespread job loss, and there’s no mechanism in place to prevent mass societal upheaval. The optimist position was argued by Hughes and Rumman Chowdhury, a data and social scientist and cofounder of the nonprofit Humane Intelligence: there is a future where AI augments human work and improves human life. Both sides largely agreed that unrestrained corporate greed would probably steer AI toward the doomer scenario, but it was still refreshing to hear someone make an optimistic case for AI.
The episode will go live on Friday on Open to Debate’s Substack, but in the meantime, I did learn that if you say “David Sacks” in a room full of tech people in Washington, DC, someone will immediately come up to you and start complaining about him, unprompted.



