OpenAI pushes to add surveillance safeguards following Pentagon deal
OpenAI is negotiating additional safeguards with the US defence department intended to prevent mass surveillance of American citizens using its AI, as it implements the deal hastily announced on Friday.

Sam Altman’s AI start-up has already changed the contractual wording around surveillance and is looking to add protections during the three-month implementation period for the agreement.

Legal experts and staff have scrutinised language in the contract that prohibits “intentional”, “deliberate” or “targeted” surveillance, people familiar with the conversations said.

They have raised concerns that the government could surveil Americans “incidentally” or “unintentionally” using modern AI tools, the people added.

“What is yet to be worked out is the implementation of [these contracts],” said a person close to OpenAI.

The next phase will cover questions “beyond the language of the contracts”, including where the technology will be deployed and technical safeguards that govern when AI models might refuse to follow instructions.

“The challenge for OpenAI is how to make a product that is still usable but doesn’t do unsafe things,” the person added.

The effort to add protections during the execution of the Pentagon deal comes as OpenAI has repeatedly sought to clarify the terms of its contract and assuage concerns, including those from its staff, about the potential abuse of the $730bn start-up’s powerful AI.

OpenAI’s approach runs counter to that of its rival Anthropic, which has refused to accept the contract terms because of concerns about surveillance.

Altman has admitted the rush to strike a deal after Anthropic’s talks spectacularly collapsed on Friday “looked opportunistic and sloppy”.

Anthropic chief executive Dario Amodei attacked OpenAI’s “mendacious” messaging around its original contract in a note to staff, first reported by The Information on Wednesday.

He accused Altman of “gaslighting” his company by “trying to undermine our position while appearing to support it”, according to the memo sent to staff on Friday.

Altman announced updates to the ChatGPT maker’s contract on Monday, which “prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information”.

Intelligence services such as the National Security Agency, whose collection of bulk metadata from the phones of ordinary Americans was exposed by Edward Snowden in 2013, would also be excluded from the deal, he added.

Connie LaRossa, OpenAI’s US national security policy lead, on Wednesday said the terms of safeguards to protect against surveillance “are still being negotiated”.

OpenAI said its agreement with the Pentagon had been signed and that “we believe the new updates from Monday were important. We will be working with the department closely on this implementation phase.”

The Pentagon did not respond to a request for comment.

Defence secretary Pete Hegseth has insisted that AI companies make their technology available for “all lawful purposes”.

In talks with the Pentagon, Amodei pushed for guarantees that its AI could not be used for domestic mass surveillance or in lethal autonomous weapons.

The Anthropic chief executive wrote in his memo that under current Pentagon policy, set during Joe Biden’s administration, “a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about.”

Amodei also insisted on a clause prohibiting agencies from gathering large public data sets and using Anthropic’s tools to analyse them, according to a person with knowledge of the talks. He argued that doing so was legal but could amount to mass domestic surveillance.

OpenAI has argued it could maintain the same redlines on surveillance and autonomous weapons through technical measures, such as its own model safeguards, and by ensuring that OpenAI employees remained “in the loop” and worked with officials.

Amodei dismissed those protections. “The approaches [OpenAI] is taking mostly do not work: the main reason [OpenAI] accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” he wrote.

Legal experts say there is a lack of clarity and confidence in the current surveillance law, leaving AI labs in a difficult position.

“Because of the lack of clarity of what legal and policy framework exists, companies assumed that there’s no policy, there’s nothing, there’s no framework,” said one former senior defence official.

Civil liberties experts have argued the current frameworks are insufficient, as the law is lagging behind technological change.

Mieke Eoyang, former deputy assistant secretary of defence for cyber policy and a visiting professor at Carnegie Mellon University, said there were also questions about “whether or not, in this administration, they are giving recognition to that level of already built-in protection in the system”.

Two former US government officials said the White House had not publicly committed to existing legal frameworks intended to prevent AI from being used to violate civil liberties.

One former senior defence official pointed to the fact that the administration had not said whether it had maintained or rescinded the AI National Security Memorandum, a policy that established rules to prevent AI from violating civil liberties or human rights.

Paul Nakasone, a former NSA director and former head of US Cyber Command who now sits on OpenAI’s board, said at an event on Monday: “Our DNA as a people [is] just always looking at government surveillance as being bad.”

“We have to have that trust in terms of the National Security Agency, our intelligence community, being able to do these types of missions with the confidence that what we are doing is by the letter of the law,” he added.
