Satellite imagery that appears to have been either generated or modified with AI was shared widely on social media at the weekend, highlighting the dangers of the technology as a potential vehicle for misinformation in wartime.
An image circulated on X — including in a post from the official account of the Iranian newspaper the Tehran Times — claims to depict damage to an American radar system in Qatar following an Iranian drone strike.
FT analysis reveals it to be an AI-altered image of an area in Bahrain. Videos verified by the FT show several strikes near the radar system, and satellite imagery captured by Planet Labs on March 1 confirms that the site has been damaged.
But the image of the destroyed radar features signs of manipulation: vehicles that appeared in the “before” photo taken more than a year ago remain in the same position, the shadows fall at exactly the same angle as in the earlier image, and parts of the building’s structure have been altered. Historical satellite imagery shows no structural changes to the site in many years.
At the time of publication, the Tehran Times’ post alone had almost 1mn views, was shared thousands of times and remained online more than 48 hours after it was posted. The FT found instances of the image being shared on other social media platforms and websites.
The episode reflects a broader shift in information warfare, as generative AI makes it easier to fabricate convincing visual evidence and harder for audiences to distinguish between documentation and deception.
In response to a surge of fake videos circulating on the platform, X’s head of product Nikita Bier posted on Tuesday that the company would step up efforts to curb AI-generated content. Users found sharing material without proper disclosure would be barred from earning revenue for 90 days and repeat offenders would face permanent suspension, Bier said.
Brady Africk, an independent open-source intelligence researcher and director of media relations at the American Enterprise Institute, said people tended to trust satellite imagery as a source of truth because of the complex technology involved in capturing it.
“Satellite imagery can be manipulated just like other images. AI has made that all tremendously easier and [it] poses a significant threat to people trying to get information online,” Africk said. He added that he was worried the rapid improvement of these models would only make fakes harder to spot in future conflicts.
Manipulated satellite images are harder to identify than deepfakes of people because of the “lack of biometric tells”, according to Henk van Ess, an expert in online research methods and author of the Digital Digging newsletter.
“With a face, you can look for weird blinking, unnatural skin texture, misshapen ears,” he said. “With a satellite image, you’re looking at buildings, roads, terrain — things that don’t have these inherent cues. And most people have no idea what a genuine satellite image is supposed to look like from a specific sensor at a specific resolution.”
“The key shift is this: it used to take a state intelligence agency with Photoshop skills to fake a satellite image. Now anyone with access to freely available AI tools can produce something convincing enough to fool casual viewers and move markets. The barrier has collapsed,” van Ess added.
Another widely distributed AI-altered image appears to be an upscaled, colourised version of a black-and-white image captured by Airbus, the aerospace company. It is not clear where the doctored image originated. The image was watermarked with the name of MizarVision, a Chinese space start-up that has been publishing satellite photos of the US military build-up in the Middle East in recent weeks, but the image is not listed on the company’s own social media pages.
MizarVision’s official page currently carries a warning about accounts on social media that claim to distribute its imagery. MizarVision and the Tehran Times did not respond to requests for comment.
Using AI to add colour to satellite images can subtly change how people view the scene, introducing perceived differences that might not actually be there, according to Bo Zhao, a professor in digital geographies at the University of Washington.
“Black and white imagery doesn’t contain details and perspective of things or differences on the ground,” Zhao explained. “With colour, it becomes easier to differentiate.”
Other AI-generated media has also been widely shared, including an image highlighting alleged damage to a US base near Erbil, Iraq. The AI-enhanced image shows a raging fire and a huge cloud of smoke, and again contains structural inconsistencies when compared with recent images taken at that location.

A post claiming to show the body of Iran’s supreme leader Ayatollah Ali Khamenei being pulled out from under rubble has been shared but remains unverified and also appears to be AI-generated.
The conflict in the Middle East is not the first war to be affected by AI-driven disinformation. There were reports of fake satellite imagery following the four-day India-Pakistan conflict last year and similar claims have been made during the Ukraine-Russia war.
“I think it’s largely an education issue and an awareness issue in terms of making sure as many people as possible are aware of the ways that digital media can be manipulated,” Africk said. “People should be very adamant on finding trusted sources who work in the public eye and do so responsibly.”
Additional reporting by Chris Cook


