What Will Actually Bring an End to Grok’s Deepfakes?


A few weeks ago, I rang in 2026 as the target of an online harassment campaign, after my story about Brigitte Bardot’s long history of racism and Islamophobia went viral with all the wrong people. Unfortunately, this was far from my first rodeo. Over the course of my decade-long career in digital media, I’ve grown accustomed to seeing my DM requests fill with vile fatphobia, anti-Semitism, and garden-variety misogyny when I use my platform to express more or less any progressive opinion.

But there was a new dimension to the online hate this time. A few days after the pile-on started, I experienced the deeply troubling phenomenon of being the subject of sexually explicit Grok deepfakes. Jumping on a toxic trend that emerged late last year on X, people who disagreed with my Bardot piece used Elon Musk’s controversial AI tool to create images of me in bikinis.

At first, I tried not to let it get to me. As I joked at an open mic a few days later, “It’s obviously not the best to be digitally undressed, but I also… don’t love trying on bathing suits, so it saved me a trip to a plus-size swimwear store called Qurves with a Q in Burbank.” But jokes aside, what was happening was difficult to shake.

The truth is, it could have been worse. Many of the women targeted most heavily by Grok deepfakes are OnlyFans creators and other sex workers, whose tormentors see little difference between paying for an image someone has deliberately uploaded of themselves and using AI to generate one. And then there are Grok’s most stomach-turning applications: creating deepfakes of Renée Nicole Good, the Minneapolis mother of three who was recently killed by an ICE officer, for instance, or undressing children, which makes me so nauseated I can barely even think about it.

Ashley St. Clair, the mother of one of Musk’s children, recently alleged that Grok had been used to manipulate photos of her as a minor. “The worst for me was seeing myself undressed, bent over, and then my toddler’s backpack in the background,” she shared on CBS Mornings. When she asked the tool to stop producing the offending images, “Grok said, ‘I confirm that you don’t consent. I will no longer produce these images.’ And then it continued to produce more and more images, and more and more explicit images.”

While it’s sadly nothing new for sexually explicit images to be disseminated online without the subject’s consent—revenge porn has existed in one form or another for decades—the Grok situation represents “the first time there’s a combining of the deepfake technology (Grok) with an immediate publishing platform (X),” victims’ rights attorney Carrie Goldberg tells Vogue. “The frictionless publishing capability enables the deepfakes to spread at scale.” And while the outcry against Grok came swiftly, eventually leading X to limit the tool’s photo-editing capabilities, for many of its targets the damage was already done.

That isn’t to say, however, that Grok’s targets have no recourse. Advocacy groups such as the Rape, Abuse & Incest National Network (RAINN) have made it clear that a platform’s ability to generate sexually explicit material has legal ramifications. “AI companies are not acting in the role of a content publisher. They are creating it,” Goldberg says. “So victims who are harmed because of AI-generated nudes have recourse directly against the AI company. Additionally, companies like the App Store and Google Play that act as a distributor of deepfake technology may be on the hook if they are sued in their capacity as distributors of products that are not reasonably safe.”
