Curbing sexual deepfakes is a moral imperative



Elon Musk and his frequent transatlantic sparring partner — the UK — are trading blows once again. Britain’s communications regulator, Ofcom, is investigating Musk’s X over concerns that its AI tool Grok is being used to generate sexualised images of real women and children; the government is bringing into effect a law to criminalise the creation of non-consensual intimate images. Musk has said the UK’s “fascist” government wants “any excuse for censorship”; a White House official added that Britain was “contemplating a Russia-style X ban”. Yet curbing child pornography or protecting privacy has nothing to do with censorship or restricting free speech.

Sir Keir Starmer’s UK government has arguably stuck its head higher above the parapet, but it is far from alone in taking action over alarming reports this month about Grok-generated images. Malaysia and Indonesia have blocked Grok, and countries as far apart as Australia, India and Brazil have ordered probes or reviews. French authorities have broadened an existing investigation into X after ministers raised alarms. Germany’s media minister wants EU action; European Commission president Ursula von der Leyen warned X that “if they don’t act, we will”.

Along with EU and Australian legislation, Britain’s Online Safety Act is considered one of the most stringent pieces of digital regulation. If Ofcom finds X has failed in its duties under the act — including to remove illegal intimate imagery and child abuse material, and to properly assess Grok’s risks — it could impose a fine of up to 10 per cent of X’s global turnover. It could also seek a court order to block UK access to X. Musk last week responded to criticism by limiting use of Grok’s image generator to paid subscribers. Downing Street said this insulted victims of misogyny and sexual violence and simply turned the tool into a “premium service”.

Musk says X is being unfairly targeted since other AI systems can be used to produce similar material. But online safety experts say rival products from the likes of OpenAI and Google have tougher content “guardrails” than Grok — and since some Grok features have been incorporated into X, images appear publicly and can spread widely across the social network. Even where there are guardrails, though, determined users can sometimes dodge them with complex prompts that trick the AI, and specific “nudification” tools and apps are available (the UK plans to ban these in a forthcoming bill).

The X owner and the Trump administration charge that the UK and the EU have curtailed free speech and political freedoms. They say Europe is using regulation on tax, competition and content to throttle America’s vibrant tech industry. Certain hate speech rules in Britain and elsewhere have indeed become excessive, or their implementation overzealous. Some European tech regulation has similarly been heavy-handed; the EU is rightly scaling back parts of its 2024 AI Act.

Yet, just as some forms of behaviour are restricted because they are harmful to others, so are some forms of speech and content. Necessary controls on child sexual abuse material, for example, predated the digital revolution.

Polling also suggests rising public disquiet over AI-generated sexual deepfakes and nudified images. The danger of a mounting “techlash” is real. As the EU found, attempting to devise all-encompassing regulation while AI is still evolving risks stifling innovation. But if the public is to retain confidence in a world-changing technology, potential harms must be dealt with swiftly as they emerge. Showing they can control non-consensual sexualised imagery is a test for politicians and — since trust remains vital to Silicon Valley business models — for the tech titans, too.
