
Commentary

Commentary: There’s one easy solution to the AI porn problem

The US Congress should hold immediate hearings about the Grok debacle and work on a legal safe harbour for responsibly testing AI models for child sexual abuse material, says a tech policy researcher for The New York Times.

A 3D-printed miniature model of Elon Musk and Grok logo are seen in this illustration taken, February 16, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

14 Jan 2026 05:58AM (Updated: 23 Jan 2026 03:03PM)

CALIFORNIA: On Christmas Eve, Elon Musk announced that Grok, the artificial intelligence (AI) chatbot offered by his company xAI, would now include an image and a video editing feature. Unfortunately, numerous X users have since asked Grok to edit photos of real women and even children by stripping them down to bikinis (or worse) - and Grok often complies.

The resulting torrent of sexualised imagery is now under investigation by regulators worldwide for potential violations of laws against child sexual abuse material and nonconsensual sexual imagery. Indonesia and Malaysia have chosen to temporarily block access to Grok. Even if many of the generated images do not cross a legal line, they have still incited outrage.

Though xAI began limiting some requests for AI-generated images to premium subscribers as of Thursday, Grok’s new feature otherwise remains available - in stark contrast to xAI’s swift intervention after Grok began referring to itself as “MechaHitler” last summer.

AI companies like xAI can and should do more not just to respond quickly and decisively when their models behave badly, but also to prevent them from generating such material in the first place. This means rigorously testing the models to learn how and why they can be manipulated into generating illegal sexual content - then closing those loopholes.

But current laws neither adequately protect well-intentioned testers from prosecution nor correctly distinguish them from malicious users, which deters companies from taking this kind of action.

WHY SAFEGUARDING AN AI MODEL IS HARD

I am a tech policy researcher who formerly practised internet law at a large Silicon Valley firm (where my clients included X’s predecessor Twitter, years before its acquisition by Musk). My colleagues and I have found that AI companies face legal risks that discourage them from doing everything they can to safeguard their models against misuse for child sexual abuse material. The ongoing Grok scandal urgently underscores the need for the US Congress to clear the way for AI developers to test their models more robustly, without fear of being caught in a legal trap.

Though non-consensual deepfakes have been a problem for years, generative AI has supercharged the phenomenon. Creating abhorrent imagery no longer requires proficiency with Photoshop or aptitude with open-source models; one need only enter the correct text prompt. While both open-source and hosted models typically have safety guardrails built in, these can be surprisingly brittle, and malicious users will find ways around them.

Tech companies have long had a complicated relationship with whether and to what extent to enable user access to legal sexual content. (Some AI models are trained on adult pornographic content.) 

Reports show that xAI, which over the last year has taken steps to embrace adult content (including allowing users to chat with cartoonish sexual chatbot companions), is allowing its models to create graphic pornographic content, though it is unclear whether any of its models were directly trained on such content.

But safeguarding an AI model is hard. Even if the training data is free from sexually explicit depictions of children, a model trained on both innocuous imagery of children and on adult pornography can combine those concepts to generate pornographic depictions of a child.

Prompts for adult pornography, which unlike child sexual abuse material is presumptively protected speech, pose an even harder question.

True, federal law and many state laws now outlaw non-consensual sexual imagery (real or deepfake). But not every “spicy” image meets the legal threshold, and some early requests for Grok to undress women’s photos were consensual, even if the bulk of them were not.

Non-consensual sexual imagery and child sexual abuse material can tarnish a company’s reputation and expose it to potential legal liability. 

The Take It Down Act signed by President Trump last May means tech companies will soon be required to remove non-consensual sexual images promptly upon request, and under existing law, they are not immune from federal criminal liability. The federal government, which has warned that AI-generated sexually explicit material of children is illegal, reiterated in response to the Grok debacle that it “takes AI-generated child sex abuse material extremely seriously” and will prosecute any producer or possessor of such material.

A photograph taken on Jan 13, 2025 in Toulouse shows screens displaying the logo of Grok and its founder Elon Musk. (Photo: AFP)

CANNOT AFFORD YEARS OF INACTION AGAIN

Yet, ironically, federal laws are making AI model safety more difficult.

“Red teaming” is the practice of simulating an adversarial actor’s behaviour to test the effectiveness of an AI model’s safeguards. Red teams, which may be internal or external to a company, try to get a model to generate undesirable content - say, malware code or bomb-making instructions. After learning which loopholes and weak points red teams can exploit, companies can work on closing them to prevent misuse by real-world bad actors. But child sexual abuse material is meaningfully different: production and possession of such material are serious crimes with no exception for research or testing purposes.

This makes red teaming for child sexual abuse material very risky legally, posing a dilemma for AI companies: Do you try your damnedest, as a malicious actor would, to make your model produce sexual imagery of children, and risk criminal prosecution? (Even for good-faith testing purposes to prevent malicious misuse, is privately creating that imagery justifiable?) Or do you steer clear of this sort of testing, and risk the regulatory and public relations fallout currently engulfing xAI?

Lawmakers have begun waking up to this predicament. 

Just two months ago, Britain enacted landmark legislation letting the AI industry collaborate with child safety organisations to ensure robust testing without fear of criminal liability. Arkansas recently passed a law against AI-generated child sex abuse material that contains an exemption for good-faith adversarial testing - but that’s no substitute for a consistent nationwide policy. In Congress, a bipartisan bill would limit liability if AI developers follow best practices for screening training data for sexually explicit imagery of children - but that addresses only some sources of legal risk.

Correctly scoping a legal safe harbour for AI-generated child sexual abuse material testing is tough. In addition to the obvious ethical issues involved, there are concerns about reckless red teamers inadvertently disseminating AI-generated imagery they created through their testing, and about allowing belligerent parties claiming to be red teaming to dodge accountability.

Plus, my discussions with lawmakers’ staff have revealed a reluctance among some politicians to be seen as giving a gift to Big Tech by increasing AI companies’ legal immunity.

But we’ve been here before. For many years, cybersecurity researchers feared being punished as hackers if they responsibly tested for and disclosed security vulnerabilities in hardware and software.

At the same time, federal safe harbour legislation for these researchers was never passed because of concerns about malicious or careless actors evading accountability. It took devastating hacks committed by Russian adversaries for the Department of Justice to finally announce a policy in 2022 against prosecuting good-faith cybersecurity research. That is, the law didn’t dissuade the bad guys, but it did scare the good guys from helping stymie the bad guys, with predictable results. The same dynamic is happening now with AI.

We cannot afford years of government inaction again. Congress should hold immediate hearings about the Grok debacle, and it should get to work on a legal safe harbour for responsibly testing AI models for child sexual abuse material. Companies like xAI could be doing more to make their models safer, and there is no time to waste.

Riana Pfefferkorn is a former tech lawyer and a policy fellow at the Stanford Institute for Human-Centered AI. This article originally appeared in The New York Times.

Source: New York Times/sk