AI-generated child sexual abuse content floods the online space, bogging down law enforcement agencies
Easy-to-use AI tools have enabled child predators to become more creative in producing illegal content.

SAN FRANCISCO: Child predators are increasingly using artificial intelligence to create sexually explicit images of children.
The online sexual exploitation of children has become a growing problem, and generative AI is bogging down law enforcement agencies seeking to crack down on such content.
This comes as easy-to-use AI tools have enabled offenders to create more such content, including deepfaked images or videos based on photos of a real child.
A recent report by Stanford University’s Internet Observatory has found hundreds of child sexual abuse images in an open dataset used to train popular AI-image generation models.
“Anything that they can imagine, anything that they want, they can create an image out of it,” said Associate Professor Bryce Westlake of San Jose State University’s Department of Justice Studies.
“And so this is creating a lot of challenges for law enforcement.”
EXPLOSION OF CHILD SEXUAL ABUSE CONTENT
Earlier this year, the United States Senate summoned the heads of some of the world's biggest tech companies to testify about their platforms’ efforts to protect children from sexual exploitation online.
“We've seen child sexual abuse material become very prevalent on social media, on private websites, (and) basically anywhere that people congregate,” said Dr Westlake.
Last year, the National Center for Missing and Exploited Children's CyberTipline received more than 36 million reports of suspected child sexual abuse material.
Observers said the consequences for victims and their families can be devastating.
At the same time, some experts told CNA that AI has made identifying actual victims more difficult.
AI has, however, emerged as a double-edged sword: it has also become one of the tools used by law enforcement investigators and organisations looking to fight the problem.
Dr Marcel Van der Watt, senior vice president of impact and operations at the National Center on Sexual Exploitation, said: “We refer to it as augmented intelligence … AI-generated or created chatbots which engage in the real world with predators and sex traffickers online.”
STOPPING THE EXPLOITATION OF CHILDREN ONLINE
Experts said the issue could worsen if controls are not placed on AI tools.
North American lawmakers have proposed measures such as the Kids Online Safety Act in the US and the Online Harms Act in Canada, which seek to hold social media firms accountable for harmful content they host, often generated by AI.
But it remains to be seen how these will be enforced.
As the potential threat from AI grows, organisations working to stop the exploitation of children want to see even stronger action such as requiring more identity verification online.
They noted that the cost of inaction is too high to bear.
“The reality is there are privacy-preserving ways of doing verification for access, not just to adult pornography, but to areas that are harmful to children,” said Mr Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection.
“We should have been doing this so long ago and we haven't regulated it, therefore it hasn't happened. And there's a lot of pushback to not do that because it would cost companies money.”