- By Joe Tidy
- Cyber correspondent
A leading children’s charity is calling on Prime Minister Rishi Sunak to tackle AI-generated child sexual abuse images when the UK hosts the first Global AI Safety Summit this autumn.
The Internet Watch Foundation (IWF) removes abusive content from the web and says AI images are on the rise.
Last month, the IWF began recording AI images for the first time.
It discovered predators from all over the world sharing galleries of sometimes photorealistic images.
“We don’t see these images in large numbers right now, but it’s clear to us that criminals can produce unprecedented amounts of realistic child sexual abuse images,” said Susie Hargreaves, chief executive of the IWF.
The BBC was shown redacted versions of some of the images, which depicted girls of around five years old posing naked in sexualised positions.
The IWF is one of only three charities in the world authorised to actively search for child sexual abuse content online.
It began recording AI images on May 24 and says that as of June 30, analysts had investigated 29 sites and confirmed seven pages sharing galleries containing AI images.
The charity has not confirmed the exact number of images, but says dozens of AI images were mixed with actual abuse material shared on the illegal sites.
Some of them were what experts classify as Category A images – the most graphic possible, depicting penetration.
It is illegal to create child sexual abuse images in almost all countries.
“We now have a chance to get ahead of this emerging technology, but legislation needs to take this into account and must be fit for purpose in light of this new threat,” Ms Hargreaves said.
In June, Mr Sunak announced plans to host the first global AI safety summit in the UK.
The government promises to bring together experts and lawmakers to examine the risks of AI and discuss how they can be mitigated through internationally coordinated action.
As part of their work, IWF analysts record trends in abuse images, such as the recent increase in so-called “self-generated” abuse content, where a child is coerced into sending videos or images of themselves to predators.
The charity fears that AI-generated images are a growing trend, although the number of images discovered still represents only a fraction of other forms of abusive content.
In 2022, the IWF recorded and attempted to take down more than 250,000 web pages containing child sexual abuse images.
Analysts also recorded conversations on forums between predators, who shared tips on how to create the most realistic images of children.
They found guides on how to trick AI into drawing images of abuse, and how to download open-source AI models and remove guardrails.
While most AI image generators have strict built-in rules to prevent users from generating content with prohibited words or phrases, open source tools can be downloaded for free and modified as the user wishes.
The most popular of these tools is Stable Diffusion. Its code was published online in August 2022 by a team of German AI academics.
The BBC spoke to an AI image creator who uses a specialized version of Stable Diffusion to create sexualised images of pre-teen girls.
The Japanese man claimed his “cute” images were justified, and said it was “the first time in history that images of children can be made without exploiting real children”.
However, experts say such images have the potential to cause serious harm.
“There’s no question in my mind that AI-generated images are going to heighten those predilections, reinforce that deviance, and that’s going to lead to greater harm and greater risk of harm to children around the world,” said Dr Michael Bourke, who specialised in sex offenders and paedophiles for the United States Marshals Service.
Professor Bjorn Ommer, one of the main developers of Stable Diffusion, defended the decision to make it open source. He told the BBC that hundreds of academic research projects, as well as many successful businesses, have grown out of it.
Professor Ommer suggests this justifies his decision and that of his team, and insists that stopping research or development is not the right thing to do.
“We really have to face the fact that this is a global, international development. Stopping it here would not halt the development of this technology globally in other countries. We need to find mitigating measures to address this fast-moving development,” he said.
Stability AI, which helped fund the model’s pre-release development, is one of the most prominent companies creating new versions of Stable Diffusion. The company declined to be interviewed, but has previously said it prohibits any misuse of its versions of the AI for illegal or immoral purposes.