Artificial intelligence could be used to generate “unprecedented amounts” of realistic child sexual abuse material, an online safety group has warned.
The Internet Watch Foundation (IWF) said it has already found “surprisingly realistic” AI-made images that many people would find “indistinguishable” from real ones.
The web pages the group investigated, some of which were flagged by the public, featured children as young as three years old.
The IWF, which is responsible for finding and removing child sexual abuse material on the internet, warned the images were realistic enough to make it harder to spot when real children are in danger.
IWF Chief Executive Susie Hargreaves called on Prime Minister Rishi Sunak to treat the issue as a “top priority” when Britain hosts a global AI summit later this year.
She said: “We are not currently seeing these images in large numbers, but it is clear to us that criminals have the potential to produce unprecedented amounts of realistic child sexual abuse images.
“It would be potentially devastating to internet safety and to the safety of children online.”
Risk from AI images “increasing”
Although AI-generated images of this nature are illegal in the UK, the IWF said rapid advances in the technology and increased accessibility meant the scale of the problem could soon make it difficult to enforce the law.
The National Crime Agency (NCA) said the risk was “increasing” and was being taken “extremely seriously”.
Chris Farrimond, Director of the NCA’s Threats Directorate, said: “There is a very real possibility that if the volume of AI-generated material increases, it could have a significant impact on law enforcement resources, increasing the time it takes us to identify real children in need of protection.”
Mr Sunak has said the upcoming global summit, scheduled for the autumn, will discuss regulatory “guardrails” that could mitigate future risks posed by AI.
He has already met major players in the industry, including figures from Google and OpenAI, the maker of ChatGPT.
A government spokesperson told Sky News: “AI-generated child sexual exploitation and abuse content is illegal, whether it depicts a real child or not, meaning technology companies will be required to proactively identify and remove such content under the Online Safety Bill, which is designed to keep pace with emerging technologies like AI.
“The Online Safety Bill will require companies to take proactive action to tackle all forms of online child sexual abuse – including grooming, live streaming, child sexual abuse material and prohibited images of children – or face huge fines.”
Offenders help each other with AI
The IWF said it had also found an online “manual” written by offenders to help others use AI to produce even more realistic images of abuse, bypassing the safety measures built into image generators.
Like text-based generative AI such as ChatGPT, image tools like DALL-E 2 and Midjourney are trained on data from the internet to understand prompts and deliver appropriate results.
DALL-E 2, a popular image generator from ChatGPT creator OpenAI, and Midjourney both say they limit their software’s training data to restrict its ability to create certain content, and block some text inputs.
OpenAI also uses automated and human monitoring systems to guard against abuse.
Ms Hargreaves said AI companies must adapt to ensure their platforms are not exploited.
“Continued misuse of this technology could have profoundly dark consequences – and could see more and more people exposed to this harmful content,” she said.