British Technology Firms and Child Protection Officials to Test AI's Capability to Create Exploitation Content
Technology companies and child protection agencies will receive permission to evaluate whether artificial intelligence tools can produce child abuse images under recently introduced British legislation.
Significant Rise in AI-Generated Illegal Material
The announcement coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the authorities will permit designated AI companies and child protection organizations to examine AI models – the technology underlying chatbots and image-generation tools – and verify that they have adequate safeguards to stop them from creating depictions of child exploitation.
"Fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict conditions, can now identify the danger in AI systems early."
Tackling Regulatory Challenges
The amendments were needed because creating and possessing CSAM is against the law, which meant AI developers and others could not generate such content as part of an evaluation regime. Previously, officials could not act until AI-generated CSAM had been uploaded online.
This legislation is designed to avert that issue by enabling testers to stop the production of those images at their origin.
Legal Framework
The government is introducing the amendments as modifications to its criminal justice legislation, which also implements a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.
Real-World Consequences
Recently, the official visited the London base of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call depicted a teenager seeking help after being extorted with a sexualised deepfake of himself, constructed using AI.
"When I learn about children experiencing extortion online, it is a source of intense frustration in me and justified anger amongst families," he said.
Alarming Data
A leading internet monitoring foundation reported that instances of AI-generated abuse content – each of which can refer to a webpage containing numerous images – had risen significantly so far this year.
Instances of category A content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a crucial step to guarantee AI tools are secure before they are launched," commented the head of the internet monitoring foundation.
"AI tools have enabled so victims can be victimised all over again with just a few clicks, giving criminals the ability to make possibly limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which additionally commodifies survivors' trauma, and renders children, particularly girls, more vulnerable both online and offline."
Support Session Information
The children's helpline also released details of support interactions where AI has been mentioned. AI-related harms discussed in the conversations include:
- Using AI to evaluate weight, body and looks
- AI assistants dissuading children from consulting safe adults about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and associated terms were discussed – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.