British Tech Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Images
Technology companies and child protection organisations will be authorised under new British laws to test whether artificial intelligence tools can produce child sexual abuse material (CSAM).
Significant Rise in AI-Generated Harmful Content
The announcement coincided with data from a child protection watchdog showing that reports of AI-generated child sexual abuse material more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the government will allow approved AI companies and child safety organisations to examine AI models – the underlying technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating images of child sexual abuse.
"Ultimately about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI models early."
Addressing Regulatory Challenges
The amendments have been introduced because creating and possessing CSAM is illegal, which means AI developers and others cannot generate such images as part of a testing process. Previously, authorities could act only once AI-generated CSAM had been published online.
The new law aims to close that gap by enabling authorised experts to stop the creation of such images at the source.
Legislative Structure
The amendments are being introduced as revisions to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
Recently, the minister visited the London base of Childline and listened to a mock-up of a call to counsellors involving a report of AI-enabled abuse. The call portrayed a teenage boy seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about young people experiencing blackmail online, it is a cause of intense frustration in me and rightful anger amongst parents," he stated.
Alarming Statistics
A leading online safety foundation said reports of AI-generated abuse material – each of which can be a webpage containing multiple images – had risen significantly so far this year. Among its findings:
- Cases of category A material – the gravest form of abuse imagery – increased from 2,621 image and video files to 3,086
- Girls were overwhelmingly victimised, accounting for 94% of illegal AI imagery in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a crucial step to guarantee AI products are safe before they are released," commented the head of the online safety foundation.
"AI tools have enabled so survivors can be victimised repeatedly with just a few clicks, giving criminals the ability to create possibly limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Content which further exploits survivors' suffering, and renders young people, particularly female children, more vulnerable on and off line."
Support Session Information
Childline also published details of counselling sessions in which AI was mentioned. The AI-related harms raised in those sessions include:
- Using AI to rate body size, physique and appearance
- Chatbots discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and turning to AI therapy apps.