UK Technology Companies and Child Safety Agencies to Test AI's Capability to Create Abuse Content
Tech firms and child safety organizations will be given the authority to test whether AI systems can produce child abuse images under new UK laws.
Significant Rise in AI-Generated Harmful Content
The announcement came as a safety watchdog reported that AI-generated CSAM reports have risen dramatically in the past twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, approved AI developers and child protection organizations will be allowed to examine AI models – the foundational systems behind chatbots and image generators – and ensure they have adequate safeguards to stop them from producing images of child sexual abuse.
"Fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, noting: "Specialists, under strict conditions, can now detect the risk in AI models promptly."
Addressing Legal Challenges
The amendments address a legal obstacle: because it is against the law to produce and possess CSAM, AI developers and other parties could not generate such images as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before acting on it.
The new law aims to close that gap by enabling approved testers to stop the production of such material at its source.
Legislative Structure
The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, producing or sharing AI models designed to generate child sexual abuse material.
Practical Consequences
Recently, the official visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The scenario portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it is a source of intense frustration to me and of justified anger amongst families," he said.
Concerning Data
A prominent online safety organization reported that instances of AI-generated exploitation material – such as webpages that may contain numerous images – had risen significantly so far this year.
- Instances of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086
- Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to guarantee AI tools are safe before they are released," commented the head of the online safety organization.
"AI tools have made it so victims can be victimised repeatedly with just a few clicks, giving criminals the capability to create possibly limitless quantities of advanced, lifelike exploitative content," she added. "Material which additionally commodifies victims' suffering, and renders young people, particularly female children, less safe both online and offline."
Counseling Interaction Data
Childline also released details of support sessions in which AI was mentioned. AI-related risks raised in the conversations included:
- Using AI to rate weight, body and appearance
- Chatbots discouraging children from consulting trusted guardians about harm
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.