British Tech Firms and Child Safety Agencies to Test AI's Capability to Generate Abuse Content

Technology companies and child protection agencies will be granted permission to evaluate whether AI tools can produce child exploitation images under recently introduced British laws.

Significant Increase in AI-Generated Harmful Material

The announcement came as findings from a safety monitoring body showed that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the authorities will allow designated AI developers and child protection groups to inspect AI systems – the underlying technology for chatbots and visual AI tools – and verify they have sufficient protective measures to prevent them from producing depictions of child sexual abuse.

"This is ultimately about preventing abuse before it happens," declared the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the risk in AI models promptly."

Tackling Legal Obstacles

The amendments were needed because producing and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such content even as part of an evaluation regime. Previously, authorities could act only after AI-generated CSAM had been published online.

This legislation aims to prevent that problem by enabling experts to stop the creation of such images at source.

Legal Structure

The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI models designed to create child sexual abuse material.

Real-World Consequences

Recently, the official visited the London base of Childline and heard a simulated call to advisers involving an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being extorted with a sexualised deepfake of themselves created using AI.

"When I hear about children facing extortion online, it is a cause of intense anger in me and justified anger amongst families," he stated.

Alarming Data

A prominent online safety foundation stated that cases of AI-generated exploitation content – such as online pages that may include numerous images – had more than doubled so far this year.

Instances of the most severe category of material increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of prohibited AI images in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are launched," commented the chief executive of the online safety organization.

"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving offenders the ability to make potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further commodifies victims' trauma, and makes children, particularly female children, less safe both online and offline."

Counseling Interaction Information

Childline also released details of support interactions where AI has been referenced. AI-related harms discussed in the sessions include:

  • Employing AI to evaluate weight, physique and appearance
  • Chatbots discouraging children from consulting safe guardians about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and associated topics were discussed, four times as many as in the same period last year.

Fifty percent of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Karen Salas

A passionate esports journalist with over a decade of experience covering competitive gaming and player stories.