Tech companies and child safety agencies in the UK will now be allowed to test artificial intelligence tools to ensure they cannot produce child abuse images. The new law aims to prevent the creation of harmful content and strengthen online child protection.
The announcement comes as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) more than doubled over the past year, rising from 199 in 2024 to 426 in 2025. Experts say the law change will allow them to examine AI models, such as those behind chatbots like ChatGPT and video generators such as Google’s Veo 3, to check for safeguards that stop them from creating illegal content.
Kanishka Narayan, the minister for AI and online safety, said the move is “ultimately about stopping abuse before it happens.” He added that, under strict conditions, experts can now spot potential risks in AI models early and address them before the systems reach the public.
Previously, testing AI for its capacity to produce abusive content was itself illegal, because creating or possessing CSAM is a criminal offence even for safety research. Authorities had to wait until illegal images appeared online before taking action. The new law allows controlled testing in a safe, legal way, preventing abuse at its source.
These changes are part of amendments to the Crime and Policing Bill, which also introduces a ban on creating, possessing, or distributing AI models designed to generate child sexual abuse material. The government emphasises that the law aims to make AI safer for children while allowing companies and agencies to identify risks before products are released.
Narayan recently visited the London headquarters of Childline, a helpline for children, to hear a demonstration of a counselling call involving AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of himself. “When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents,” Narayan said.
The Internet Watch Foundation, which monitors CSAM online, reported that AI-generated abuse material has risen sharply in 2025. The most serious, category A material increased from 2,621 images or videos in 2024 to 3,086 in 2025. Girls were overwhelmingly targeted, representing 94% of illegal AI images, while depictions of children aged newborn to two years rose from five in 2024 to 92 in 2025.
Kerry Smith, chief executive of the Internet Watch Foundation, described the law as a vital step to ensure AI products are safe before release. “AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” she said. “This material further commodifies victims’ suffering and makes children, particularly girls, less safe both online and offline.”
Childline also shared details from counselling sessions mentioning AI harms, including children being rated on weight, body, and appearance by AI tools, chatbots discouraging them from talking to safe adults about abuse, online bullying using AI-generated content, and blackmail involving AI-faked images.
Between April and September 2025, Childline delivered 367 counselling sessions mentioning AI, chatbots, or related terms, four times the number recorded in the same period the previous year. About half of these sessions related to mental health and wellbeing, including the use of AI for support or therapy.
By giving experts the legal ability to examine AI models safely and under controlled conditions, the government hopes the law will reduce the potential for abuse at its source and keep children safer online.