Technology companies and child protection organizations will be permitted to test whether AI tools can produce child abuse images under new UK laws.
The announcement came as a safety monitoring body revealed that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow approved AI developers and child safety organizations to inspect AI models – the underlying technology for conversational AI and visual AI tools – to ensure they have sufficient safeguards to prevent them from creating depictions of child exploitation.
The measures are "ultimately about preventing exploitation before it happens," said Kanishka Narayan, adding: "Experts, under strict protocols, can now detect the danger in AI systems early."
The changes address a legal gap: because it is against the law to produce and possess CSAM, AI creators and other parties could not generate such images as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
The law aims to avert that problem by helping to stop the creation of those images at their source.
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, producing or sharing AI models developed to create child sexual abuse material.
This week, the official visited the London headquarters of a children's helpline and listened to a mock call to counsellors featuring a report of AI-based exploitation. The scenario portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children facing extortion online, it is a cause of extreme anger in me and justified anger amongst parents," he stated.
A prominent internet monitoring foundation stated that instances of AI-generated abuse content – such as webpages that may include numerous files – had more than doubled so far this year.
Cases of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
The legislative amendment could "constitute a crucial step to guarantee AI tools are safe before they are released," stated the head of the internet monitoring organization.
"AI tools have made it so survivors can be victimised all over again with just a few clicks, giving offenders the capability to make potentially endless quantities of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies victims' suffering, and makes children, especially girls, less safe both online and offline."
The children's helpline also published details of counselling sessions in which AI-related harms were mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.