IWF reports sharp rise in AI‑generated child sexual abuse material online

London: The Internet Watch Foundation (IWF) has reported that the amount of AI-generated child sexual abuse material found online rose by 14 per cent in 2025, with the majority of videos showing the most extreme type of content under UK law.

Across the year, the IWF identified 8,029 AI-generated images and videos depicting realistic child sexual abuse material (CSAM). Of the 3,443 videos analysed, 65 per cent were classified as Category A, the most severe classification under UK law.

By comparison, 43 per cent of non-AI videos fell into the same category, highlighting the escalating severity of AI-generated material.

The government has announced new measures that will allow designated AI companies and child safety organisations to examine generative AI models, in order to strengthen protections and prevent the creation of illegal content.

Sean McConnell, GovTech Lead at Datactics, said: “The increase in AI-generated child sexual abuse material reflects a growing recognition that this is not just a content moderation issue, but a data infrastructure challenge. As harmful content becomes easier to produce and distribute, it can scale rapidly across platforms, requiring systems capable of detecting and responding to risk in real time.

“For these measures to deliver meaningful protection, technology providers need to strengthen the quality and use of their data to improve how harmful content is detected and prevented from reappearing. With better data practices and oversight in place, platforms can move beyond simply reacting to content and start identifying patterns earlier, ensuring faster intervention and safer online environments.”

The UK-based IWF, which operates a hotline and monitors child sexual abuse content globally, said offenders are also discussing the possibilities for using “agentic” AI systems, which can carry out tasks autonomously.

Heather Barnhart, Cellebrite Senior Digital Forensics Expert and SANS Curriculum Lead, commented: “The sharp rise in AI-generated child sexual abuse material shows just how rapidly this threat is evolving. AI systems are increasingly becoming more powerful and accessible, and it is essential that child safety remains the top priority through robust safeguards built in alongside greater awareness and education at home.”

“AI tools are becoming part of everyday life, and parents need to take an active role in guiding children on appropriate use. This gets at the larger issue, which is parents having open conversations about the dangers that lurk online - and having concrete guardrails on who their kids are interacting with and what they're sharing. With proactive monitoring, education and healthy engagement, families can help their children navigate the online world, including AI tools, responsibly and safely.”