GLOBAL THREAT ASSESSMENT 2023
Worldwide increase in child sexual exploitation and abuse online
Since our 2021 Global Threat Assessment, internet usage has continued to increase. While this brings benefits, it also exposes children to a wide range of online risks, including sexual exploitation and abuse.
Our 2023 Global Threat Assessment shows that child sexual exploitation and abuse online continues to escalate worldwide, in both scale and methods. The volume of child sexual abuse material reports analysed by the National Center for Missing and Exploited Children (NCMEC) has increased by 87% since 2019.
The Disrupting Harm research revealed that as many as 20% of children in some countries were subjected to child sexual exploitation and abuse online in the past year.
Emerging technologies such as generative AI and eXtended Reality pose new risks to children's safety online. New trends are appearing, such as financial sexual extortion, while established threats like online grooming and child ‘self-generated’ sexual material continue to grow.
New and emerging risks to children online
Financial sexual extortion
The FBI issued a public safety alert about an ‘explosion’ of financial sexual extortion schemes targeting children and teens.
Extorters pose as young girls online and predominantly approach boys aged 15 to 17 via social media, proposing an exchange of sexually explicit imagery. Once the imagery is sent, the extorter threatens to send it to the child’s friends and family, blackmailing them for money.
AI-generated sexual abuse imagery
Generative AI is revolutionising the way people interact with the digital world. Unlike traditional AI systems that recognise patterns and make predictions, generative AI creates new content: images, text, audio, and more.
Cases of perpetrators using generative AI to create child sexual abuse material and exploit children have been increasing.
Abuse via eXtended Reality (XR)
XR technologies create new risks for children by:
- creating new opportunities for offenders to access victim-survivors;
- simulating abuse on virtual representations of children.
Meanwhile, established forms of abuse grow and evolve
Grooming children online
The volume of cases of online grooming and coercion is increasing. Data from the National Society for the Prevention of Cruelty to Children (NSPCC) shows online grooming reports have risen by 80% in the past four years.
Off-platforming is a common technique used by groomers online: moving conversations to a private messaging app or an end-to-end encrypted environment, where the risk of detection is lower. Perpetrators use off-platforming not only to groom children but also to network and share abuse material.
The 2023 Global Threat Assessment also includes data on online grooming in social gaming environments, showing that a high-risk child grooming situation takes an average of 45 minutes to develop in these environments, but can take as little as 19 seconds.
‘Self-generated’ sexual material
‘Self-generated’ sexual material covers a broad span of images and videos, including voluntarily ‘self-generated’ material that is consensually shared between adolescent peers.
Harm is typically caused when imagery is re-shared against a young person’s wishes, or when the material is coerced ‘self-generated’ sexual material, produced through grooming, pressure, or manipulation to share it.
Livestreaming child sexual abuse
In many countries, this form of abuse is not criminalised. Even where it is criminalised, there is little evidence unless the livestreaming was recorded. Additionally, most platforms do not monitor private live-streaming.
This explains why the livestreaming of child sexual abuse is often difficult to investigate or prosecute.
Accessing, viewing, and sharing child sexual abuse material
Perpetrators use different methods to share and access child sexual abuse material on the surface web, including ‘link-sharing’, where they share original, shortened, or modified URLs to avoid detection by ‘hash-matching’ technology.
Another emerging challenge for technology companies and policymakers is the viewing and sharing of legal imagery of children for sexual gratification.
In 2022, 90% (228,927) of URLs identified by the Internet Watch Foundation as displaying child sexual abuse material were on openly accessible, free-to-use image hosting services.
Due to the lower risk of detection, perpetrators often prefer private environments, such as end-to-end encrypted (E2EE) messaging services.
The most common pathway to the dark web is for perpetrators to come upon information about it on the surface web, for example while searching for child sexual abuse material.
One dark web forum post related to child sexual abuse identified by the US Department of Justice was viewed 1,025,680 times in 47 days (21,822 views per day).