What is it?
- The use of technology, such as artificial intelligence (AI) and machine learning, to prevent, detect, block, report and remove illegal and exploitative material, live streaming and grooming. It can also be used to deter and detect offenders and identify victims.
Why is it important?
- Given the prevalence of existing CSAM online, and the likely extent of undetected CSAM on the dark web, technology for real-time image and video production, distribution and storage needs to make rapid and extensive progress to identify and remove the existing levels of CSAM online, as well as to prevent new cases and respond to active cases.
- Methods of accessing children and committing CSEA online – both viewing CSAM and grooming victims for CSEA – are constantly changing and evolving. Offenders are likely to collaborate, sharing methodologies and tactics to bypass safety mechanisms, as well as vulnerable profiles and existing abusive content.
- Similarly, some sites recommend profile types to users who have shown a prior interest in children – in some cases, this amounts to recommending children's profiles to adults.
- Technology can be used to deter offenders through targeted messaging: warnings when questionable behaviour is identified, and signposting to sources of help for offenders and potential offenders to address their behaviour. This can help create an internet where offenders feel less safe, are less inclined to take risks – and can easily seek help to change.
How can it be implemented?
- Carry out research and identify innovative practice to “build crucial technological tools needed to more effectively prevent and work to eradicate child online sexual exploitation and abuse”.
- Monitor effectiveness of interventions and share as a public good.
- Build on or expand the existing Voluntary Principles to Counter CSEA online to reinforce a set of basic industry standards to be met by all providers.
- Access and utilise existing principles, frameworks and assessment tools that have been developed to ensure safety considerations are embedded into the design, development and deployment of online products and services, like Safety by Design.
- Harmful content should be removed as soon as it is identified, on the grounds that the risks and potential harm of keeping it online are likely to outweigh the costs, or inconveniences, of unintentionally removing non-harmful material. Once the material has been analysed or re-reviewed (by a person or by AI), it can be restored if deemed not harmful.
- The allocation of dedicated resources and staffing with relevant training to tackle CSEA online.
- Collaboration and sharing of tools between technology firms, civil society and governments.
- Provide internet users with details of how to report illegal material, with specific categories for material relating to CSAM and CSEA.
- When appropriate and where available, information on interventions for those at risk of offending (for example, links to support services) is also critical.
- 5 Country Ministerial, Voluntary Principles to Counter CSEA Online.
- Technology Coalition, Project Protect.
- Thorn, Safer.
- A guide for tech companies considering supporting the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse (Facebook, Google, Microsoft, Roblox, Snap and Twitter).
- The Lucy Faithfull Foundation/Stop It Now! UK and Ireland, online deterrence/EVAC-funded IWF/LFF chatbot.
- Australian eSafety Commissioner, Safety by Design.
- Child Dignity Alliance, Technical Working Group.
- Yoti, Facial Age Estimation White Paper.
- UNICEF, Encryption, privacy and children's right to protection.
- End Violence Against Children, Safe Online.
- 5Rights Foundation, Risk-by-Design microsite.