
AI-Generated CSAM

AI-Generated Child Sexual Abuse Material Doesn't Make the Harm Less 'Real'.


Technology-assisted child sexual abuse (TACSA) is child sexual abuse, and research shows it can be just as harmful as abuse involving in-person contact with a perpetrator.

 

So where does AI come in?

AI can be used to create or alter child sexual abuse imagery - including through so-called 'nudifying' apps or services that digitally create a nude or sexualised image from an ordinary photo. In some cases, it can also be used to generate additional abusive imagery of known victims, using their likeness.

This can leave children vulnerable and, in some cases, unaware that AI images of them have been created. It can also leave a child feeling less in control, ashamed, frightened, and less able to ask for help, especially if they feel they won't be believed because the image isn't 'real'.

Whilst AI can be misused by perpetrators to amplify existing abuse tactics and add new ways to cause harm, the safeguarding fundamentals stay the same. Don't worry if you don't understand the technology - focus on the child and follow your usual safeguarding routes.

Ways AI May Be Misused to Cause Harm



Manipulating images of a real child


AI can be misused to create or alter sexual abuse imagery using a child’s likeness. Some people try to minimise this by claiming AI-generated or AI-altered imagery is ‘less harmful’. This is untrue - AI-enabled CSAM can cause significant harm to real children.

It can also reinforce harmful sexualised perceptions of children and, in some cases, can re-victimise known victims by generating further material using their likeness - or create new victims when a real child’s image is manipulated into abusive content.

Threats and blackmail


In some cases, offenders use AI-generated or AI-altered imagery to coerce or blackmail a child - including demands for money (sometimes referred to as financially motivated sexual extortion), further abusive material, or continued engagement. This can occur even where no image was initially shared by the child, because imagery may be generated or manipulated from existing photographs.




Child-on-Child Harm


In some cases, under-18s may use so-called 'nudifying' tools to create images of their peers, thinking it is 'just a joke'. They might also do so to deliberately cause upset and distress.

Trauma-aware Communication


Children may not disclose, or they may be unaware there is an AI image of them being shared.  When they do tell, it may not be clear or immediate. Fear, shame, guilt, not recognising abuse, or fear of consequences can all be barriers. What a child is communicating may show up through behaviour (withdrawal, distress, avoidance), as well as through words. Your job is to keep the door open.

What helps:

  • Safety and support first, detail later: don’t make support conditional on a full account.
  • Be clear about what happens next: who will know, what will be recorded, and why.
  • Avoid blame language (“why did you…”) and avoid responses that increase shame. Refer to MCF’s Victim Blaming Language Guidance (link) for trauma-aware alternatives that keep the blame with the perpetrator.
  • Keep welfare central throughout: support is not secondary to ‘finding out what happened’.
  • Reassure the child as they may be worried about talking to you.

 

Language note:
The term 'deepfake' can be unhelpful because it suggests the image isn't 'real'. If a real child's image is used and altered, the child is real and the harm is real - recognise it as child sexual abuse imagery and respond accordingly.

What to Do


This is a short, practical guide to the key steps you can take to help you respond, get support, and take action.

 

 

Step 1: Support First Response

(what to say in the first minute)

Use short, supportive lines that reduce shame and increase help-seeking:

  • “Thank you for telling me. I’m glad you said something.”
  • “You’re not in trouble.”
  • “You don’t have to explain everything right now - we can take this step by step.”
  • “I’m sorry this is happening. You don’t deserve it.”
  • “We’ll work out what happens next together.”

 

If AI manipulation is mentioned: Avoid describing images as ‘fake’ or ‘not real’ (this can minimise impact). Use language like ‘AI-generated’ or ‘AI-altered’ instead and focus on the child’s experience and safety.

 

Avoid: “Why did you…?” / “You should have…” This often lands as blame and can increase silence.

 

Step 2: Preserve key information and record proportionately

  • Do not share/download/save/copy/print imagery - even for reporting; this may be illegal.
  • Encourage the child/young person not to delete messages or relevant evidence straight away (usernames, URLs, timestamps, platform/app names).
  • Record what’s needed: times/dates, what was said (in the child’s words where possible), actions taken, decisions/rationale, and who was involved - without taking copies of imagery.

 

Step 3: Follow established routes

  • Do not investigate informally. Follow your safeguarding policy and notify the named safeguarding lead in your setting immediately.
  • Safeguarding leads will then need to consider:
      • Notifying parents (where safe) of the incident.
      • Notifying the police, as an offence has occurred (101, or 999 if a child is in immediate danger); the child may also require additional safeguarding support from Children's Social Care.
      • Reporting to the platform/app using its reporting route.
      • Using Report Remove (for under-18s) where a sexual image/video has been shared: (link)

 

Step 4: Ongoing support

Support needs don't end when an incident is reported. Children and young people may experience ongoing distress, fear, shame, confusion, and a loss of control - particularly where imagery is involved - so build in planned check-ins and support.

 

For specialist guidance and resources:

  • Contact Marie Collins Foundation (MCF) for advice on trauma-aware, survivor-centred responses and pathways. Call us on 01765 688827 or email help@mariecollinsfoundation.org.uk
  • Use our POWER resource to strengthen practice and avoid responses that unintentionally increase harm: Link
  • Consider MCF's Click: Path to Protection training for safeguarding teams working with TACSA: Link

 
