The rise of AI-generated CSAM is blurring reality and fueling exploitation. With deepfakes becoming more sophisticated, identifying and combating this abuse is crucial. Discover how forensic tools like Amped Authenticate are helping investigators verify content, fight back, and protect victims.

Have you ever looked at an image, video, or speech and wondered, “Did that really happen, or was that AI-generated?” More and more, the answer is that you probably have. Social media, television ads, and even the emails that hit your inbox are inundated with Generative AI and the “slop” (the technical term for worthless AI output) that so many of these tools produce. Among these concerns, the rise of AI-generated CSAM (Child Sexual Abuse Material) poses a significant threat, blurring the lines of reality and morality. How can we tell what is real anymore when realistic “deepfakes” are so easy to create?
In the last 7 to 10 years, there has been a rise in AI-generated material. This is due, in part, to the relatively low cost of data processing and the high demand for tools that make creative tasks easier, which created a market for new products. Many of those tools are helpful and, when used properly, let people create things from just a few keywords that they never would have been able to make otherwise. Recently, I posted a video on social media of an AI-generated version of me reading a script written by AI about the dangers of AI. It doesn’t quite sound like me or look like me, but it is pretty convincing, IYKYK.
At the same time, the same tools that can bring out new ideas and allow the general public to create can also be used to hurt people in devastating ways. The rise of AI-generated pornography has led to people being abused and blackmailed and having their reputations destroyed. A famous example came when pop star Taylor Swift fell victim to fake nude photos. This horrific act raised alarm all the way to the United States Congress and helped start a conversation about what could be done about these generated images. (Rose et al. 2024)
However, there is one group of people who are often overlooked yet victimized at an alarming rate. The rise of AI-generated CSAM has led to thousands of children, often unknowingly, becoming victims of sexual exploitation. (“Generative AI (GAI)”, n.d.) These victims are frequently exploited by strangers who find innocent images of them on the internet, or even by people they know and trust (e.g., family members and classmates). So what can be done to ensure their innocence is preserved and those who perpetrate these images are convicted? It all comes down to defining the imagery, authenticating its content and context, and tying it to a legal statute.
Identifying a Victim in Deepfake Images
While possessing or posting CSAM has long been a crime throughout the United States and most jurisdictions worldwide, the use of Generative AI to “create” images and videos is relatively new. In the case of AI-generated CSAM, it can become essential to understand how the image was made and whether a reference image or video was used during its generation. The National Center for Missing and Exploited Children (NCMEC) highlights an important point: any portion of an image that uses a reference image to create explicit material, such as “nudifying” a clothed child or placing a child in a compromising position, victimizes that child, potentially without their knowledge. (“Generative AI (GAI)”, n.d.)
This new form of exploitation has grown in prominence over the last 15 years, rising from roughly 100,000 reported digitally created images in 2010 to over 36 million reported images in 2023. Explicit material that incorporates a victim’s face can cause deep psychological and emotional harm, even before those images are used for bullying, harassment, or grooming.
Adding to this alarming trend, new research highlights a concerning forensic challenge: CSAM images of real children can now be manipulated to appear artificially generated, making the detection of illegal content or offenders more difficult. A study from Politecnico di Milano, Italy,¹ demonstrates how actual explicit images can be altered using Stable Diffusion (SD) models to introduce synthetic characteristics, effectively disguising them as AI-generated content.
This development underscores the urgent need for more advanced detection methods and legal frameworks to keep pace with the rapid evolution of generative AI and its potential misuse.
Laws Surrounding Deepfake ICAC Images
Identifying laws that can be applied to this explicit material has been tricky. While many laws were written to cover camera-original images, or other imagery captured with a camera and then posted elsewhere, laws addressing fully digital or AI-generated material have only recently been implemented. In the US, only two states (Texas and Louisiana) had laws on the books equating digitally generated “deepfake” material with other CSAM before 2024. Since then, a total of 20 states now treat AI-generated images the same as other sexually explicit material. Several of these also added harsher penalties for those generating and distributing such material, making the origin of the files a vital piece of evidence. (“New State Laws Address Sexual Deepfakes of Minors” 2024)
There is still work to do as countries begin to address this issue. Some nations, such as the UK, South Korea, Australia, and Canada, have made posting or possessing non-consensual videos of any type (which would include all videos of children) a criminal offence. As of 2022, the EU listed AI-generated images as material to be removed as disinformation. Still, other countries, including the US at the federal level, have not passed legislation addressing AI-generated explicit material, leaving the courts to decide and set precedent. (Lawson 2023)
Detecting Camera-Original Images vs. Generative AI Images
So, how can we tell what is real, what is fake, and what was created using a reference? That is where tools like Amped Authenticate can help. By examining a file’s structure, metadata, compression schema, and image content, examiners can determine whether it was originally created with a camera or with a Generative AI tool. Looking for signs of double compression within the image can reveal tampering, for example, where a face was replaced or details were added or deleted, and can help establish whether the image’s content matches the context it is said to represent.
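To make this concrete, here is a minimal sketch, in Python with Pillow, of the kind of low-level checks an examiner might start with: reading a JPEG’s metadata and quantization tables. This is a generic illustration of the idea, not how Amped Authenticate works internally, and the file name is hypothetical.

```python
# Minimal sketch (not Amped Authenticate's internals): peek at a JPEG's
# metadata and quantization tables, two of the "structure" clues an
# examiner might review. Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_jpeg(path: str) -> None:
    img = Image.open(path)

    # Basic container facts: format, pixel size, colour mode.
    print(f"format={img.format}, size={img.size}, mode={img.mode}")

    # EXIF metadata, if present. Camera originals often carry Make/Model;
    # many generators and editors strip or rewrite these fields.
    for tag_id, value in img.getexif().items():
        print(f"  {TAGS.get(tag_id, tag_id)}: {value}")

    # JPEG quantization tables. Their values reflect the encoder and
    # quality setting of the *last* save, which can hint at re-compression
    # when compared against tables known to be used by cameras.
    if img.format == "JPEG":
        for table_id, table in img.quantization.items():
            print(f"  quantization table {table_id}: {list(table)[:8]}...")

if __name__ == "__main__":
    inspect_jpeg("evidence.jpg")  # hypothetical file name
```

None of these fields is conclusive on its own; the point is that structure and metadata give an examiner a first set of questions to ask about a file’s history.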
Beyond tamper detection and context analysis, the structure and data of an image or video can point us toward the original source, down to the camera make and model or the software used. And thanks to techniques such as PRNU (Photo Response Non-Uniformity) analysis, examiners can tie an image or video back to the specific device, and even the lens, used to capture it, if a camera was involved.
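For readers curious about the idea behind PRNU, the sketch below shows the basic “noise fingerprint plus correlation” concept using NumPy and SciPy. It deliberately simplifies things: a Gaussian blur stands in for the wavelet denoising used in real PRNU workflows, and the plain correlation score is not the PCE statistic practitioners rely on. It is an assumption-laden illustration, not Amped Authenticate’s implementation.

```python
# Highly simplified PRNU-style sketch. Assumptions: grayscale float arrays
# of identical size, and a Gaussian blur as a crude stand-in for the
# wavelet denoisers used in real PRNU work.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Residual = image minus its denoised version; sensor noise lives here."""
    return image - gaussian_filter(image, sigma)

def camera_fingerprint(reference_images: list[np.ndarray]) -> np.ndarray:
    """Average the residuals of many images taken with the SAME camera."""
    residuals = [noise_residual(img) for img in reference_images]
    return np.mean(residuals, axis=0)

def correlation(fingerprint: np.ndarray, query: np.ndarray) -> float:
    """Normalized correlation between the fingerprint and a query residual."""
    r = noise_residual(query)
    a = fingerprint - fingerprint.mean()
    b = r - r.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Usage idea: a high correlation suggests the query image came from the same
# sensor as the reference set; a value near zero suggests it did not. Real
# workflows use wavelet denoising and PCE thresholds rather than this score.
```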
Whether a file is submitted or acquired, the prominence of Generative AI tools can raise questions about how true and accurate the evidence is. Using details in the file or the composition of the imagery, Amped Authenticate can also point to signs of Generative AI usage. All of these steps help you determine what is real and what can be trusted in the evidence you collect. As the world becomes more aware of the dangers of AI-generated images, people are growing more leery of the information they see. Having tools like Amped Authenticate that can clear up the confusion and bring clarity to your evidence goes a long way toward making sure every pixel is verified before it goes to court.
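As one small example of “details in the file” that can hint at Generative AI usage, the sketch below scans common metadata fields for generator names; the keyword list and file name are assumptions made for illustration. Metadata is easily stripped or forged, so a negative result proves nothing, and this is not how Amped Authenticate reaches its conclusions.

```python
# Minimal sketch, assuming the file still carries its original metadata:
# look for tell-tale generator strings in the EXIF "Software" field or in
# PNG text chunks (e.g. the "parameters" chunk some Stable Diffusion
# front-ends write). Absence of these markers proves nothing.
from PIL import Image

# Hypothetical keyword list for illustration only.
GENERATOR_HINTS = ("stable diffusion", "midjourney", "dall", "flux", "novelai")

def generator_traces(path: str) -> list[str]:
    img = Image.open(path)
    findings = []

    # EXIF tag 305 is "Software" in the TIFF/EXIF specification.
    software = str(img.getexif().get(305, ""))
    if any(hint in software.lower() for hint in GENERATOR_HINTS):
        findings.append(f"EXIF Software: {software}")

    # PNG text chunks (Pillow exposes them via the .text attribute on PNGs).
    for key, value in getattr(img, "text", {}).items():
        if any(hint in str(value).lower() for hint in GENERATOR_HINTS):
            findings.append(f"PNG chunk '{key}': {str(value)[:80]}...")

    return findings

print(generator_traces("suspect.png"))  # hypothetical file name
```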
Protecting the Vulnerable Against AI-generated CSAM
As AI-generated CSAM continues to rise, the need for legal reforms, victim protections, and advanced detection technologies has never been more urgent.
Amped is also participating in key industry events dedicated to equipping professionals with the knowledge and skills to combat crimes against children. Come and meet us at:
- 37th Annual Crimes Against Children Conference – Dallas CCAC
- Northwest International Crimes Against Children (NW ICAC)
Stay tuned for more information about Amped Software presentations at these events as they approach.
1. Mandelli, Sara, Paolo Bestagini, and Stefano Tubaro. “When Synthetic Traces Hide Real Content: Analysis of Stable Diffusion Image Laundering.” 2024 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2024.