
Deepfake Forensics Is Much More Than Deepfake Detection!


Deepfake detection is everywhere, but it’s only part of the solution. In this post, we focus on the broader discipline of deepfake forensics: a layered, explainable approach that goes beyond black-box AI tools to uncover how, where, and why a media file was manipulated. You’ll see why detection alone often falls short and how Amped Authenticate brings true forensic rigor to the fight against synthetic media.

Header image: protesters at a street demonstration, the central sign reading "Deepfake forensics is much more than deepfake detection!", surrounded by signs referencing AI platforms such as OpenAI, Midjourney, Stable Diffusion, Flux, and Ideogram.

The global deepfake detection market is exploding. Various market research reports estimate a Compound Annual Growth Rate (CAGR) of about 35-50% and an expected market size of between 5 and 13 billion dollars over the next decade. A quick Google search returns countless deepfake detection tools, many of them backed by notable investments. Yet, it has been shown repeatedly that they are far from bulletproof.

We are often asked if we do deepfake detection, too. We do, but it is just a small part of what I like to call “deepfake forensics”. Let’s dive together into this topic!

What Is a Deepfake?

Let’s start with the question: What is a deepfake?

Definitions vary by law, purpose, and jurisdiction, but usually refer to “AI-generated or altered media designed to mislead”. In the US, there is no single, official, nationwide legal definition. In the EU, the AI Act defines it as follows: “‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.”

These definitions indicate that the technology used to create a deepfake is not the only defining factor; the creator’s intent or the effect on the viewer holds even greater significance.

A Deepfake Is Just an Image

We shouldn’t forget that a deepfake is just an image, video, or audio file. For these types of media, we developed countless analysis tools long before deepfakes were a thing. Most tools designed explicitly for deepfake detection, on the other hand, are based on AI. As such, they exploit the power of this technology but also inherit its biases and poor explainability.

Traditional multimedia forensics algorithms rely on mathematical models to analyze traces within images and determine whether they are original, manipulated, or synthetically generated. Despite their potential complexity, experts can explain these algorithms in court. When sufficient data is available, analysts can be confident in their findings, particularly when evidence of tampering exists.

In contrast, data-based algorithms, like those employed in AI-based deepfake detection, are trained on extensive image datasets for classification. The training process inherently carries the risk of introducing bias into the results. Although AI explainability and interpretability are active research areas, today’s top-performing models are often so large and intricate that they operate as “black boxes.”

Another issue is that AI deepfake detection tools may struggle when they encounter data types they haven’t seen during the training phase.

Furthermore, when they give a score, they don’t provide a “probability” of the results (or if they do, it’s usually not the forensic interpretation of probability) but a “confidence” in the results. As such, they can be confident (even at 100%) and still be wrong.
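To make the distinction concrete, here is a minimal sketch (plain NumPy, with toy logits invented for illustration) of how a classifier's softmax "confidence" can sit near 100% even for an input unlike anything in its training data:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into values that sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical raw scores of a real/fake classifier on an input far
# outside its training distribution. Nothing forces the model to be
# uncertain here: the logits can still be far apart.
logits = np.array([0.5, 6.5])   # [score for "real", score for "fake"]
confidence = softmax(logits)
print(confidence[1])  # ~0.9975: near-100% "confidence", not a calibrated probability
```

In forensic terms, such a score is a model output to be corroborated with other evidence, not a likelihood that can be presented on its own.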

Fortunately, traditional model-based algorithms often remain effective on deepfakes, much as they were on conventional digital photographs. Consider shadow analysis as an example. Originally designed to detect manual alterations made with photo editing tools, it performs well on images modified or generated by AI, and it is based on clear, explainable principles of physics.

Deepfake detection alone may not be enough for judicial use. It’s hard to go to court and say, “This picture is fake because AI told me so.” However, it is an excellent tool for experts who understand its potential and limitations. They can use it for triage and as a decision support resource, alongside other tools that allow for corroborating the results in different ways. Enter deepfake forensics.

How Would You Explain the Concept of Deepfake Forensics?

As a test, I asked this very same question to a few different LLMs: ChatGPT, Claude, Gemini, and Mistral.

Most of them were decent but not great. ChatGPT’s answer, though, was pretty spot on. Maybe I could have changed a few words, but it was so good that I decided to copy it verbatim.


Deepfake forensics is the field dedicated to detecting, analyzing, and understanding synthetic media, especially images, videos, and audio that have been manipulated or entirely generated using artificial intelligence (AI) techniques like GANs or diffusion models.

Unlike general deepfake detection tools – which often provide a black-box yes/no answer based on AI classifiers – deepfake forensics focuses on gathering and interpreting multiple types of forensic evidence to evaluate the authenticity and origin of media. It’s a forensic process rather than a mere classification task.

Key Aspects of Deepfake Forensics:

  1. Multi-layered analysis:

    • AI-based detection: Uses deep learning to spot inconsistencies typical of known deepfake generation methods.

    • Signal-based forensic tools: Examines compression artifacts, inconsistencies in lighting, shadows, focus, sensor noise, or encoding.

    • Format and metadata inspection: Looks at inconsistencies or tampering in EXIF data, file structures, or container formats.

    • Scene and physics analysis: Studies impossible reflections, strange limb positions, or inconsistent motion dynamics.

    • Contextual investigation: Considers who created the content, when and where it was posted, and how it spread.

  2. No single method is sufficient: AI-based detectors can be fooled by adversarial techniques or novel generation methods. Deepfake forensics combines human expertise with multiple tools to build a stronger evidentiary base.

  3. Legal and ethical framework: The goal isn’t just detection but supporting decisions in court or investigations. Therefore, reproducibility, explainability, and chain of custody are essential.

  4. Dynamic and adversarial field: As deepfake techniques evolve, so do evasion tactics. Forensics must adapt rapidly and often requires staying ahead of the deepfake generation arms race.

In short, deepfake forensics is to synthetic media what document forensics is to forged paperwork—not just a detector, but a comprehensive investigative discipline.


Deepfake Forensics in Amped Authenticate

Amped Authenticate has been on the market since 2013, well before the term “deepfake” existed. We have more than 50 different filters and tools dedicated to photo and video forensics, many of which have been battle-tested through the years. Two of these filters are based on AI and devoted explicitly to deepfake detection: Diffusion Model Deepfake and Face GAN Deepfake.

With Amped Authenticate, we can analyze all aspects of an image or video, including metadata and file structure. The software allows us to highlight critical visual artifacts with annotations and to spot inconsistencies in shadows and reflections (we recently presented a study on this topic at a scientific conference). Additionally, we can perform reverse image searches, spot AI artifacts in the frequency domain, and much more. We showed these and many other techniques in a previous blog post.

Let’s see an example. We used a selfie with two people. On the subject on the right, we superimposed a face generated by thispersondoesnotexist.com and adapted it to fit the photo. As part of the process, we resized, rotated, and slightly processed the face and the original neck to make the result visually more credible. Amped Authenticate’s Face GAN Deepfake filter correctly identified the face on the right as AI-generated, with 84% confidence.

However, this single piece of evidence wouldn’t be sufficient to confidently testify in court that the face is not real. What I can do, though, is use this initial finding as a starting point for further analysis.

Screenshot from Amped Authenticate software showing deepfake detection analysis of a street selfie. Two individuals are identified in the image: the man on the right is labeled as "GAN (0.84)," indicating a likely AI-generated face, while the woman on the left is marked "Not GAN (1.00)," confirming authenticity. The interface displays Face GAN Deepfake detection under the Medium-Overlay filter, with file metadata and analysis tools visible in the side panel.

Luckily, Amped Authenticate has many filters that can help us verify the analysis. In this case, the ADJPEG (Aligned Double JPEG) filter clearly highlights that the face on the right has a compression history completely different from the rest of the image, confirming the results of the previous filter. ADJPEG uses a well-understood algorithm published in the literature. Unless we can find some other technical reason why the face on the right differs so much from the one on the left, we could confidently testify about this finding in court.
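ADJPEG models the statistics of DCT coefficients under double quantization. As a much cruder illustration of the same underlying idea, that a spliced region carries a different compression history than the rest of the image, here is a sketch of error level analysis (a related but simpler technique, not the algorithm Authenticate uses), assuming Pillow is installed; the file name is a placeholder:

```python
import io
from PIL import Image, ImageChops

def error_level(path_or_file, quality=90):
    """Resave the image as JPEG at a known quality and return the
    pixel-wise difference. Regions whose compression history differs
    from the rest (e.g. a pasted face) often stand out as brighter
    or darker areas in the difference image."""
    original = Image.open(path_or_file).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)

# diff = error_level("questioned.jpg")  # hypothetical file name
# diff.save("ela_map.png")              # then inspect the brightest regions
```

Like any single signal, error levels need careful interpretation; ADJPEG's statistical model is far more robust, which is why corroborating filters matter.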

Screenshot of Amped Authenticate software displaying an EM DCT Map analysis on a suspected deepfake image. The map shows a highlighted red region—likely manipulated—surrounded by green areas indicating unaltered content. The analysis uses the EM-1-12 mode from the JPEG DCT filter set, aiding digital image forensics by detecting compression inconsistencies and possible tampering.

Let’s look at the image’s metadata. We find traces of GIMP, the software used to process the image, along with other technical features (such as non-standard JPEG Huffman tables). These are very unlikely to be found in a photo coming straight out of a digital camera.

Screenshot from Amped Authenticate software showing detailed JPEG file metadata under the “File Format” tab. The analysis highlights critical forensic flags, including missing thumbnail, use of GIMP 3.0.4 editing software, and inconsistent EXIF dates. The image’s compression signature is absent, no chroma subsampling is detected, and non-standard Huffman tables are noted, indicating potential manipulation.

The DCT Plot filter also reveals typical signs that the image has been resaved, further confirming its lack of originality.

Screenshot of Amped Authenticate displaying a DCT Plot under the “Intensity-Quantized-0” setting for JPEG analysis. The bar graph shows quantized DCT coefficients intensity distribution, helping detect signs of image manipulation or recompression. The interface includes project files, filters, and DCT domain settings for forensic image authenticity evaluation.

This quick example shows how essential it is to perform a complete analysis of an image or video. In other cases, we could also have used reverse image search, frequency spectrum analysis, or shadow and reflection analysis.
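Of those, frequency spectrum analysis is the most straightforward to sketch: the upsampling layers in GAN and diffusion pipelines can leave periodic, grid-like peaks in the 2D Fourier spectrum that natural photos typically lack. A minimal version (NumPy only; thresholds and interpretation are deliberately left to the examiner) might look like:

```python
import numpy as np

def log_spectrum(gray):
    """Centered log-magnitude Fourier spectrum of a grayscale image array.

    Synthetic images often show regular peaks or a grid pattern here,
    a trace of the generator's upsampling steps; a human examiner
    still has to interpret the resulting plot.
    """
    gray = np.asarray(gray, dtype=float)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum))

# Example on synthetic data: a purely periodic pattern yields sharp,
# isolated peaks in the spectrum, the kind of regularity to look for.
x = np.arange(64)
pattern = np.sin(2 * np.pi * x / 8)[None, :] * np.ones((64, 1))
spec = log_spectrum(pattern)
```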

A Wake-up Call

For the past few years, I’ve worked to give policymakers and all criminal justice stakeholders some foundations of video evidence literacy. Now, with deepfakes exploding, the increasing use of AI in casework, and widespread misinformation, it’s important to reiterate our clear commitment: we are dedicated to forensics. Our findings carry legal weight and directly affect people’s lives and freedom. In our field, relying on deepfake detection alone is a dangerous illusion. What we, and justice, truly need is deepfake forensics.


 Martino Jerian

Martino Jerian is the CEO and Founder of Amped Software. He holds a degree in Electronic Engineering (summa cum laude) from the University of Trieste, Italy, where his thesis focused on forensic image processing. In 2008, he founded Amped Software, leading the development of advanced tools for image and video forensics. With a strong background in software engineering, he played a key role in designing and driving the initial development of the company’s products. Martino has been a contract professor in university courses on investigations, forensics, and intelligence. He has authored multiple scientific papers in the field of image and video forensics and has served as a forensic expert in high-profile judicial cases. His work bridges the gap between cutting-edge technology and the pursuit of security and justice.
