
Authenticate Update 39075: Deepfake Detection has a New Dress and is Now Available in the Smart Report, Improved GUI and Report, and More!

Reading time: 7 min

This fall’s Authenticate update brings great improvements to our deepfake detection features, with the ability to run deepfake detection on batches of images and from within the Smart Report!

Dear friends, here we are again, presenting a new update to Amped Authenticate! As usual, we’ve been inspired by the requests coming from you, our valued users. There’s a lot to show:

  • we’ve improved the user interface,
  • we’ve simplified the Diffusion Model Deepfake filter and added a batch processing feature for it,
  • we’ve added a new Deepfake Detection category into the Smart Report tool,
  • and more!

There’s a lot of news, so let’s dive in and check it all out!

See the New Features in Action!

Graphical User Interface Improvements

Dockable Panels for Image Mode

Our engineers improved the GUI consistency between Authenticate’s Image and Video Modes. You can now undock panels in Image Mode, so you can organize the GUI to better suit your working habits. If you get lost, you can always restore the default windows layout from the View menu.

Screenshot of Amped Authenticate software showing a DCT (Discrete Cosine Transform) quantization plot for forensic image analysis. The bar graph in the center displays intensity values with a comb-shaped pattern, suggesting multiple JPEG compression events. The left panel lists analysis tools such as EXIF, JPEG QT, and Correlation Plot, while the right ‘Project’ panel organizes results under categories like Integrity Analysis, Global Processing Analysis, and Local Tampering Analysis. A note below explains that uneven DCT bar distribution indicates the image has likely undergone more than one compression cycle, revealing possible editing or manipulation

For users who upgraded from a previous version: if undocking doesn’t work for you, simply click on View – Reset Windows Layout to fix the issue.

New Option for Loading Files in Visual Inspection

Until now, when you dragged a new file into Authenticate or loaded it from the Evidence panel, the currently selected filter was automatically run on the file. This is useful in some cases, but can waste time in others.

Starting with this release, you’ll be able to choose the default behavior! Head to the Program Options and decide whether newly loaded files should be automatically opened in the Visual Inspection filter, or in the currently selected filter. This applies to both the Image and Video Modes.

Screenshot of the Program Options window in Amped Authenticate showing the General Settings tab. The highlighted option, "Always open new files in Visual Inspection", is set to "Yes".

Please note that this option does not affect the behavior of the “next/previous file” buttons, as they are designed to allow a quick comparison of the current filter’s results across images in the same folder.

Close-up screenshot of the toolbar in Amped Authenticate software, showing a loaded file named "Deepfake Detection Test\IMG_20200102_075428.jpg". The toolbar section highlighted in red features image navigation buttons, including first, previous, play, next, and last frame controls. The evidence image thumbnail appears on the left, with file details on the right indicating a 4032x3024 JPEG image with 95% quality.

Seeking a Specific Frame for Video Mode

Video Mode now lets you seek a specific frame by typing its number in the Viewer and hitting Enter. This is very handy when you’re hunting for a specific frame in a long video.

Screenshot of Amped Authenticate Video Mode showing a night-time video scene with a large explosion over a cityscape. The interface displays the file path of the loaded video at the top, with filter options such as Visual Inspection and Compression Analysis in the left panel. The frame number ‘124’ is highlighted at the bottom, indicating the current frame being analyzed. On the right, visual inspection controls for highlights, midtones, and shadows are visible, along with an option to export the processed video.

Improvements to the Diffusion Model Deepfake Filter

Diffusion Models are the deep learning technology behind the most recent deepfake generators. They create images, and nowadays videos, that are very hard to distinguish from real content with the naked eye.

Authenticate’s Diffusion Model Deepfake filter is a Machine Learning (ML) based tool trained to distinguish natural images from synthetically generated images created with tools like Dall-E, Midjourney, Stable Diffusion, and Flux. Like every ML tool, it can be wrong sometimes, which is why we always recommend cross-checking its results with other types of analysis, e.g. looking for geometrical inconsistencies, visual artifacts, and so on.

A Brand New Dress

With this release, we’ve improved and simplified the way the Diffusion Model Deepfake filter provides its results. Instead of an output table showing compatibility scores for each of the inner classes, we’ve narrowed down the output to two scores:

  1. compatibility with a known diffusion model
  2. non-compatibility with a known diffusion model

This makes interpreting the output much easier.
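To illustrate the idea (this is just a hypothetical sketch, not Authenticate’s internal model output), a multi-class detector’s per-class probabilities could be collapsed into the two scores shown by the new GUI like this. The class names and values below are invented for the example:

```python
# Hypothetical sketch: collapsing per-class probabilities (assumed to
# sum to 1) into the two-score summary described above. Class names
# are illustrative, not Authenticate's actual inner classes.

def collapse_scores(class_scores: dict[str, float]) -> dict[str, float]:
    """Reduce per-class scores to compatible / not-compatible scores."""
    not_compatible = class_scores.get("natural", 0.0)
    compatible = sum(s for name, s in class_scores.items() if name != "natural")
    return {
        "compatible_with_known_model": compatible,
        "not_compatible_with_known_model": not_compatible,
    }

# Example: scores spread across hypothetical generator classes are
# summed into a single "compatible" score.
result = collapse_scores(
    {"natural": 0.02, "stable_diffusion": 0.95, "midjourney": 0.03}
)
```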

Moreover, we’ve turned the output into a graphical representation. If the image is found to be compatible with a diffusion model, you’ll see a red border around it, along with the compatibility score displayed at the bottom.

Screenshot of Amped Authenticate software analyzing an image named "rome_protests.jpg" using the Diffusion Model Deepfake tool. The interface displays a protest scene with police officers and civilians, highlighted with a red border indicating analysis. The result below the image shows "Compatible with known AI model: 0.981" suggesting the software detected strong evidence of AI-generated or manipulated content. The left panel lists analysis tools such as Fusion Map, Noise Map, PRNU Tampering, and Deepfake Detection

Conversely, when the image is classified as not compatible with a known AI model, Authenticate draws a more neutral gray border around it. This indicates that the image’s trustworthiness cannot be determined from that result alone: the image could still be fake even if AI didn’t generate it, or it may have been created by an AI system that the filter could not detect.

Screenshot of Amped Authenticate software analyzing a landscape image titled "IMG_20200102_075428.jpg" using the Diffusion Model Deepfake tool. The interface displays a misty field with trees at sunrise under review. The analysis result at the bottom states "Not compatible with known AI model: 1.000", indicating that the image was not found compatible with any known AI model. The left panel lists forensic tools including Fusion Map, PRNU Map, and Deepfake Detection options.

You can still access the results in tabular form, for easier copy-pasting, by clicking the dedicated button in the filter GUI.

Screenshot of Amped Authenticate software showing the "Diffusion Model Deepfake Results" window for an image named "rome_protests.jpg". The interface displays a highlighted "Show Table Output" button and analysis results indicating compatibility with a known diffusion model at a value of 0.981, and non-compatibility at 0.000. The left panel lists forensic tools under Deepfake Detection, and the viewer window partially shows the analyzed protest scene bordered in red.

Batch Processing

We’ve also introduced the ability to run the Diffusion Model Deepfake filter on a folder of images.

Results are presented in a very convenient table from which you can right-click to load images for further analysis. As with any other table in Authenticate, you can also right-click to export the results to an HTML or TSV file.
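Conceptually, a batch workflow like this one walks a folder, scores each image, and writes the results to a table. The sketch below shows the idea in plain Python; `classify` is a stand-in placeholder, not the actual Diffusion Model Deepfake filter:

```python
# Hypothetical sketch of a folder-level batch analysis exporting a TSV.
# `classify` is a placeholder for a real detector.
import csv
from pathlib import Path

def classify(path: Path) -> float:
    """Placeholder: a real detector would analyze the image here."""
    return 0.0

def batch_analyze(folder: str, out_tsv: str) -> list[tuple[str, float]]:
    """Score every .jpg in `folder` and export the results as a TSV."""
    rows = []
    for image in sorted(Path(folder).glob("*.jpg")):
        rows.append((image.name, classify(image)))
    with open(out_tsv, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["filename", "compatibility_score"])
        writer.writerows(rows)
    return rows
```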

Screenshot of Amped Authenticate software displaying the "Batch Diffusion Model Deepfake Analysis" table. The highlighted "Analyze All Images In Evidence Folder" button indicates a bulk deepfake detection process. The results table lists multiple images with analysis outcomes showing whether each is "compatible with a known diffusion model" or "not compatible".

Face GAN and Diffusion Model Deepfake Filters Now Available in Smart Report

A Quick Introduction to the Smart Report

Authenticate’s Smart Report is an excellent way to do an initial triage of your images. For those unfamiliar with it, the Smart Report analyzes the image’s file format and metadata, looking for suspicious signs of processing (e.g., absence of metadata, non-standard Huffman Tables, and more).

  • If all looks good, the image is marked with a green light as being “likely camera original”, keeping everything very quick. 
  • If, instead, at least one warning is raised, then a set of local analysis filters is run on the image. If a local analysis filter raises a warning, the image gets flagged with a red light as containing “traces of possible forgery”. 
  • Otherwise, it goes into the intermediate state, marked with a yellow light.
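The traffic-light logic above can be sketched in a few lines. This is a minimal illustration of the decision flow as described, not the Smart Report’s actual implementation, which inspects file format and metadata before deciding whether to run the local analysis filters:

```python
# Minimal sketch of the Smart Report triage logic described above.
# Inputs are illustrative warning counts, not real Smart Report data.

def triage(metadata_warnings: int, local_filter_warnings: int) -> str:
    """Return the traffic-light verdict for one image."""
    if metadata_warnings == 0:
        return "green"   # likely camera original; local filters skipped
    if local_filter_warnings > 0:
        return "red"     # traces of possible forgery
    return "yellow"      # suspicious metadata, but no local-analysis warning
```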

You can read more details about the Smart Report here.

Summary Table from Amped Authenticate showing the analysis results of 11 processed images. The report indicates: 5 images likely to be camera originals (green indicator), 1 image with suspicious metadata but no signs of forgery (yellow indicator), and 5 images with traces of possible forgery (red indicator).

Running Deepfake Detection in the Smart Report

With this release, we’re introducing the ability to run the Face GAN Deepfake and/or the Diffusion Model Deepfake filters as part of the Smart Report. This is an optional feature that you can toggle from the Smart Report configuration dialog.

Smart Report settings window in Amped Authenticate showing deepfake detection options. The dropdown menu under "Deepfake Detection" is highlighted and expanded, displaying four selectable modes: Disabled, Diffusion Models + Face GAN, Diffusion Models, and Face GAN. The selected option is "Diffusion Models". Other parameters include "Files To Process" set to "All Images In Evidence Folder" and "Output Type" set to "Single Report File".

You’ll find the results from the Deepfake Analysis filters in a dedicated section, below the Local Analysis filters results.

Deepfake Analysis report showing two side-by-side images of a woman facing her reflection. The left image, labeled "Diffusion Model Deepfake", is outlined in red and marked as "Compatible with known AI model: 0.995", indicating a likely AI-generated image. The right image, labeled "Face GAN Deepfake", displays a green banner reading "Not GAN (1.00)", suggesting no GAN-based manipulation detected. The top warning text in red states "Deepfake Analysis – WARNING!"

Keep in mind that the Smart Report is a triage tool: it is not meant to replace a thorough analysis of each image.

Input File Details in the Report

Providing details and hash values for input files is a must-have for every forensic report. Since the very first release of Authenticate, every bookmark in the report has featured the hash value of the analyzed file.

However, for large projects involving many files, it is often more convenient to have the hash values and other details of all input files grouped at the top of the report. That’s exactly what we’ve introduced in this release! In the Generate Report dialog, for both Image and Video Modes, you can now enable the new option “Add input files hash and info”.
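Computing this kind of per-file summary is straightforward with standard hashing tools. Here is a small sketch, using MD5 as in the screenshot below (the field names are our own, chosen for the example):

```python
# Sketch of gathering per-file details like those the new report option
# adds: filename, MD5 hash, and size, with each file listed only once.
import hashlib
from pathlib import Path

def file_details(paths: list[str]) -> list[dict[str, object]]:
    """Return name, MD5, and size for each unique input file."""
    details, seen = [], set()
    for p in map(Path, paths):
        if p in seen:  # a file bookmarked multiple times is listed once
            continue
        seen.add(p)
        digest = hashlib.md5(p.read_bytes()).hexdigest()
        details.append({"name": p.name, "md5": digest, "bytes": p.stat().st_size})
    return details
```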

Generate Report settings window in Amped Authenticate showing report configuration options. The highlighted dropdown field "Add input files hash and info" is set to "Yes – MD5", indicating that file hash verification will be included in the report.

Of course, each input file will only be listed once, even if you have multiple bookmarks for it. This is what you’ll find in your report:

Input file details section from a digital forensic report showing three analyzed files with metadata. Each file entry lists the filename, MD5 hash code, file size in bytes, format type, and image dimensions in pixels. The files include one JPG image named "D19_I_nat_0023.jpg" and two PNG screenshots labeled "ScreenCapture1.PNG" and "ScreenCapture2.PNG", each with unique hash values and resolutions.

Improvements to the Macroblocks and Coding Tree Units Filters

We have considerably improved how you can display data in the Macroblocks and Coding Tree Units filters!

First of all, you can now customize the color of the subdivision lines and of each motion vector type, which helps a lot when working with very bright videos.

Screenshot of Amped Authenticate Video showing macroblock quantization parameter analysis on a video frame. The frame is divided into a pink color-coded grid displaying numerical quantization values for each macroblock. On the right, the "Filter Parameters – Macroblocks" panel is visible, with a red rectangle highlighting the color configuration options for quantization parameter visualization, with color mapping set to "Frame-based" and a color legend indicating low to high quantization values.

We’ve also improved the way the Quantization Parameter value is displayed. For larger coding tree units, the value is now displayed only once, at the center, instead of being repeated multiple times.

Moreover, when visualizing the Quantization Parameter as a color scale, a new Color Map parameter is available. By setting it to “Frame-based”, the color scale will be determined on a relative basis, according to the maximum and minimum quantization parameter values within each individual frame. By setting it to “Absolute”, the minimum and maximum values allowed by the codec will be used instead.

In the example below, you can see that even a subtle difference like the one between 25 and 19 becomes noticeable when the Color Map is set to Frame-based.
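The difference between the two modes boils down to which minimum and maximum anchor the color scale. A small sketch (assuming a 0–51 QP range for the “Absolute” case, as in H.264/H.265; the function is ours, for illustration only):

```python
# Sketch of the two Color Map strategies described above:
# "frame"    -> scale anchored to the QP range within the current frame
# "absolute" -> scale anchored to the codec's full range (0-51 assumed)

def normalize_qp(qp_values: list[int], mode: str = "frame") -> list[float]:
    """Map QP values to [0, 1] for color mapping."""
    if mode == "frame":
        lo, hi = min(qp_values), max(qp_values)
    else:  # "absolute"
        lo, hi = 0, 51
    span = max(hi - lo, 1)  # avoid division by zero on uniform frames
    return [(qp - lo) / span for qp in qp_values]

# A subtle 19-vs-25 difference spans the whole frame-based scale:
print(normalize_qp([19, 22, 25], mode="frame"))  # [0.0, 0.5, 1.0]
```

On the absolute scale the same values map to roughly 0.37–0.49, which is why the subtle difference is much harder to see in that mode.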

Screenshot of Amped Authenticate Video showing Coding Tree Units analysis with quantization parameter visualization. The video frame is divided into pink color-coded blocks, each labeled with numerical quantization values. A red rectangle highlights a specific area of blocks in the frame. On the right, the "Filter Parameters – Coding Tree Units" panel is open, with another red box emphasizing the "Show Quantization Parameter" option and "Color Map: Frame-based" dropdown setting.

Other Improvements and Bug Fixes

We’ve made other improvements to the software, including:

  • Annotate: added the ability to copy-paste the position of annotation objects.
  • Annotate: Edge Feathering now adjusts proportionally to the redaction area.
  • Improved the description of the Face GAN Deepfake, Diffusion Model Deepfake, and Fusion Map filters.
  • GUI: fixed an issue where colors appeared to be lost when switching users, due to Windows behavior.
  • Advanced File Info: fixed a bug causing an incorrect value of “PTS duration (computed)” for some peculiar video formats.
  • Uninstaller: the license seat can now be deactivated during a manual uninstallation when the Digital license scheme is in use.
  • Video Mode: Advanced File Info: added the ability to display the desired frame in the viewer via “Go to Frame”.
  • Video Mode: fixed a bug that caused the chroma upsampling option not to be applied correctly.
  • Added a warning message that notifies the user when some software components cannot be found or accessed.

Don’t Delay – Update Today

The new features we’ve added can significantly speed up your workflow and help you focus on the most suspicious images first.

If you have an active support plan, you can update straight away by going into the menu About > Check for Updates within Amped Authenticate. If you need to renew your SMS plan, please contact us or one of our authorized distributors. And remember that you can always manage your license and requests from the Amped Support Portal.


 Marco Fontani

Marco Fontani is the Forensics Director at Amped Software, a software company developing image and video forensic solutions for law enforcement agencies worldwide. He earned his MSc in Computer Engineering in 2010 and his Ph.D. in Information Engineering in 2014. His research focused on image watermarking and multimedia forensics. He participated in several research projects funded by the EU and EOARD, and authored/co-authored over 30 journal and conference proceedings papers. He has experience in delivering training to law enforcement and provided expert witness testimony on several forensic cases involving digital images and videos. He is a former member of the IEEE Information Forensics and Security Technical Committee, and he actively contributed to the development of ENFSI’s Best Practice Manual for Image Authentication.
