Advanced diffusion model deepfake detection capabilities are coming to Amped Authenticate with this latest update, plus new tools and features for the Video Mode!
Dear friends, we’re excited to announce another massive update to Amped Authenticate! We’re adding a new filter for detecting deepfake images created with diffusion models, specifically Midjourney, Dall-E, and Stable Diffusion. We’re empowering the filters of the Video Mode to show plots and frame overlays simultaneously, allowing for the creation of compelling reports. And much more! Let’s jump into the details.
See the new features in action!
Introducing the New Deepfake Detection Category
Deepfakes are a rising concern in the world of media and forensics. AI-based image generation and tampering technologies are evolving at an astonishing pace and are widely and easily accessible to anyone. Amped has been investing substantial work and research into providing Authenticate users with solutions to face this threat.
We began by releasing the Face GAN Deepfake filter over a year ago to combat the proliferation of fake facial images created by services such as thispersondoesnotexist.com, which can be used to create fake profiles on social media, open bank accounts, etc. However, we always made it very clear (starting from the name) that the Face GAN Deepfake filter only deals with GAN-generated faces.
We’re now taking a step forward with the new Diffusion Model Deepfake filter, and so we found it appropriate to create a dedicated category called Deepfake Detection. From now on, this category will also host the Face GAN Deepfake filter.
The New Diffusion Model Deepfake Filter
Diffusion models are at the core of the most modern AI-based synthetic image generation systems, such as Midjourney, Dall-E, and Stable Diffusion. These systems are also known as “text-to-image” since they create an image starting from a textual prompt. For example, you write “an image of a car in a street during a sun day, real image” and you get this. Okay, the license plate is perhaps a bit odd, but the general quality is impressive.
We have therefore been closely monitoring the state of the art, looking for methods to detect this kind of imagery, and we eventually identified the research conducted by Cozzolino et al. as particularly promising. The method works by extracting so-called CLIP (Contrastive Language-Image Pre-training) features from the image. It then uses a simple Support Vector Machine classifier to assign the input image to one of several classes. In the coming weeks, we’ll dedicate an article to explaining how this filter works. The Diffusion Model Deepfake filter brings this technology to your lab.
We trained the system on a variety of real images and on images generated by the most common text-to-image systems: Midjourney, Stable Diffusion, and Dall-E. When you process an image with the filter, you’ll get a tabular output showing the confidence score assigned by the classifier to each class. For your convenience, the class with the largest score is always reported in the “Predicted Class” row.
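To give a rough idea of how such a pipeline fits together, here is a minimal, purely illustrative sketch combining a CLIP image encoder with a linear SVM classifier, in the spirit of Cozzolino et al.’s approach. It is not Authenticate’s implementation: the open_clip backbone, the class list, and the pre-trained classifier file (diffusion_svm.joblib) are assumptions made for the example.

```python
# Illustrative sketch only: CLIP features + SVM classification, in the
# spirit of Cozzolino et al. This is NOT Authenticate's implementation;
# the backbone, class list, and "diffusion_svm.joblib" are assumptions.
import joblib                 # to load a previously trained scikit-learn SVM
import open_clip
import torch
from PIL import Image

CLASSES = ["Real", "Midjourney", "Stable Diffusion", "Dall-E"]

# Load a CLIP image encoder and its preprocessing transform
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai"
)
model.eval()

def clip_features(path: str) -> torch.Tensor:
    """Return an L2-normalized CLIP embedding for one image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = model.encode_image(image)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical multi-class SVM trained offline on real and synthetic images
svm = joblib.load("diffusion_svm.joblib")

features = clip_features("questioned_image.jpg").numpy()
scores = svm.decision_function(features)[0]   # one confidence score per class
for name, score in zip(CLASSES, scores):      # tabular output
    print(f"{name}: {score:+.3f}")
print("Predicted Class:", CLASSES[scores.argmax()])
```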
The filter implements Authenticate’s standard warning system. When the predicted class is one of the diffusion models, Diffusion Model Deepfake will turn red in the filters panel.
As usual with detection filters based on machine learning, it is important to know that:
- The classifier can get it wrong. Although we did our best to train the classifier on a wide range of data, including data augmentation based on image post-processing, a given image may be misclassified.
- A high confidence score does not guarantee that the predicted class is correct. Unfortunately, a classifier can sometimes be sure about something and still be wrong.
Yes, we are perfectly aware of this, and we want you to know that every solution has limitations, including this one. However, both the experimental validation reported in the original paper and our internal validation confirm that the method’s performance is very promising.
We’re also aware that commercial text-to-image services are updated frequently. For this reason, we’ve set up an internal process to keep updating the Diffusion Model Deepfake filter in future Authenticate updates. So yes, that’s another good reason to keep your SMS service running! 😉
Before we move on, one important thing! If you’ve updated Authenticate from a previous version, you’ll likely find the new Diffusion Model Deepfake filter disabled by default. Simply right-click on it, enable it, and save the settings with the dedicated button at the top of the Filters panel.
New Tools for Authenticate’s Video Mode
Starting from this update, you’ll find more elements in the Tools menu of the Video Mode!
If you’re familiar with Authenticate, you’ll recognize these tools from the Image Mode. The Show Video Location on Google Maps tool will pick up GPS data from the video metadata (if available) and show the location on Google Maps using your default browser.
Needless to say, this can be extremely useful for authentication purposes. Similarly, clicking on Check Sun Position for Video Location and Date on SunCalc.org will open up SunCalc.org at the place and date reported in video metadata. In the example below, you can see how the Sun position is indeed consistent with the shadows seen in the image. Checking the Sun position is also an excellent way to complement a Shadow analysis available in Authenticate’s Image Mode. Remember, you can easily send a video frame to the Image Mode by clicking the dedicated button on the top ribbon.
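If you’re curious about what happens behind the scenes, the general idea can be reproduced with standard tooling. Below is a minimal sketch (not Authenticate’s code) that reads GPS tags from a video with ExifTool and opens the location in the default browser; it assumes the exiftool executable is on the PATH, and the file name is a placeholder.

```python
# Minimal sketch (not Authenticate's code): read GPS tags from a video's
# metadata with ExifTool and open the location in Google Maps.
# Assumes exiftool is on the PATH; "evidence.mp4" is a placeholder.
import json
import subprocess
import webbrowser

def video_gps(path: str):
    """Return (latitude, longitude) from the video metadata, or None."""
    out = subprocess.run(
        ["exiftool", "-json", "-n", "-GPSLatitude", "-GPSLongitude", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out)[0]
    if "GPSLatitude" in tags and "GPSLongitude" in tags:
        return tags["GPSLatitude"], tags["GPSLongitude"]
    return None

coords = video_gps("evidence.mp4")
if coords:
    lat, lon = coords
    webbrowser.open(f"https://www.google.com/maps?q={lat},{lon}")
else:
    print("No GPS metadata found in the video.")
```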
Finally, the Search Current Frame Content on the Web will send the currently displayed frame to Google Lens to search for similar content on the Web. Since this implies sending possibly sensitive imagery online, you’ll be asked to give explicit permission before continuing.
Once the frame is uploaded, Google Lens will let you optionally crop the region of the frame to be considered.
Improvements to the Video Mode’s Filters
For several filters of Authenticate’s Video Mode, you can now view a filter’s results overlaid on the video frames and in the plot simultaneously, allowing for the creation of compelling reports. Let’s see how this works for each of the filters involved.
Macroblocks
The Macroblocks filter visually displays the type of each macroblock and, when available, the motion vectors over the video. You can also use it to inspect the quantization parameter employed for each block. With this update, you’ll find a new button in the Filter Parameters panel called Generate Plot. Just click on it, and you’ll also be presented with a rich plot that lets you inspect the global behavior of each property over time. Plot generation is triggered by a button because creating the plot requires processing the entire video, which may take some time for longer videos, whereas the overlay is nearly instantaneous.
As you can see, the plot features many data series. We have grouped intra-predicted blocks into a single series called “Count of intra macroblocks”. We did the same for predicted and copied macroblocks.
Remember that the current series selection and the zoom level are stored when you save a bookmark. Thus, in the report, you’ll see exactly what was displayed when you bookmarked the filter.
Besides being useful for integrity verification, Macroblock analysis can also help quickly spot events in a static video. For instance, one could plot the magnitude of motion vectors or the count of intra-macroblocks, as shown in the sample below.
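To give a sense of why such a plot is useful, here is a rough, purely illustrative sketch. Since the codec’s actual macroblock data is not easily accessible from a short script, it uses dense optical flow as a pixel-domain stand-in for motion vector magnitude and plots the per-frame average, so that peaks reveal moments of activity. The file name is a placeholder, and this is in no way Authenticate’s implementation.

```python
# Rough illustration of the idea (not Authenticate's macroblock analysis):
# plot a per-frame motion magnitude to spot events in an otherwise static
# video. Dense optical flow is used as a pixel-domain stand-in for the
# codec's motion vectors; "surveillance.mp4" is a placeholder path.
import cv2
import numpy as np
import matplotlib.pyplot as plt

cap = cv2.VideoCapture("surveillance.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motion = []  # mean motion magnitude per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion.append(np.linalg.norm(flow, axis=2).mean())
    prev_gray = gray
cap.release()

plt.plot(motion)
plt.xlabel("Frame")
plt.ylabel("Mean motion magnitude (pixels)")
plt.title("Peaks reveal events in a static scene")
plt.show()
```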
Block Difference
The very same improvement is also available for the Block Difference filter. In this case, the plot depends on how you configure the filter; therefore, every time you edit the parameters, the plot must be re-computed.
We’ve also added the ability to choose the color to use for representing unchanged blocks.
Setting the Block Difference’s Threshold parameter to a large value and then plotting is another very effective way to spot events in a surveillance video, as shown in the example below.
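Again purely for illustration (and not the filter’s actual implementation), the sketch below counts, for every frame, how many 16×16 blocks differ from the previous frame by more than a threshold and plots that count over time; the block size, the threshold, and the file name are arbitrary assumptions.

```python
# Illustrative sketch of block-difference event spotting (not the
# Block Difference filter itself). BLOCK, THRESHOLD and the file name
# are arbitrary assumptions.
import cv2
import numpy as np
import matplotlib.pyplot as plt

BLOCK, THRESHOLD = 16, 20.0  # block size in pixels, mean-difference threshold

def block_mean(img: np.ndarray) -> np.ndarray:
    """Average an image over non-overlapping BLOCK x BLOCK tiles."""
    h, w = img.shape
    h, w = h - h % BLOCK, w - w % BLOCK          # crop to a multiple of BLOCK
    tiles = img[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return tiles.mean(axis=(1, 3))

cap = cv2.VideoCapture("surveillance.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

changed = []  # number of "changed" blocks per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray).astype(float)
    changed.append(int((block_mean(diff) > THRESHOLD).sum()))
    prev_gray = gray
cap.release()

plt.plot(changed)
plt.xlabel("Frame")
plt.ylabel("Changed blocks")
plt.show()
```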
Channels
The Channels filter gets two improvements with this release! First, you can now select a specific channel of a color space, and that channel alone will be displayed in the viewer. Moreover, the plot will now be computed based on your color space selection, saving computation time.
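As a tiny illustration of what a single-channel view means in practice (unrelated to Authenticate’s own code), the snippet below converts a frame to the YCbCr color space with OpenCV and displays the Cr channel alone; the file name is a placeholder.

```python
# Minimal illustration of viewing one channel of a chosen color space
# (not the Channels filter itself). "frame.png" is a placeholder.
import cv2

frame = cv2.imread("frame.png")
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)   # convert color space
y, cr, cb = cv2.split(ycrcb)                       # split into channels
cv2.imshow("Cr channel only", cr)                  # view a single channel
cv2.waitKey(0)
```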
Improvements Shared by the Image and Video Modes
Bookmarks
It may easily happen that you’ve worked on a bookmark for a while, adding comments and possibly annotations, only to realize that a slight tuning of the filter parameters would make the result even more compelling.
Until today, you had to create a new bookmark and then transfer all the comments and annotations to that one. From this update on, you can right-click on the currently active bookmark and update it to the new settings, saving considerable time!
When you update a bookmark, if you haven’t changed its default name, Authenticate will automatically update the bookmark’s name to reflect the new filter parameter values. If, instead, you had previously customized the name, you’ll be prompted to enter a name for the updated bookmark.
Saving Screenshots to File and Clipboard
The screenshot button has become more powerful! By right-clicking on it, you can now choose what should be captured and where the screenshot should go: to a user-defined filename, to an automatically generated filename, or to the clipboard, ready to be pasted into a text editor or any other program.
You can also set the default options from Authenticate’s Program Options.
Other Improvements
- Moved from EXE to MSI installer: Amped Authenticate is now delivered as an MSI installer. Since Authenticate is only delivered in 64-bit form, the default installation path is now C:\Program Files\Amped Authenticate.
- Annotate – Magnify: we added a checkbox for toggling the Contrast/Brightness adjustment preview.
- Video Mode – GUI: the Plot or Table viewer will now automatically show up when a filter that uses them is applied.
- We’ve updated ExifTool to version 12.77.
- You can now load images and projects by dragging and dropping files anywhere on the GUI.
- We added the ability to load images extracted from a PDF directly into Authenticate after the extraction process is completed.
Bugfixes
- GUI: fixed a bug that caused side effects on GUI while dragging items in the project or filters panel.
- GUI: fixed a bug that caused the software to freeze when zooming in to extreme levels on a plot.
- Video Mode – Project: fixed a bug causing a crash when deleting a bookmarks folder.
- Video Mode – Project: fixed a bug that caused bookmarks in the project panel to temporarily get a different name when renaming folders.
- Video Mode – GUI: fixed a bug causing plots not to display properly for some language configurations.
- VPF: fixed a bug that zeroed the signal value for some frames.
- VPF: fixed a bug that caused processing to fail on videos with some odd resolutions.
- Histogram Equalization: fixed a behavior that caused mouse selection to conflict with panning when zooming.
- Macroblocks: removed an unnecessary warning about missing parameters when a project containing the Macroblocks filter is reloaded.
Don’t Delay – Update Today
If you have an active support plan, you can update straight away by going into the menu Help>Check for Updates Online within Amped Authenticate. If you need to renew your SMS plan, please contact us or one of our authorized distributors. And remember that you can always manage your license and requests from the Amped Support portal.