Author Archives: Marco Fontani

How to Quickly Find Manipulated Objects With Amped Authenticate’s Shadows Filter

Dear Tip Tuesday addict, welcome to this week’s dose! Today we’re dealing with one of the latest features introduced in Amped Authenticate, the Shadows filter. We’ll see how the filter can help you identify which object was manipulated, which is very handy when you’ve selected many shadows in the image. Keep reading!

Continue reading

Learn How to Lock Annotations and Prevent Accidental Modifications in Amped Replay and Amped FIVE

Dear friends, welcome to a brand new tip, which covers both Amped Replay and Amped FIVE! It’s the kind of tip people love: very quick and potentially a lifesaver! We’ll show how you can protect the annotations that you make so that you don’t accidentally edit or delete them. Keep reading!

Continue reading

Layered Perfection: Learn Different Ways of Adding Multiple Annotations With Amped FIVE

For a few months now, Amped FIVE has featured the new Annotate filter, which makes annotating your video easy and effective at the same time. This tip is dedicated to the difference between adding multiple annotation objects using a single instance of the filter and adding one annotation per filter. It’s not the same! Curious? Keep reading!

Continue reading

Make Impressive Annotations in Seconds with Amped Replay’s Assisted Tracking

Dear Amped friends, welcome to this week’s tip! We hope you’ve already heard of the latest Amped Replay update, which rolled out a few weeks ago. One of the coolest new features is assisted tracking, which makes annotations in Amped Replay even more powerful and easier to use than before. This week’s tip comes directly from one of our developers: it helps make assisted tracking work better in challenging situations, so don’t miss it and keep reading!

Continue reading

Handle With Care: Edit Project Files With a Text Editor

Dear friends, welcome to this week’s tip! Today we’ll talk about something that is more of a philosophy than a feature, and as such, you’ll find it reflected in all Amped products. We’re talking about the way Amped solutions deal with export formats and project files. We’ll show you how compatible our export formats are and how readable (and… editable!) our project files are, so… keep reading!

Continue reading

How to Use Amped Authenticate to Reveal Traces of Former JPEG Compression in Seemingly Uncompressed Images

Dear Amped friends, welcome to one more tip! Following the recent mini-series about unveiling traces of double JPEG compression, today we’ll show how Amped Authenticate can reveal if a seemingly uncompressed image was actually JPEG compressed in its past. Keep reading to find out more!

Continue reading

Is PRNU Camera Identification Still Reliable? Tests on Modern Smartphones Show We May Need a New Strategy!

Dear Amped friends, today we’re sharing with you something big. If you’ve been following us, then you know that Amped invests lots of resources into research and testing. We also join forces with several universities to stay at the cutting edge of image and video forensics. During one of these research ventures with the University of Florence (Italy), we discovered something important regarding PRNU-based source camera identification.

PRNU-based source camera identification has for years been considered one of the most reliable image forensics technologies: given a suitable number of images from a camera, you can use them to estimate the sensor’s characteristic noise (we call it the Camera Reference Pattern, or CRP). Then, you can compare the CRP against a questioned image to determine whether it was captured by that specific exemplar. You can read more about PRNU here.
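To give a feel for the workflow described above, here is a minimal NumPy sketch of the idea: extract a noise residual from each image with a denoiser, average the residuals into a CRP, and correlate the CRP with a questioned image’s residual. This is not Amped Authenticate’s implementation: real PRNU pipelines use wavelet-based denoisers and more careful normalization, and the function names below are our own illustrative choices.

```python
import numpy as np

def denoise(img, k=3):
    """Crude denoiser: a k-by-k box (local mean) filter, standing in for
    the wavelet denoisers typically used in the PRNU literature."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    """High-frequency residual: the image minus its denoised version."""
    img = np.asarray(img, dtype=float)
    return img - denoise(img)

def estimate_crp(images):
    """Estimate the Camera Reference Pattern by averaging the residuals
    of many images from the same camera (content averages out)."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(crp, img):
    """Normalized correlation between the CRP and a questioned image's
    residual; same-sensor images should score noticeably higher."""
    r = noise_residual(img)
    a, b = crp - crp.mean(), r - r.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

On synthetic data (a fixed random “sensor pattern” added to random content), the correlation with a same-sensor image comes out clearly above the correlation with an image carrying a different pattern, which is exactly the behavior the identification decision relies on.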

Since its beginnings, the real strength of PRNU-based source camera identification was that false positives were extremely rare, as shown in widely acknowledged scientific papers. The uniqueness of the sensor fingerprint was so strong that researchers were even able to cluster images based on their source device by comparing the residual noise extracted from single images, in a one-vs-one fashion. We tested this one-vs-one approach over the VISION dataset, which is composed of images captured with 35 portable devices (released roughly between 2010 and 2015), and it actually worked. Take a look at the boxplot below. On the X-axis you have the 35 different devices in the VISION dataset (click here to see the list). For each device, the vertical green box shows the PCE values obtained by comparing pairs of images captured by the device itself (the thick box covers values from the 25th to the 75th percentile, the circled black dot is the median value, and isolated circles are “outlier” values). Red boxes and circles represent the PCE values obtained by comparing images from the device against images from other devices. As expected, for most devices the green boxes lie well above the dashed horizontal line at 60, which is the PCE threshold commonly used to claim a positive match. Most noticeably, no red circles sit well above the PCE threshold: a few appear sporadically here and there, but they stay at values below 100, so we can call them “weak false positives”.
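For readers curious about the PCE values plotted above: Peak-to-Correlation Energy is the squared peak of the cross-correlation between the CRP and a residual, divided by the average squared correlation away from the peak. The sketch below is a simplified illustration of that definition, not the exact formulation used in Amped Authenticate or in the papers (which handle signed peaks, trimming, and normalization more carefully).

```python
import numpy as np

def pce(crp, residual, exclude=5):
    """Peak-to-Correlation Energy: squared cross-correlation peak over the
    mean squared correlation outside a small window around the peak."""
    a = crp - crp.mean()
    b = residual - residual.mean()
    # Circular cross-correlation computed in the frequency domain
    xc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    peak_idx = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    peak = xc[peak_idx]
    # Mask out a small neighborhood around the peak (wrapping at the edges)
    mask = np.ones(xc.shape, dtype=bool)
    half = exclude // 2
    rows = np.arange(peak_idx[0] - half, peak_idx[0] + half + 1) % xc.shape[0]
    cols = np.arange(peak_idx[1] - half, peak_idx[1] + half + 1) % xc.shape[1]
    mask[np.ix_(rows, cols)] = False
    return float(peak ** 2 / np.mean(xc[mask] ** 2))
```

With a matching pattern the PCE lands far above the 60 threshold mentioned above, while an unrelated residual stays well below it; that separation is what makes the threshold usable as a decision rule.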

But with all the computation that happens inside modern devices, is PRNU still equally reliable? To answer this question, we downloaded thousands of images from the web, filtering them so as to keep only pictures captured with recent (2019+) smartphones. We also filtered out images with traces of editing software in their metadata, and we applied several heuristic rules to exclude images that did not seem to be camera originals. For some devices, we also collected images at two of the default resolutions. We then grouped images by uploading user, assuming that different users take pictures with different exemplars and that a single user owns only one exemplar. Now, take a look at what happened when we tested Samsung smartphones.
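To illustrate the kind of metadata screening described above, here is a small pure-Python sketch. The tag names, editor list, and rules are hypothetical stand-ins of our own choosing, not the actual heuristics used in the study; they just show the flavor of filtering on editing-software traces and default resolutions.

```python
# Hypothetical list of editor names to look for in the EXIF "Software" tag
EDITING_SOFTWARE_HINTS = ("photoshop", "gimp", "lightroom", "snapseed", "picasa")

def looks_camera_original(metadata, default_resolutions):
    """Heuristic check that EXIF-like metadata shows no obvious signs of
    processing. `metadata` is a dict of tags such as "Software", "Make",
    "Model", "ImageWidth", "ImageHeight"; `default_resolutions` is a set
    of (width, height) pairs the device produces by default."""
    software = str(metadata.get("Software", "")).lower()
    if any(hint in software for hint in EDITING_SOFTWARE_HINTS):
        return False  # explicit trace of an editing tool
    # A camera original should match one of the device's default resolutions
    size = (metadata.get("ImageWidth"), metadata.get("ImageHeight"))
    if default_resolutions and size not in default_resolutions:
        return False
    # Camera originals normally carry the Make and Model tags
    return bool(metadata.get("Make")) and bool(metadata.get("Model"))
```

A real pipeline would of course inspect many more tags (thumbnails, quantization tables, GPS consistency, and so on), but even simple rules like these remove a large share of obviously processed images.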

Continue reading