
How to Quickly Find Manipulated Objects With Amped Authenticate’s Shadows Filter

Reading time: 4 min


Dear Tip Tuesday addict, welcome to this week’s dose! Today we’re dealing with one of the latest features introduced in Amped Authenticate, the Shadows filter. We’ll see how the filter can help you identify the manipulated object, which is very handy when you’ve selected many shadows in the image. Keep reading!

Understanding the Shadows Filter in Amped Authenticate

If you’ve never tried Amped Authenticate’s Shadows filter, you’re definitely missing something! You’ll find it in the Geometrical Analysis category, since it aims to detect physical inconsistencies rather than anomalies at the pixel or statistical level.

The idea is quite simple: the user clicks on a point on a shadow, then on two more points in the image that conservatively bracket the object casting that shadow.

Detecting Tampering Through Light Source Inconsistencies

Under the assumption that there’s a single, point-like light source (e.g. the sun, or a single light bulb or spotlight), the shadows cast by objects can be used to constrain the position of the light source. Each selection defines a wedge that constrains the position of the light source on the image plane (of course, that position is not necessarily contained within the image). If all the available wedges share a common area (called the “intersection”), then the shadows are consistent. This is well explained in the image below, where the five green wedges intersect to form the yellow region, which actually contains the lamp.
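To make the geometry concrete, here is a minimal Python sketch of the feasibility test (hypothetical code, not Amped Authenticate’s actual implementation, and the function names are invented for illustration). Each constraint is a triple of points (the shadow point plus the two bracketing points); each wedge contributes two half-plane inequalities, and since a nonempty intersection of wedges always has a vertex where two boundary lines cross, feasibility can be decided by enumerating those candidate vertices:

```python
# Hypothetical sketch of the wedge-feasibility test behind shadow
# analysis (not Amped Authenticate's actual implementation).
# A constraint is (s, p1, p2): s is a point on the cast shadow and
# p1, p2 bracket the object casting it.

def wedge_halfplanes(s, p1, p2):
    """Turn one shadow constraint into two half-planes n.x <= b
    whose intersection is the wedge of candidate light positions.
    Assumes the wedge spans less than 180 degrees."""
    d1 = (p1[0] - s[0], p1[1] - s[1])
    d2 = (p2[0] - s[0], p2[1] - s[1])
    dm = (d1[0] + d2[0], d1[1] + d2[1])  # direction into the wedge
    planes = []
    for d in (d1, d2):
        n = (-d[1], d[0])  # normal of the boundary line through s
        if n[0] * dm[0] + n[1] * dm[1] > 0:
            n = (-n[0], -n[1])  # orient it away from the wedge interior
        planes.append((n, n[0] * s[0] + n[1] * s[1]))
    return planes

def feasible(constraints, tol=1e-9):
    """True when all wedges share at least one common point.
    A nonempty intersection of (line-free) wedges always has a vertex
    where two boundary lines cross, so checking those candidates
    suffices."""
    planes = [hp for c in constraints for hp in wedge_halfplanes(*c)]
    def ok(x):
        return all(n[0]*x[0] + n[1]*x[1] <= b + tol for n, b in planes)
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            (n1, b1), (n2, b2) = planes[i], planes[j]
            det = n1[0] * n2[1] - n1[1] * n2[0]
            if abs(det) < tol:
                continue  # parallel boundaries: no unique crossing
            # Cramer's rule for the crossing of the two boundary lines
            x = ((b1 * n2[1] - b2 * n1[1]) / det,
                 (n1[0] * b2 - n2[0] * b1) / det)
            if ok(x):
                return True
    return False
```

For instance, three wedges that all bracket a light at (0, 10) yield a feasible system, while adding a fourth wedge that points away from it makes the system unfeasible.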

[Screenshot: Amped Authenticate’s Shadows filter on an indoor photograph; five green shadow constraints converge toward the visible lamp, and the panel reports “System Feasible” in green.]

Now, what happens if the image contains a spliced object? If the object’s shadow has not been forged correctly, when we add the corresponding wedge the system may become unfeasible, meaning that no intersection exists for all the wedges. This is actually what happens in the example below (taken from the great Realistic Tampering Dataset by P. Korus): notice that the system is now marked as unfeasible.

[Screenshot: the Shadows filter on a street scene with several people; the constraints’ wedges no longer intersect, and the panel reports “System Unfeasible” in red.]

Okay, so we know the above image is fake. But how do we identify the forged object? One possibility is to iteratively mark each constraint as “Inconsistent”, so that it gets excluded from the computation. To do so, simply right-click on it in the list and select Mark as inconsistent. The constraint will turn red and be excluded from the intersection. If we guessed the right object, the system becomes Feasible and we’re done. In the video below, we get the right constraint on the second attempt!
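The leave-one-out check can be sketched in a few lines (a hypothetical helper, not part of the product; the feasibility test is passed in as a function). For brevity, the usage example replaces the 2-D wedges with 1-D intervals, since both are just convex constraints that either share a common point or don’t:

```python
# Hypothetical sketch of the leave-one-out check; `feasible` is any
# test that decides whether a set of constraints shares a common point.

def find_fake_candidates(constraints, feasible):
    """Exclude each constraint in turn; the ones whose exclusion makes
    the remaining system feasible are candidates for the forged
    object's shadow."""
    candidates = []
    for i, c in enumerate(constraints):
        rest = constraints[:i] + constraints[i + 1:]
        if feasible(rest):
            candidates.append(c)
    return candidates

# Toy stand-in for the wedge test: 1-D intervals are feasible when
# they all overlap in at least one point.
def intervals_feasible(ivs):
    return not ivs or max(lo for lo, _ in ivs) <= min(hi for _, hi in ivs)
```

With constraints [(0, 5), (1, 6), (2, 7), (10, 11)], only excluding (10, 11) restores feasibility, so it is the lone candidate for the fake.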

However, when you have many constraints, it may take a while to check all of them!

Using the Minimum Unfeasible Set to Pinpoint Fakes

And here comes the tip of the day! When the system is in an unfeasible state, a promising Show Min. Unfeasible Set button becomes available:

[Screenshot: close-up of the shadow analysis panel in the “System Unfeasible” state, with the “Show Min. Unfeasible Set” button highlighted.]

By clicking on it, we let Amped Authenticate automatically test many possible subsets of constraints, looking for a smaller set that is enough to make the system unfeasible. This greatly narrows our search for the fake object! Let’s see it in action:

As you can see, after clicking on the Show Min. Unfeasible Set button, only four constraints remain selected, including the one corresponding to the fake object. We can thus restrict our search to only those.

Notice that, if you go back to the complete set of constraints (by clicking on the Show All button) and then click again on Show Min. Unfeasible Set, you may obtain a different selection! Indeed, there is no single minimum set of unfeasible constraints, and Authenticate randomizes the search to give you more chances of finding a small subset. Wait, don’t go away! We know the word “randomize” is not very welcome in the forensic field! Let us defend our choice with two arguments:

  1. This function does not influence the final decision: it only provides a faster way to reach it.
  2. Although randomized, the behavior will be the same if you restart Amped Authenticate and work with the filter in the same way. So you can reproduce the full workflow, including the automatic minimum unfeasible set selection provided by the software.
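This kind of search can be sketched as a seeded, randomized greedy deletion (hypothetical code, not Amped’s actual algorithm): shuffle the constraints, then repeatedly drop any constraint whose removal keeps the system unfeasible; what remains is an irreducible unfeasible subset, and fixing the seed makes a run reproducible. In the usage example, 1-D intervals stand in for the 2-D wedge constraints:

```python
import random

def min_unfeasible_set(constraints, feasible, seed=0):
    """Randomized greedy search for an irreducible unfeasible subset
    (hypothetical sketch). Shuffle the constraints, then drop every
    constraint whose removal keeps the system unfeasible; a fixed
    seed makes the selection reproducible across runs."""
    rng = random.Random(seed)
    subset = list(constraints)
    rng.shuffle(subset)
    i = 0
    while i < len(subset):
        trial = subset[:i] + subset[i + 1:]
        if not feasible(trial):
            subset = trial  # constraint i was not needed: drop it
        else:
            i += 1          # constraint i is essential: keep it
    return subset

# Toy stand-in for the wedge test: 1-D intervals are feasible when
# they all overlap in at least one point.
def intervals_feasible(ivs):
    return not ivs or max(lo for lo, _ in ivs) <= min(hi for _, hi in ivs)
```

Different seeds may return different, equally valid minimal subsets, which mirrors the behavior described above: the randomization never changes the feasible/unfeasible verdict, only which small witness of unfeasibility you are shown.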

Conclusion

That’s it for today! If you want to download the image we’ve worked on, it’s freely available (as part of a ZIP file) from the webpage presenting the dataset.


 Marco Fontani

Marco Fontani is the Forensics Director at Amped Software, a software company developing image and video forensic solutions for law enforcement agencies worldwide. He earned his MSc in Computer Engineering in 2010 and his Ph.D. in Information Engineering in 2014. His research focused on image watermarking and multimedia forensics. He participated in several research projects funded by the EU and EOARD, and authored/co-authored over 30 journal and conference proceedings papers. He has experience in delivering training to law enforcement and provided expert witness testimony on several forensic cases involving digital images and videos. He is a former member of the IEEE Information Forensics and Security Technical Committee, and he actively contributed to the development of ENFSI’s Best Practice Manual for Image Authentication.
