Category Archives: Forensic Workflow

CCTV – The Beginner's Guide


In this post on CCTV Acquisition, we will lay the groundwork for the series by breaking CCTV down into manageable chunks. Understanding CCTV could be a series in itself. Nonetheless, we feel it's essential to understand how CCTV works before getting into the weeds of recovering it.

CCTV stands for Closed Circuit TeleVision, a name that dates from the time before computer networks and video streaming. The 'Closed Circuit' refers to the fact that, although the video signal could be viewed and recorded, there was no way to transmit or share it outside of the cabled infrastructure.

Continue reading

Introduction to CCTV Acquisition


Welcome to this new Amped Software blog series on CCTV Acquisition. In this fortnightly series, we hope to break down some misconceptions and challenges, but also provide some solutions for the initial recovery of video evidence from surveillance systems. 

Make sure to stay up-to-date with our blog by checking in regularly, as we will be posting a new article every two weeks. You won’t want to miss out on any of the content!

Continue reading

Dealing with Deepfakes

“SEEING IS BELIEVING.” Or, rather, that’s what we used to say. Since the beginning of time, seeing a fact or a piece of news depicted in an image was far more compelling than reading it, let alone hearing about it from someone else. This power of visual content probably stemmed from its immediacy: looking at a picture takes less effort and training than reading text, or even listening to words. Then, the advent of photography brought an additional flavor of indisputable objectivity. Thanks to photography, pictures could be used as a reliable record of events. Looking closer, however, it turns out that photographs have been faked since shortly after their invention. One of the most famous historical hoaxes, dating back to the late 1860s, is Abraham Lincoln’s head spliced over John Calhoun’s body, and cleverly so. (The full hoax description is available on hoaxes.org.)


Politics was indeed an important driver of image manipulation throughout the years, as witnessed by the many fake pictures created to serve the leaders of democracies and tyrannies alike. We have photos of the Italian dictator Benito Mussolini proudly sitting on a horse that was held by an ostler (the latter promptly erased), photos of Joseph Stalin from which some subjects were removed after they fell into disgrace, and so on. All these pictures were “fake”, in the sense that they were not an accurate representation of what they purported to show.

Of course, creating hoaxes with good, old-fashioned analog pictures was not something everyone could do. It took proper tools, training, and lots of time. Then, digital photography arrived, which was soon followed by digital image manipulation software and, a few years later, digital image sharing platforms. With advanced image editing solutions available at affordable prices—or even for free—there was a boom in the possibilities of creating fake pictures. Of course, you still needed suitable training and time to obtain professional results, but this was nothing compared to working with film.

In the last couple of years, we have witnessed a revolution in the manipulation of images: “deepfakes”. A deepfake is a fake image or video generated with the aid of a deep artificial neural network. It may involve replacing a person’s face with someone else’s (so-called “face-swaps”), changing what a subject is saying (“lip-sync” fakes), or even driving a subject’s words and head movements like a puppet or guided actor (“re-enactment”). But how is this achieved? What are these “deep artificial neural networks”? How can we fight deepfakes?

In this article, published in Evidence Technology Magazine, we’ll try to address these questions and bring some order to all of this.

Handle With Care: Edit Project Files With a Text Editor

Dear friends, welcome to this week’s tip! Today we’ll talk about something that is more of a philosophy than a feature, and as such, you’ll find it reflected in all Amped products. We’re talking about the way Amped solutions deal with export formats and project files. We’ll show you how compatible our export formats are and how readable (and… editable!) our project files are, so… keep reading!
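To give a flavor of what a plain-text project file makes possible, here is a minimal sketch of batch-editing one with a script. The element and attribute names below are purely illustrative, not the actual Amped project schema; the point is simply that a readable text format can be fixed up with ordinary tools, for example to repoint a source video that moved to a new drive.

```python
import xml.etree.ElementTree as ET

def repoint_inputs(project_xml, old_prefix, new_prefix):
    """Rewrite the path prefix of every <input> element in a
    hypothetical XML project file (illustrative schema only)."""
    root = ET.fromstring(project_xml)
    for node in root.iter("input"):
        node.set("file", node.get("file").replace(old_prefix, new_prefix))
    return ET.tostring(root, encoding="unicode")

# Example: a case folder was moved from D:/old/ to E:/cases/
project = '<project><input file="D:/old/video.avi"/></project>'
fixed = repoint_inputs(project, "D:/old/", "E:/cases/")
```

The same idea applies with a plain text editor and search-and-replace; scripting it just scales better when a project references many files.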

Continue reading

Is PRNU Camera Identification Still Reliable? Tests on Modern Smartphones Show We May Need a New Strategy!

Dear Amped friends, today we’re sharing with you something big. If you’ve been following us, then you know that Amped invests lots of resources into research and testing. We also join forces with several universities to be on the cutting edge of image and video forensics. During one of these research ventures with the University of Florence (Italy), we discovered something important regarding PRNU-based source camera identification.

PRNU-based source camera identification has been, for years, considered one of the most reliable image forensics technologies: given a suitable number of images from a camera, you can use them to estimate the sensor’s characteristic noise (we call it Camera Reference Pattern, CRP). Then, you can compare the CRP against a questioned image to understand whether it was captured by that specific exemplar. You can read more about PRNU here.
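To make the workflow above concrete, here is a minimal sketch of CRP estimation and comparison, assuming aligned grayscale images as NumPy arrays. The function names are ours, and a crude box blur stands in for the wavelet-based denoiser used in the PRNU literature; a real implementation would be considerably more careful.

```python
import numpy as np

def denoise(img):
    # Crude 3x3 box blur (wrap-around) as a stand-in for a wavelet denoiser.
    acc = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def noise_residual(img):
    # The residual = image minus its denoised version; it retains the
    # sensor's characteristic high-frequency noise.
    img = np.asarray(img, dtype=np.float64)
    return img - denoise(img)

def estimate_crp(images):
    # Camera Reference Pattern: average many residuals from the same
    # exemplar so that scene content cancels out.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation between a questioned residual and a CRP.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

In practice, a questioned image whose residual correlates strongly with a camera's CRP supports the hypothesis that it was captured by that exemplar.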

Since its inception, the real strength of PRNU-based source camera identification has been that false positives were extremely rare, as shown in widely acknowledged scientific papers. The uniqueness of the sensor fingerprint was so strong that researchers were even able to cluster images by their source device, comparing the residual noise extracted from single images in a one-vs-one fashion. We tested this one-vs-one approach on the VISION dataset, which is composed of images captured with 35 portable devices (released roughly between 2010 and 2015), and it worked.

Take a look at the boxplot below. On the X-axis you have the 35 devices in the VISION dataset. For each device, the vertical green box shows the PCE values obtained by comparing pairs of images captured by that device (the thick box covers values from the 25th to the 75th percentile, the circled black dot is the median, and isolated circles are outliers). Red boxes and circles represent the PCE values obtained by comparing images from the device against images from other devices. As expected, for most devices the green boxes lie well above the dashed horizontal line at 60, which is the PCE threshold commonly used to claim a positive match. Most noticeably, almost no red circles rise above the PCE threshold: a few do appear sporadically, but they stay below 100, so we can call them “weak false positives”.
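The PCE score used above can be sketched as follows. This is a simplified version, assuming the residual and reference pattern are already spatially aligned, so the expected correlation peak sits at zero shift; published implementations add refinements we omit here.

```python
import numpy as np

def pce(residual, crp):
    """Peak-to-Correlation Energy: the squared zero-shift correlation peak
    over the mean squared value of the rest of the correlation surface."""
    r = residual - residual.mean()
    c = crp - crp.mean()
    # Circular cross-correlation via FFT.
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(c))))
    peak = xcorr[0, 0]  # zero shift, since the inputs are aligned
    # Exclude an 11x11 wrapped neighborhood around the peak from the energy.
    mask = np.ones_like(xcorr, dtype=bool)
    for dy in range(-5, 6):
        for dx in range(-5, 6):
            mask[dy, dx] = False
    energy = np.mean(xcorr[mask] ** 2)
    return float(peak ** 2 / energy)
```

A matching residual produces a sharp peak and a large PCE, while an unrelated one yields a flat correlation surface and a PCE near zero; the threshold of 60 mentioned above separates the two regimes.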

But with all the computations that happen inside modern devices, is PRNU still equally reliable? To answer this question, we downloaded thousands of images from the web, filtering them so as to keep only pictures captured with recent (2019+) smartphones. We also filtered out images with traces of editing software in their metadata, and we applied several heuristic rules to exclude images that did not seem to be camera originals. For some devices, we also collected images at two of the default resolutions. We then grouped images by uploading user, assuming that different users take pictures with different exemplars and that a single user owns only one exemplar. Now, take a look at what happened when we tested Samsung smartphones.
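A metadata filter of the kind described above can be sketched like this, using Pillow to read the EXIF Software tag. Both the function name and the list of editor keywords are our own illustrative choices, not the actual heuristics used in the study.

```python
from PIL import Image

# Hypothetical keyword list; a real filter would be far more extensive.
EDITOR_HINTS = ("photoshop", "gimp", "lightroom", "snapseed")

def looks_camera_original(path):
    """Return False if the EXIF Software tag mentions a known editor."""
    with Image.open(path) as im:
        exif = im.getexif()
        software = str(exif.get(0x0131, "")).lower()  # 0x0131 = Software
    return not any(hint in software for hint in EDITOR_HINTS)
```

Note that absent or clean metadata does not prove an image is a camera original, which is why additional heuristic rules were applied on top of this kind of check.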

Continue reading

Yes, It Makes the Difference! A Practical Guide to Why You Should Keep Your Amped Products Up To Date

Dear Amped friends, welcome to this week’s tip! Another release of Amped FIVE has just been published, and we want to take this opportunity for a special Tip Tuesday: with some examples, we’ll show you why you should keep your Amped products updated! We’ll do it in a comparative fashion, showing the best you could achieve with an older version versus what you can get with the latest releases. Keep reading, you’ll have fun!

Continue reading

Try This at Home! Validation Is Important: Use These Datasets to Test Amped Solutions

Dear friends, welcome to this Tip Tuesday! It’s a rather unusual one, since we won’t be showing a trick for a specific Amped solution as usual. But we’ll still provide you with a very good tip: we’ll guide you through some of the freely available datasets you can find online, which you can use to test and validate Amped FIVE, Amped Authenticate, Amped Replay, and Amped DVRConv. Yes, you’ve read correctly! There is lots of data out there that you can use to run your own experiments and increase your confidence in the reliability of our software. Keep reading to find out.

Continue reading

Getting the Result

As a Certified Forensic Video Analyst, one of the hardest calls is stating that nothing can be done. I cannot recover that face, that logo, or that license plate.

I have written many articles, and spoken at conferences, about the challenges with CCTV video evidence, so getting a result from poor footage can be immensely satisfying.

So, what is required to get the result?

The planets of Evidence, Tool, and Competency all need to be aligned.

Continue reading