Dear friends, ready for this week’s Video Evidence Pitfall? Today we’re talking about infrared (IR) images, how they can be misleading, and some IR-related issues that may affect even “normal” videos. Keep reading!
Issue: you can’t trust any color in infrared images
How many times have you seen this kind of “washed out” image?
These are typical infrared images and, as we will see, you should not use them to get any color information. To convince you, let me show you a couple of pictures. On the left, you have the visible-light version of a shirt; on the right, the IR version of the same shirt in the same scene, under identical conditions.
Dear friends, I’m so glad to introduce this new blog series! Every Tuesday, for several weeks, we’ll explore together some delicate, even dangerous, aspects that you can easily run into when dealing with images and videos during investigations. And of course, we’re not belittling your skills when we write “because you don’t know what you don’t know”! It’s just something that comes from our experience: we talk with investigators on a daily basis, and we’ve noticed that, sometimes, there’s a tendency to treat images and videos as “something everyone knows about”. After all, we have them on our smartphones, we share them on social media, and perhaps we also edit them from time to time with some consumer app or software, often with nice results.
Alas, my friends, it’s not that simple, for many reasons:
Videos you deal with in forensics often come from CCTV surveillance systems. The acquisition and processing lifecycle of such videos is very different from what goes on in a smartphone. Smartphones have never dealt with analog video, while many CCTV systems still work with analog cameras connected to a DVR. And what about compression? One minute of video on my Google Pixel 3a takes up hundreds of megabytes, while it would probably be less than 10 MB in a CCTV system.
Video encoding and playback is a complex topic, and this is especially true when it comes to the proprietary video formats used by most surveillance systems. Those working in the field know that, most of the time, the original video extracted from a DVR just won’t play in standard media players; that is normal. How do we “convert” it to a playable video? There’s a whole world inside, and investigators must at least know that such a world… exists!
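To put the smartphone-vs-CCTV storage gap into perspective, here is a back-of-the-envelope sketch. The bitrates below are illustrative assumptions (a 4K smartphone recording versus a typical single-camera CCTV stream), not measured values:

```python
def megabytes_per_minute(bitrate_mbps):
    """Storage needed for one minute of video at a given bitrate.

    bitrate_mbps: bitrate in megabits per second.
    Returns megabytes per minute (60 seconds, 8 bits per byte).
    """
    return bitrate_mbps * 60 / 8

# Assumed, illustrative bitrates:
phone_mb = megabytes_per_minute(45)  # ~45 Mbit/s smartphone 4K video -> 337.5 MB/min
cctv_mb = megabytes_per_minute(1)    # ~1 Mbit/s CCTV stream -> 7.5 MB/min
```

With these assumed figures, a minute of smartphone footage needs roughly 45 times the storage of the CCTV stream, which is one reason DVR video is compressed so aggressively.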
Remember that a shallow interpretation could steer a whole investigation in the wrong direction. Want an example? Take a look at this infrared picture of a shirt.
Dear friends welcome to this week’s tip! Today we’ll talk about something that is more of a philosophy than a feature, and as such, you’ll find it reflected in all Amped products. We’re talking about the way Amped solutions deal with export formats and project files. We’ll show you how compatible our export formats are and how readable (and… editable!) our project files are, so… keep reading!
Dear Amped friends, today we’re sharing with you something big. If you’ve been following us, then you know that Amped invests lots of resources into research and testing. We also join forces with several universities to be on the cutting edge of image and video forensics. During one of these research ventures with the University of Florence (Italy), we discovered something important regarding PRNU-based source camera identification.
PRNU-based source camera identification has been, for years, considered one of the most reliable image forensics technologies: given a suitable number of images from a camera, you can use them to estimate the sensor’s characteristic noise (we call it Camera Reference Pattern, CRP). Then, you can compare the CRP against a questioned image to understand whether it was captured by that specific exemplar. You can read more about PRNU here.
From its beginnings, the real strength of PRNU-based source camera identification was that false positives were extremely rare, as shown in widely acknowledged scientific papers. The uniqueness of the sensor fingerprint was so strong that researchers were even able to cluster images by source device, comparing the residual noise extracted from single images in a one-vs-one fashion.

We tested this one-vs-one approach on the VISION dataset, which is composed of images captured with 35 portable devices (released roughly between 2010 and 2015), and it worked. Take a look at the boxplot below. On the X-axis you have the 35 different devices in the VISION dataset (click here to see the list). For each device, the vertical green box shows the PCE values obtained by comparing pairs of images captured by the device itself (the thick box covers values from the 25th to the 75th percentile, the circled black dot is the median, and isolated circles are “outlier” values). Red boxes and circles represent the PCE values obtained by comparing images of the device against images of other devices. As expected, for most devices the green boxes lie well above the dashed horizontal line at 60, which is the PCE threshold commonly used to claim a positive match. Most noticeably, no red circles sit far above the PCE threshold: a few appear sporadically, but they all stay below 100, so we can call these “weak false positives”.
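For readers who want to experiment, here is a heavily simplified, hypothetical sketch of the CRP-and-PCE workflow described above. The denoiser is a toy 3x3 mean filter (real implementations use wavelet-based denoising), and the “cameras” are synthetic noise patterns, so the numbers only illustrate the principle:

```python
import numpy as np

def noise_residual(img):
    """Noise residual: image minus a crude 3x3 mean-filtered copy.
    (Real PRNU tools use wavelet-based denoising; this is a toy stand-in.)"""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return img - acc / 9.0

def estimate_crp(images):
    """Camera Reference Pattern: the average of many noise residuals,
    so scene content averages out and the sensor noise remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def pce(crp, img):
    """Peak-to-Correlation Energy: squared correlation peak divided by
    the average off-peak energy (circular cross-correlation via FFT)."""
    res = noise_residual(img)
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(crp) * np.conj(np.fft.fft2(res))))
    peak = xcorr.flat[np.argmax(np.abs(xcorr))]
    off_peak_energy = (np.sum(xcorr ** 2) - peak ** 2) / (xcorr.size - 1)
    return peak ** 2 / off_peak_energy

# Toy demo: two "cameras" are two fixed synthetic sensor noise patterns
rng = np.random.default_rng(0)
cam_a = rng.normal(0, 1, (64, 64))
cam_b = rng.normal(0, 1, (64, 64))
shots_a = [rng.normal(0, 2, (64, 64)) + cam_a for _ in range(20)]
crp_a = estimate_crp(shots_a)

questioned_a = rng.normal(0, 2, (64, 64)) + cam_a  # taken with camera A
questioned_b = rng.normal(0, 2, (64, 64)) + cam_b  # taken with camera B
pce_match = pce(crp_a, questioned_a)
pce_mismatch = pce(crp_a, questioned_b)
```

On this synthetic data, the matching comparison yields a PCE far above the commonly used threshold of 60, while the mismatching comparison stays well below it.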
But with all the computations that happen inside modern devices, is PRNU still equally reliable? To answer this question, we downloaded thousands of images from the web, filtering them so as to keep only pictures captured with recent (2019+) smartphones. We also filtered out images with traces of editing software in their metadata, and we applied several heuristic rules to exclude images that did not appear to be camera originals. For some devices, we also collected images at two of the default resolutions. We then grouped images by uploading user, assuming that different users take pictures with different exemplars and that a single user owns only one exemplar. Now, take a look at what happened when we tested Samsung smartphones.
Dear Amped friends, welcome to this week’s tip! Another release of Amped FIVE has just been published, and we want to take this opportunity to make a special Tip Tuesday: with some examples, we’ll show you why you should keep your Amped products updated! We’ll do it in a comparative fashion, showing the best you could achieve with an older version vs what you can get with the latest releases. Keep reading, you’ll have fun!
Dear friends, welcome to this Tip Tuesday! It’s a rather anomalous one, indeed, since we’ll not show any trick about using a specific Amped solution as usual. But we’ll still provide you with a very good tip: we’ll guide you through some of the freely available datasets you can find online, which you can use to test and validate Amped FIVE, Amped Authenticate, Amped Replay, and Amped DVRConv. Yes, you’ve read correctly! There is lots of data out there that you can use to run your own experiments and increase your confidence in our software’s reliability. Keep reading to find out.
On Tuesday, May 22, I will be in Providence (RI, USA) at the Annual IACP Technology Conference to present a lecture. The topic, “Proprietary Video Files — The Science of Processing the Digital Crime Scene”, is rather timely. Many years ago, the US Federal Government responded to the NAS Report with the creation of the Organization of Scientific Area Committees for Forensic Science (OSAC). I happen to be a founding member of that group and currently serve as the Video Task Group chairperson within the Video / Imaging Technology and Analysis Subcommittee (VITAL). If one were to attempt to distill the reason for the creation of the OSAC and its ongoing mission, it would be this: we were horrible at science; let’s fix that.
Since the founding of the OSAC, each Subcommittee has been busy collecting guidelines and best practices documents, refining them, and moving them to a “standards publishing body.” For Forensic Multimedia Analysis, that standards publishing body is ASTM. The difference between a guideline or best practice and a standard is that the former tends towards generic helpful hints, whilst the latter is a set of specific, enforceable requirements. In an accredited laboratory, if there is a standard practice for your discipline, you must follow it. In your testimonial experience, you may be asked about the existence of standards and whether your work conforms to them. For example, section 4 of ASTM 2825-12 requires that your report act as a sort of recipe, such that another analyst can reproduce your work. Whether used as bench notes or included within your formal report, the reporting in Amped FIVE fully complies with this guidance. There is a standard out there, and we follow it.
There were a couple of interesting discussions this week which prompted me to write this blog post. One is related to the scientific methods used during the analysis of images and videos, the other relates to the tools used.
A pretty interesting and detailed conversation took place on an industry-specific mailing list, where a few experts debated the scientific and forensic acceptability of different methodologies. The discussion began with the reliability of speed determination from CCTV video but then evolved into a more general one.
There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.
What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for proper forensic procedure, the reliability of his methods, or proper error estimation. Typical things the cowboy does:
To convert a proprietary video, he just screen-captures the maximized player, without worrying about missing or duplicated frames.
Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open source, free tools, commercial tools, plugins, more or less legitimate stuff, maybe some Matlab or Python script if he has the technical knowledge.
He will use whatever result “looks good” without questioning its validity or reliability.
If asked to document and repeat his work in detail he’ll be in deep trouble.
If asked the reason and validity of choosing a specific algorithm or procedure, he will say “I’ve always done it like this, and nobody ever complained”.
When asked to enhance a license plate, he will spell out the digits even if they are barely recognizable on a single P-frame and are probably just the result of compression artifacts amplified by post-processing.
When asked to identify a person, he will be able to do so with absolute certainty even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
When sending around results to colleagues he just pastes processed snapshots into Word documents.
When asked to authenticate an image, he just checks if the Camera Make and Model is present in the metadata.
A crime occurs and is “witnessed” by a digital CCTV system. The files that your investigation wants/needs are in the system’s recording device (DVR). What do you do to get them? Do you seize the entire DVR as evidence (“bag and tag”)? Do you try to access the recorder through its user interface and download/export/save the files to USB stick/drive or other removable media?
Answer: it depends.
There are times when you’d want to seize the DVR. Perhaps 5% of cases will present a situation where having the DVR in the lab is necessary:
Arsons and fires can turn a DVR into a bunch of melted-down parts. You’re obviously not going to power up a melted DVR.
An analysis that tests how the DVR performs and creates files. For example, does the frame timing represent the actual elapsed time, or merely how the DVR fits that time into its container? Such tests of reliability will require access to the DVR throughout the legal process.
Content analysis questions where there’s a difference of opinion over object versus artifact. For example, is it a white sticker on the back of a car, or a compression artifact (a random bit of noise)?
If you’re taking a DVR from a location, you can follow the guidance of the computer forensics world on handling the DVR (which is a computer) and properly removing it from the scene.