Category Archives: Forensic Workflow

Handle With Care: Edit Project Files With a Text Editor

Dear friends, welcome to this week’s tip! Today we’ll talk about something that is more of a philosophy than a feature, and as such, you’ll find it reflected in all Amped products. We’re talking about the way Amped solutions deal with export formats and project files. We’ll show you how compatible our export formats are and how readable (and… editable!) our project files are, so… keep reading!

Continue reading

Is PRNU Camera Identification Still Reliable? Tests on Modern Smartphones Show We May Need a New Strategy!

Dear Amped friends, today we’re sharing something big with you. If you’ve been following us, then you know that Amped invests lots of resources in research and testing. We also join forces with several universities to stay on the cutting edge of image and video forensics. During one of these research ventures with the University of Florence (Italy), we discovered something important regarding PRNU-based source camera identification.

PRNU-based source camera identification has been, for years, considered one of the most reliable image forensics technologies: given a suitable number of images from a camera, you can use them to estimate the sensor’s characteristic noise (we call it Camera Reference Pattern, CRP). Then, you can compare the CRP against a questioned image to understand whether it was captured by that specific exemplar. You can read more about PRNU here.
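To make the idea concrete, here is a minimal sketch of the two estimation steps behind this approach, assuming grayscale images as NumPy arrays and using SciPy’s Wiener filter as a stand-in for the wavelet denoiser found in the PRNU literature. It is an illustration of the technique, not Amped Authenticate’s implementation.

```python
import numpy as np
from scipy.signal import wiener

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Residual W = I - denoise(I); a 3x3 Wiener filter stands in
    for the wavelet denoiser used in the PRNU literature."""
    img = img.astype(np.float64)
    return img - wiener(img, (3, 3))

def estimate_crp(images: list[np.ndarray]) -> np.ndarray:
    """Maximum-likelihood CRP estimate, K = sum(W_i * I_i) / sum(I_i^2),
    accumulated over a set of images from the same camera."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        num += noise_residual(img) * img
        den += img ** 2
    return num / np.maximum(den, 1e-12)
```

The noise residual of a questioned image is then compared against the CRP (typically via PCE, sketched further below) to decide whether the match is strong enough.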

From its beginnings, the real strength of PRNU-based source camera identification was that false positives were extremely rare, as shown in widely acknowledged scientific papers. The uniqueness of the sensor fingerprint was so strong that researchers were even able to cluster images by source device, comparing the residual noise extracted from single images in a one-vs-one fashion. We tested this one-vs-one approach on the VISION dataset, which is composed of images captured with 35 portable devices (released roughly between 2010 and 2015), and it worked.

Take a look at the boxplot below. On the X-axis you have the 35 devices in the VISION dataset (click here to see the list). For each device, the vertical green box shows the PCE values obtained comparing pairs of images captured by the device itself (the thick box covers values from the 25th to the 75th percentile, the circled black dot is the median value, and isolated circles are “outlier” values). Red boxes and circles represent the PCE values obtained comparing images of the device against images of other devices. As expected, for most devices the green boxes lie well above the dashed horizontal line at 60, which is the PCE threshold commonly used to claim a positive match. Most noticeably, no red circles rise far above the PCE threshold: a few sporadically exceed it, but they stay below 100, so we can call these “weak false positives”.
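For reference, the PCE score used throughout these plots can be sketched as follows: compute the cross-correlation between an image’s noise residual and the CRP, then divide the energy of the correlation peak by the average energy of the correlation plane away from the peak. This is a minimal sketch under simple assumptions (equally sized grayscale arrays, circular correlation via the FFT, an 11x11 exclusion window), not the exact code used in our tests.

```python
import numpy as np

def pce(residual: np.ndarray, crp: np.ndarray, excl: int = 5) -> float:
    """Peak-to-Correlation Energy between a noise residual and a CRP."""
    a = residual - residual.mean()
    b = crp - crp.mean()
    # Circular cross-correlation computed in the frequency domain.
    xc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # Mask out a (2*excl+1)^2 neighborhood around the peak (with
    # wraparound) before measuring the background correlation energy.
    rows = (np.arange(xc.shape[0]) - peak[0]) % xc.shape[0]
    cols = (np.arange(xc.shape[1]) - peak[1]) % xc.shape[1]
    mask = np.ones(xc.shape, dtype=bool)
    mask[np.ix_((rows <= excl) | (rows >= xc.shape[0] - excl),
                (cols <= excl) | (cols >= xc.shape[1] - excl))] = False
    return float(xc[peak] ** 2 / np.mean(xc[mask] ** 2))
```

A PCE above the usual threshold of 60 is then read as evidence that the questioned image and the CRP share the same sensor.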

But with all the computations that happen inside modern devices, is PRNU still equally reliable? To answer this question, we downloaded thousands of images from the web, filtering them to keep only pictures captured with recent (2019+) smartphones. We also filtered out images with traces of editing software in their metadata, and we applied several heuristic rules to exclude images that did not seem to be camera originals. For some devices, we also collected images at two of the default resolutions. We then grouped images by uploading user, assuming that different users take pictures with different exemplars and that a single user owns only one exemplar; a toy sketch of this kind of metadata filtering appears below. Then, take a look at what happened when we tested Samsung smartphones.
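As an illustration of what such metadata filtering can look like, here is a minimal sketch using Pillow; the list of software markers is a hypothetical example, far simpler than the heuristics actually applied in the study.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical markers of editing software; the heuristics used in
# the study were more extensive than this illustrative list.
EDITING_MARKERS = ("photoshop", "gimp", "lightroom", "snapseed")

def looks_camera_original(path: str) -> bool:
    """Reject images whose EXIF Software tag hints at editing."""
    exif = Image.open(path).getexif()
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    software = str(tags.get("Software", "")).lower()
    return not any(marker in software for marker in EDITING_MARKERS)
```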

Continue reading

Yes, It Makes the Difference! A Practical Guide to Why You Should Keep Your Amped Products Up To Date

Dear Amped friends, welcome to this week’s tip! Another release of Amped FIVE has just been published, and we want to take this opportunity to make a special Tip Tuesday: with some examples, we’ll show you why you should keep your Amped products updated! We’ll do it in a comparative fashion, showing the best you could achieve with an older version versus what you can get with the latest releases. Keep reading, you’ll have fun!

Continue reading

Try This at Home! Validation Is Important: Use These Datasets to Test Amped Solutions

Dear friends, welcome to this Tip Tuesday! It’s a rather anomalous one, indeed, since we won’t be showing a trick for a specific Amped solution as usual. But we’ll still provide you with a very good tip: we’ll guide you through some of the freely available datasets you can find online, which you can use to test and validate Amped FIVE, Amped Authenticate, Amped Replay, and Amped DVRConv. Yes, you’ve read correctly! There is lots of data out there that you can use to run your own experiments and increase your confidence in our software’s reliability. Keep reading to find out.

Continue reading

Getting the Result

As a Certified Forensic Video Analyst, one of the hardest calls I have to make is stating that nothing can be done: I cannot recover that face, that logo, or that license plate.

I have written many articles, and spoken at conferences, about the challenges with CCTV video evidence, so getting a result from poor footage can be immensely satisfying.

So, what is required, then, to get the result?

The planets of Evidence, Tool, and Competency all need to be aligned.

Continue reading

Working Scientifically?

On Tuesday, May 22, I will be in Providence (RI, USA) at the Annual IACP Technology Conference to present a lecture. The topic, “Proprietary Video Files: The Science of Processing the Digital Crime Scene”, is rather timely. Many years ago, the US Federal Government responded to the NAS Report by creating the Organization of Scientific Area Committees for Forensic Science (OSAC). I happen to be a founding member of that group and currently serve as the Video Task Group chairperson within the Video / Imaging Technology and Analysis Subcommittee (VITAL). If one were to attempt to distill the reason for the creation of the OSAC and its ongoing mission, it would be this: we were horrible at science; let’s fix that.

Since the founding of the OSAC, each Subcommittee has been busy collecting guidelines and best-practices documents, refining them, and moving them to a “standards publishing body.” For Forensic Multimedia Analysis, that standards publishing body is ASTM. The difference between a guideline or best practice and a standard is that the former tends towards generic helpful hints whilst the latter is a set of specific and enforceable must-dos. In an accredited laboratory, if there is a standard practice for your discipline, you must follow it. In your testimonial experience, you may be asked about the existence of standards and whether your work conforms to them. As an example, section 4 of ASTM E2825-12 notes the requirement that the report of your work should act as a sort of recipe, such that another analyst can reproduce your work. Whether used as bench notes or included within your formal report, the reporting in Amped FIVE fully complies with this guidance. There is a standard out there, and we follow it.

Continue reading

Cowboys versus Bureaucrats: Attitude and Tools

There were a couple of interesting discussions this week that prompted me to write this blog post. One relates to the scientific methods used during the analysis of images and videos; the other relates to the tools used.

A pretty interesting and detailed conversation happened on an industry-specific mailing list, where a few experts debated the scientific and forensic acceptability of different methodologies. The discussion began with the reliability of speed determination from CCTV video but then evolved into something more general.

There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.

What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for proper forensic procedure, the reliability of his methods, or proper error estimation. Typical things the cowboy does:

  • To convert a proprietary video, he just does a screen capture with the player maximized on the screen, without worrying about missing or duplicated frames.
  • Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
  • He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open-source tools, free tools, commercial tools, plugins, more or less legitimate stuff, maybe a Matlab or Python script if he has the technical knowledge.
  • He will use whatever result “looks good” without questioning its validity or reliability.
  • If asked to document and repeat his work in detail, he’ll be in deep trouble.
  • If asked the reason for and validity of choosing a specific algorithm or procedure, he will say, “I’ve always done it like this, and nobody ever complained”.
  • When asked to improve a license plate, he will spell out the digits even if they are barely recognizable on a single P-frame and are probably just the result of compression artifacts amplified by post-processing.
  • When asked to identify a person, he will be able to do so with absolute certainty, even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
  • When sending results around to colleagues, he just pastes processed snapshots into Word documents.
  • When asked to authenticate an image, he just checks whether the Camera Make and Model are present in the metadata.

Continue reading

To seize or to retrieve: that is the question

A crime occurs and is “witnessed” by a digital CCTV system. The files that your investigation wants/needs are in the system’s recording device (DVR). What do you do to get them? Do you seize the entire DVR as evidence (“bag and tag”)? Do you try to access the recorder through its user interface and download/export/save the files to USB stick/drive or other removable media?

Answer: it depends.

There are times when you’d want to seize the DVR. Perhaps 5% of cases will present a situation where having the DVR in the lab is necessary:

  • Arsons/fires can turn a DVR into a bunch of melted-down parts. You’re obviously not going to power up a melted DVR.
  • An analysis that tests how the DVR performs and creates files. For example, does the frame timing represent the actual elapsed time, or simply how the DVR fit that time into its container? Such tests of reliability will require access to the DVR throughout the legal process.
  • Content analysis questions where there’s a difference of opinion between object and artifact. For example, is it a white sticker on the back of a car or an artifact of compression (a random bit of noise)?

If you’re taking a DVR from a location, you can follow the guidance of the computer forensics world on handling the DVR (which is a computer) and properly removing it from the scene.

Continue reading

What monitor to use?

It’s a common question during training: “What monitor to use?”

One of the many reasons why people start using software like Amped FIVE is that it installs and runs on any modern Windows PC. There is no need to have huge amounts of hardware or specific configurations. A good, stable setup will work perfectly well.

One of the key purchasing decisions, though, when updating or designing a new workstation, is the monitor. Some of you may remember that I briefly mentioned monitors in last year’s “Advent Calendar: Useful Tips and Tricks in Amped FIVE” blog post.

Continue reading

Learning & Development

I’m at Schiphol again!

For those unaware, this is the airport in Amsterdam, The Netherlands.

I’m often here as I use this airport as a layover for many international flights when I can’t get one from my local airport in the UK.

This time, though, I have stayed here in the Netherlands, delivering more Amped FIVE training.

It’s an easy airport to find a quiet spot to type!

I haven’t been able to blog as much as I would like over the past few months as I have been running many different training sessions and workshops. During these, I have noticed an emerging trend but never made the connection until this week.

To lay the foundations for this subject, let’s look at how a large law enforcement agency, or a country made up of smaller agencies, is commonly organised. I know there are many, many configurations, but you should get the picture!

First, we have a Regional Police Force.

Continue reading