Be Conscious of the Algorithms You Use in a Forensic Setting

Our customers often ask us for specific functions and filters to be included in our software. Paying attention to user requests and managing our development priorities according to them is probably one of the things Amped Software is best known for.

However, not all requests, even if technically feasible, are a match for the purpose of our solutions. Take for example Amped FIVE: some of the most common use cases are enhancing license plates or faces.

Quite a few customers asked for some functions to perform super resolution of faces from a single image. While this may be technically very interesting, most of the implementations have a fatal flaw that prevents them from being used for forensic applications: they introduce data external to the case.

Yesterday a GitHub project called “srez” caught my attention.

Image super-resolution through deep learning. This project uses deep learning to upscale 16×16 images by a 4x factor. The resulting 64×64 images display sharp features that are plausible based on the dataset that was used to train the neural net.

Here’s a random, non-cherry-picked example of what this network can do. From left to right: the first column is the 16×16 input image, the second is what you would get from standard bicubic interpolation, the third is the output generated by the neural network, and the fourth is the ground truth.
Example output

Look at the amazing quality of the third face in every row, created by the algorithm. It’s awesome. But let’s take one step back: where does the added detail come from? It is synthesized from training data gathered from a large number of faces. And in a real case, those faces will very likely be different from the face you are trying to improve.
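To see why plain interpolation (the second column) is forensically acceptable while learned reconstruction is not, here is a minimal sketch of bilinear upscaling in pure Python (bilinear rather than bicubic for brevity; the principle is the same). Every output pixel is a weighted average of pixels already present in the evidence image:

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image (list of lists of floats) by an
    integer factor using bilinear interpolation.

    Every output pixel is a convex combination of the four nearest input
    pixels, so the result can never contain values outside the range of
    the source image -- no external data enters the picture.
    """
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # Map the output coordinate back onto the source grid.
        sy = y * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        for x in range(out_w):
            sx = x * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out
```

Because each output pixel is a weighted average of input pixels, interpolation can only blur or smooth: it can never invent detail. A neural network, by design, does the opposite, adding detail learned from other people’s faces.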

Look at the faces in the third and fourth columns: even at this small thumbnail size, you will see that they are very different. The woman in the last row even looks like a man in the reconstructed image. Imagine a case where you have to compare the features of the face in the third column with a different picture of a suspect.

But there can be an even worse situation. I’ve stated that using this algorithm is problematic because the faces used to train the network will be different from that of the suspect. So, what if we use pictures of the suspect to perform the training? It is very likely that the algorithm will produce an image similar to the suspect, since it has been optimized to do so. And this will be the case even if the suspect is not involved and the person actually depicted in the image under analysis is someone else. The wrong use of this technology could convict an innocent person.
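The risk is easy to demonstrate with a deliberately simplified, hypothetical example-based super-resolution scheme (a toy stand-in for illustration, not the actual srez network): for each low-resolution input, look up the most similar low-resolution example in the training set and return its high-resolution counterpart. The output is, quite literally, training data:

```python
def toy_example_based_sr(low_res, training_pairs):
    """Toy 'super-resolution' by nearest-neighbour lookup.

    training_pairs: list of (low_res_patch, high_res_patch) tuples.
    Returns the high-res patch whose low-res counterpart is closest
    (in squared error) to the input. The result is copied verbatim
    from the training set, i.e. from data external to the case.
    """
    def sq_err(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, best_high = min(training_pairs, key=lambda p: sq_err(p[0], low_res))
    return best_high


# Hypothetical patches: if the training set is built from the suspect's
# photos, the "enhanced" output will resemble the suspect no matter who
# is actually in the scene.
training = [
    ([10, 10], [11, 12, 9, 10]),      # patch from a suspect photo
    ([200, 200], [198, 201, 202, 199]),
]
print(toy_example_based_sr([15, 12], training))  # returns the suspect patch
```

A real neural network interpolates between its training examples rather than copying one outright, but the failure mode is the same: the output is pulled toward whatever the network was trained on.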

Coming back to the feature requests: when I explain my point of view to users, they often reply that these kinds of techniques could be used just for the investigative phase, and not as actual evidence. But, in my opinion, this idea has two problems:

  • It assumes that the user is careful to use the proper tool depending on the context. But who knows how a picture will be used once it is out in the wild? If we include these kinds of filters in Amped FIVE, can we be sure users will apply the proper filters according to the case? Sometimes a user may not even know the use case: they may be asked to enhance a picture without knowing for which purpose.
  • Even if you know that the enhancement is needed only for investigative purposes, what do you do if this then turns into key “evidence”?

A few last words for the author of the “srez” project: I didn’t go in depth, but from what I see it’s very interesting and the results are phenomenal. The problem is that it was not conceived for, nor is it applicable to, the forensic field. The same is true for many other algorithms published on the Internet or available in some products.

Just because something looks good, it does not mean it is good.