We work in the field of forensic video analysis, which is generally understood as the analysis of images and of their context in a legal setting. For this reason, our customers often ask us if our products are valid for court use and whether they have been validated and certified. We have written this post as an answer to the most common questions on this topic.
You can also download this as a PDF document here.
What are the scientific foundations of Amped Software products?
All the processes implemented in our software follow the basic principles of the scientific method:
- Accuracy (Reliability): our tools and training program help users avoid processing errors caused by the use of an inappropriate tool or workflow, and help mitigate the impact of human factors and bias.
- Repeatability: the same process, executed by the same user at a different time, must lead to the same result. The project format in Amped FIVE, for example, does not save any image data: every time a project is reopened, all the processing is performed again starting from the original data. If a project file is lost, or as part of a validation or other test scenario, the same user can repeat the steps and settings, guided by the tool’s report, and achieve the same results.
- Reproducibility: another user with the proper competency should be able to reproduce the same results. Amped FIVE generates a complete report detailing all the steps of the processing, the settings and parameters applied, a description of the algorithms employed, and the scientific references for those algorithms (when applicable). In this way, another user, with a different tool set or by implementing the same algorithms, should be able to reproduce the same results. Given the huge number of implementation details and possible differences, the expectation is not a bit-by-bit copy of the results, but an image of similar informative content.
Additionally, we apply strict due diligence on the applicability of algorithms to the forensic environment. Not every algorithm is properly applicable in a forensic science setting. We avoid algorithms with a random component, because they would not be repeatable and reproducible (when randomness is unavoidable, we set a fixed seed for the random number generation), and we cannot use algorithms which “add” external data to the original, for example improving the quality of a face with information taken from an average face. All information must be derived from the actual evidence file.
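The fixed-seed point can be illustrated with a hypothetical sketch (the function below is purely illustrative and is not Amped FIVE’s actual API): seeding the random number generator makes an otherwise stochastic step produce identical output on every run, restoring repeatability.

```python
import random

def stochastic_denoise(pixels, seed=42):
    """Hypothetical processing step with a random component.
    Fixing the seed makes the output identical on every run."""
    rng = random.Random(seed)  # fixed seed -> repeatable and reproducible
    # The perturbation below is fully deterministic given the seed
    return [p + rng.uniform(-0.5, 0.5) for p in pixels]

run1 = stochastic_denoise([10, 20, 30])
run2 = stochastic_denoise([10, 20, 30])
assert run1 == run2  # same seed, same result, on any run, at any time
```

Without the fixed seed, two runs of the same project would differ, and neither repeatability nor reproducibility could be claimed.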
We employ algorithms which have been validated by the scientific community through peer review and published in university textbooks, scientific journals, or conference papers. If, for some specific task, no good enough algorithm is available, or we need to adapt an existing one, we describe the algorithm and attempt to publish it in a scientific journal. Continue reading
One of the things that fascinates me the most in forensic video analysis is the relation between the objective digital data and the subjective human interpretation involved in any investigation. Psychological biases and the fallacies of human perception, easily demonstrated with any of the popular optical illusions, are just some of the factors which must be taken into account during an investigation.
But this time I want to look at things from a higher level and talk about the usefulness of video as evidence and our perception of it. Chances are you have already seen the very interesting article: “The Value of CCTV Surveillance Cameras as an Investigative Tool: An Empirical Analysis” (link).
The abstract provides some impressive numbers: “This study sought to establish how often CCTV provides useful evidence and how this is affected by circumstances, analysing 251,195 crimes recorded by British Transport Police that occurred on the British railway network between 2011 and 2015. CCTV was available to investigators in 45% of cases and judged to be useful in 29% (65% of cases in which it was available).”
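The 65% figure follows from the other two numbers: the fraction of cases where CCTV was useful, divided by the fraction where it was available. Recomputing from the (already rounded) percentages gives roughly 64%, consistent with the paper’s 65%:

```python
available = 0.45  # fraction of all cases with CCTV available to investigators
useful = 0.29     # fraction of all cases where CCTV was judged useful

# Conditional fraction: useful given that footage was available
useful_when_available = useful / available
assert abs(useful_when_available - 0.65) < 0.01  # ~64%, matching the paper's 65% after rounding
```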
For reference, this is the decision workflow used in the classification (image from the above paper).
This really made me feel good. It looks like what we are doing here at Amped Software is having an impact on society, and more than we expected. I think most people in our community would be surprised by the numbers. At Amped, we see hundreds of cases every year, and for more than half of the images and videos that we receive, we just say that they are useless. Continue reading
In the last few days, there’s been a lot of noise about the latest impressive research by Google. This is a selection of articles with bombastic titles:
The actual research article by Google is available here.
First of all, let me say that, technically, the results are amazing. But this system is not simply an image enhancement or restoration tool: it is creating new images based on a best guess, which may look similar to, or completely different from, the actual data originally captured. Continue reading
I assume most of the readers of this blog are video / photo / gadget / phone / camera geeks. I am sure you didn’t miss the reviews of the latest Apple iPhone 7 Plus and Google Pixel phones. They have a lot in common, but there is one major aspect that is interesting for our applications: things are slowly moving from photography to computational photography. We are no longer just capturing light coming from optics and applying some minor processing to the pixel values to make the picture more pleasant to the viewer.
Phones must be slim and light, and yet we still expect near-DSLR quality. So computational photography comes into play. The iPhone 7 Plus, for example, uses two different cameras to estimate depth and then simulates via software the shallow depth-of-field “bokeh” effect you would normally get from a bulky professional camera with fast optics at a wide aperture.
On the other hand, when you hit the button on the Pixel phone, it captures a burst of pictures and then decides what to keep from each one in order to produce the final result.
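The burst-capture idea can be sketched in a few lines. This is a deliberately naive illustration (plain frame averaging to reduce random noise), not the Pixel’s actual pipeline, which also aligns frames and weights them in far more sophisticated ways:

```python
def merge_burst(frames):
    """Naive computational-photography merge: average a burst of frames
    pixel by pixel so that random noise cancels out. Real pipelines
    also align frames and weight them by sharpness and exposure."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]

# Three noisy captures of the same 1x3 scene; averaging cancels the noise
burst = [[[100, 150, 200]], [[102, 148, 201]], [[98, 152, 199]]]
print(merge_burst(burst))  # [[100.0, 150.0, 200.0]]
```

Even in this toy form, the point for forensics is clear: the saved picture is a computed result, not a single captured exposure.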
This challenges the concepts of originality and authenticity. The light captured by the camera is no longer the output of the photography process, but just the first step of a more complex process based on a multitude of factors. There is little doubt that this is just the beginning of a trend which will explode in the next few years. Continue reading
We receive quite a few phone calls and e-mails from well-meaning customers wanting us to “crack” secure in-car or body-worn camera video files. They get frustrated because our conversion tools in Amped FIVE or Amped DVRConv don’t “crack” the files for them. The simplest explanation is that our tools aren’t designed to defeat security.
Our customers often ask us for specific functions and filters to be included in our software. Paying attention to user requests and managing our development priorities according to them is probably one of the things Amped Software is best known for.
However, not all requests, even if technically feasible, are a match for the purpose of our solutions. Take for example Amped FIVE: some of the most common use cases are enhancing license plates or faces.
Quite a few customers have asked for functions to perform super resolution of faces from a single image. While this may be technically very interesting, most implementations have a fatal flaw that prevents them from being used for forensic applications: they introduce data external to the case.
Yesterday a GitHub project called “srez” caught my attention.
From the project description: “Image super-resolution through deep learning. This project uses deep learning to upscale 16×16 images by a 4x factor. The resulting 64×64 images display sharp features that are plausible based on the dataset that was used to train the neural net.”
Here’s a random, non-cherry-picked example of what this network can do. From left to right: the first column is the 16×16 input image, the second is what you would get from standard bicubic interpolation, the third is the output generated by the neural net, and the fourth is the ground truth.
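The bicubic column is worth dwelling on: any classical interpolation only recombines pixels already present in the input, so it cannot invent detail. A minimal nearest-neighbor 4x upscale (the simplest such method, written in pure Python for illustration) makes this obvious, since every output pixel is a copy of an input pixel:

```python
def upscale_nearest(image, factor=4):
    """Nearest-neighbor upscaling: each output pixel copies the closest
    input pixel. Like bicubic, it uses only data present in the input,
    so no new detail can appear -- unlike the neural net, which adds
    plausible detail learned from its training set."""
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

tiny = [[0, 255], [255, 0]]            # a 2x2 "image"
big = upscale_nearest(tiny, factor=4)  # 8x8: each pixel becomes a 4x4 block
assert len(big) == 8 and len(big[0]) == 8
assert big[0][0] == 0 and big[0][7] == 255
```

This is exactly why the neural net’s output looks so much sharper than the bicubic baseline, and also why that extra sharpness is a statistical guess rather than recovered evidence.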
After the attacks in Paris, Brussels (and, unfortunately, many other places), three days ago there was another major attack at Istanbul Airport. While its origin is yet to be officially confirmed, strong hints again point to the ISIS terror strategy. The number of victims currently stands at more than 40 and growing, with more than 160 people injured. This is, again, a very sad story, and our prayers are with the victims, the wounded, and their families.
As usual, in these major events, it is interesting to analyze the different audio and video sources and their use. Continue reading
We decided to post this in light of a recent case which showed the complexities of image authentication and, more generally, how to work properly according to the requirements of US courts.
Without getting too political, folks in the US are used to getting citable case law from the coasts, while the heartland and the South skate by relatively unnoticed. Brady v. Maryland, Melendez-Diaz v. Massachusetts, and Beckley v. California illustrate this point. A recent case from Louisiana changed that trend. Last month, the Louisiana appellate court, 4th Circuit, issued a written opinion in a criminal case disallowing a key piece of social media evidence due to a lack of authentication. (State of Louisiana v. Demontre Smith, La. Court of Appeals, April 20, 2016) Continue reading
Things are getting awful. Last Friday’s events left many people dead and injured in Paris. Our prayers are with the families of the victims and the people of France. The situation in Paris is sorrowful. Our colleague Matthew is traveling to Paris today for Milipol. Despite the situation, the organizers have confirmed that the event will take place, with additional security measures.
Being at the heart of Europe, all the media coverage these days is about Paris, but we must not forget that this is, sadly, just one of the very unfortunate events that are happening. The same day, similar attacks killed 26 people and injured more than 60 in Baghdad, and the day before, at least 43 were killed and more than 200 injured in Beirut. And let’s not forget the massacre in Nigeria, which killed 2,000 people at the beginning of the year.
A problem of credibility
Unfortunately, in the current times it is becoming easier to become a journalist and more difficult to become a good one. A lot of information in the news has turned out to be wrong, and anti-spoof websites are uncovering the truth. But who’s checking the correctness of these sources (and of the spoof-hunters themselves)? Very often, at the center of the discussion there are pictures or videos. Continue reading
In our previous posts, we have mainly focused on technical issues, but to do this topic justice, we need to address the social and ethical issues as well.
Trying to predict how the use of BWC technology will impact society and ethics in general is very difficult, but we can ask a few questions to stimulate thought on the subject:
- When should these cameras be deployed or how invasive should they be permitted to be?
- Can an individual request that the officer turn off his camera in the individual’s own home, or should the officer be allowed to overrule that choice if he feels it could provide a safety benefit for one or both parties?
- Would that individual be given access to that video? And if so, how will that data sharing take place and how much would it cost?
- Will the police use of this technology set a social precedent and will we see this technology spread as a result?
- How will access to all this data change the way we feel about privacy in general?