In my years of working crime scenes in Los Angeles, I would often come across Geovision DVRs. They were usually met with a groan. Geovision’s codecs are problematic to deal with and don’t play nicely on analysts’ PCs.
With Amped FIVE, processing files from Geovision’s systems is easy. Plus, Amped FIVE has the tools needed to correct the problems presented by Geovision’s shortcuts.
Here’s an example of a workflow for handling an AVI file from Geovision, one that utilizes the GAVC codec.
If you have the GAVC codec installed, Amped FIVE will use it to attempt to display the video. You may notice immediately that playback isn’t working right. Not to worry; we’ll fix it. Within FIVE, select File > Convert DVR and set the controls to Raw (Uncompressed). When you click Apply, the file will be quickly converted.
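As an illustration of what such a conversion involves (Amped FIVE handles this for you; the FFmpeg-based approach and the file names below are assumptions for the sketch, not the tool’s internals), a comparable re-encode to uncompressed video could be scripted like this:

```python
import subprocess

def build_raw_convert_cmd(src: str, dst: str) -> list[str]:
    """Build an FFmpeg command that decodes a problematic AVI and
    re-wraps it as uncompressed (rawvideo) in an AVI container.
    Lossless with respect to the decoded frames; file size grows."""
    return [
        "ffmpeg",
        "-i", src,            # input file (e.g. a Geovision AVI)
        "-c:v", "rawvideo",   # store decoded frames uncompressed
        "-an",                # drop audio, if any
        dst,
    ]

cmd = build_raw_convert_cmd("event.avi", "event_raw.avi")
print(" ".join(cmd))
# To actually run it (requires FFmpeg and a decodable input):
# subprocess.run(cmd, check=True)
```

The point of the uncompressed target is that any downstream player or analysis tool can read it without the proprietary codec installed.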
I’ve been on the road a lot lately. By the end of this month, I’ll have spent two weeks with District Attorney’s Offices in New Jersey (US) training users in the many uses of Amped’s flagship product, Amped FIVE. Every user has a slightly different use case and needs. Prosecutors’ Offices are no different.
Field personnel / crime scene technicians / analysts might not be very concerned with trial prep and the creation of demonstratives for court. But DA’s offices are. That being said, there are a few things that every user of Amped FIVE can do – beginning with the end in mind – to make the trial prep job a bit easier.
Hopefully, by now you know that you can rename processing chains in Amped FIVE to aid in your organization.
Right-click on the Chain and select Rename Chain. Then, name it something unique that describes what you’re working with or the question you’re trying to answer in the file. Examples include camera number, vehicle determination, license plate determination, etc.
This is quite helpful. But, did you know that you can also rename the Bookmarks? Additionally, you can add a description to the bookmark. Let’s see what this looks like.
There were a couple of interesting discussions this week which prompted me to write this blog post. One is related to the scientific methods used during the analysis of images and videos, the other relates to the tools used.
There was a pretty interesting and detailed conversation on an industry-specific mailing list, where a few experts debated the scientific and forensic acceptability of different methodologies. This discussion began with the reliability of speed determination from CCTV video but then evolved into a more general discussion.
There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.
What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for the proper forensic procedure, the reliability of his methods and proper error estimation. Typical things the cowboy does:
- To convert a proprietary video, he just does a screen capture maximizing the player on the screen, without being concerned about missing or duplicated frames.
- Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
- He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open source, free tools, commercial tools, plugins, more or less legitimate stuff, maybe some Matlab or Python script if he has the technical knowledge.
- He will use whatever result “looks good” without questioning its validity or reliability.
- If asked to document and repeat his work in detail he’ll be in deep trouble.
- If asked the reason and validity of choosing a specific algorithm or procedure, he will say “I’ve always done it like this, and nobody ever complained”.
- When asked to improve a license plate he will spell out the digits even if they are barely recognizable on a single P frame and probably are just the result of compression artifacts amplified by postprocessing.
- When asked to identify a person, he will be able to do so with absolute certainty even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
- When sending around results to colleagues he just pastes processed snapshots into Word documents.
- When asked to authenticate an image, he just checks if the Camera Make and Model is present in the metadata.
A crime occurs and is “witnessed” by a digital CCTV system. The files that your investigation wants/needs are in the system’s recording device (DVR). What do you do to get them? Do you seize the entire DVR as evidence (“bag and tag”)? Do you try to access the recorder through its user interface and download/export/save the files to USB stick/drive or other removable media?
Answer: it depends.
There are times when you’d want to seize the DVR. Perhaps 5% of cases will present a situation where having the DVR in the lab is necessary:
- Arsons/fires can turn a DVR into a bunch of melted down parts. You’re obviously not going to power up a melted DVR.
- An analysis that tests how the DVR performs and creates files. For example, does the frame timing represent the actual elapsed time or how the DVR fit that time into its container? Such tests of reliability will require access to the DVR throughout the legal process.
- Content analysis questions where there’s a difference of opinion between object and artifact. For example, is it a white sticker on the back of a car or an artifact of compression (a random bit of noise)?
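The frame-timing test mentioned in the list above can be sketched concretely: given the timestamps a DVR recorded for each frame (the values below are made up for illustration), compare the implied duration and frame rate against the elapsed time measured independently at the scene.

```python
def implied_timing(timestamps_s: list[float]) -> tuple[float, float]:
    """Return (duration, average fps) implied by per-frame timestamps.

    If this duration disagrees with the real elapsed time, the DVR's
    clock or container timing is suspect and needs further testing.
    """
    duration = timestamps_s[-1] - timestamps_s[0]
    # n frames span n - 1 inter-frame intervals
    fps = (len(timestamps_s) - 1) / duration
    return duration, fps

# Hypothetical DVR timestamps: 7 frames over 2 seconds
ts = [0.0, 0.33, 0.66, 1.0, 1.33, 1.66, 2.0]
duration, fps = implied_timing(ts)
print(f"duration={duration:.2f}s, fps={fps:.2f}")  # duration=2.00s, fps=3.00
```

A mismatch between this implied duration and a stopwatch measurement of the same event is exactly the kind of finding that requires continued access to the DVR itself.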
If you’re taking a DVR from a location, you can follow the guidance of the computer forensics world on handling the DVR (which is a computer) and properly removing it from the scene.
What’s wrong with this video? Hint: look at the Inspector’s results for width / height.
Unfortunately, the answer in many people’s minds is … nothing. I can’t begin to count the number of videos and images in BOLOs that attempt to depict a scene that looks quite like the one above. If you don’t know what you’re looking at, it’s hard to say what’s actually wrong with this video.
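The hint points at the width / height values. A common culprit (one possibility among several) is non-square pixels: many DVR formats store, say, 704×480 frames intended for 4:3 display, and playing them back unscaled distorts the scene. A minimal sketch of the correction, assuming the intended display aspect ratio is known:

```python
from fractions import Fraction

def display_size(storage_w: int, storage_h: int,
                 display_aspect: Fraction) -> tuple[int, int]:
    """Compute the corrected display width for a stored frame whose
    pixels are not square, keeping the height fixed."""
    corrected_w = round(storage_h * display_aspect)
    return corrected_w, storage_h

# Hypothetical example: 704x480 storage intended for 4:3 display
w, h = display_size(704, 480, Fraction(4, 3))
print(w, h)  # 640 480
```

The stored 704 pixels per line must be resampled to 640 before measurements or comparisons are made, otherwise every shape in the frame is horizontally stretched.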
We’re back from the Axon Accelerate Conference. What an incredible experience to meet so many law enforcement professionals who are enthusiastic about going from Capture to the Courtroom with reliable tools based in science and fact, not tools repurposed from the art world.
I’d like to share today the answer to a question posed to us at the Conference. The question was, “How do you quickly get rid of that annoying orange color cast that you find in images / videos taken in underground locations or grow houses?”
The answer is the Temperature Tint filter (found in the Adjust filter group). But, before we look at the filter and how it works, let’s talk about Colour Temperature.
The chart above is from my old book, Forensic Photoshop. It’s helpful to look at colour temperature from the standpoint of the Sun as it rises – the horizon going from warm to cool. Another way to look at colour temperature is with the chart below that places temperature (the Planckian locus in Kelvin) as it relates to the CIE XYZ Color Space.
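To make the Kelvin scale tangible, a widely used empirical approximation (Tanner Helland’s curve fit to the Planckian locus – not Amped FIVE’s internal method) maps a colour temperature to an approximate sRGB value. Low temperatures come out warm (little blue), high temperatures come out cool:

```python
import math

def kelvin_to_rgb(kelvin: float) -> tuple[int, int, int]:
    """Approximate sRGB for a black-body colour temperature
    (Tanner Helland's curve fit, valid roughly 1000-40000 K)."""
    t = kelvin / 100.0
    r = 255.0 if t <= 66 else 329.698727446 * (t - 60) ** -0.1332047592
    if t <= 66:
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda x: int(max(0, min(255, round(x))))
    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(2000))   # warm: strong red, little blue
print(kelvin_to_rgb(6600))   # near neutral white
```

The orange cast from sodium-vapor lamps in grow houses sits at the warm end of this curve, which is why shifting the temperature toward the cool end neutralises it.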
If you’ve wondered about the filters in the Extract Filter group and asked yourself what they’re for, you’re not alone. Depending on your specific use case with Amped FIVE, there are likely a few filters for which you have no use in your current context. Others, you may use in a very specific way each time – but others may use them differently.
Thus it is that I encountered a request for a feature that’s been in Amped FIVE for quite some time. I’ve responded to the request with details on how to accomplish the task. Now, I’ll expand on the question and share a more detailed look at an often overlooked filter – Add Text. (click on the images to see the full-res versions)
We work in the field of forensic video analysis, which is generally intended as the analysis of the images themselves and their context in a legal setting. For this reason, our customers often ask us if our products are valid for court use and if they have been validated and certified. We have written this post as an answer to the most common questions related to this topic.
You can also download this as a PDF document here.
What are the scientific foundations of Amped Software products?
All the processes implemented in our software follow the principles of scientific methodology. Any process follows these basic principles:
- Accuracy (Reliability): our tools and training program help users avoid processing errors caused by the implementation of an inappropriate tool or workflow and help mitigate the impact of human factors / bias.
- Repeatability: the same process, executed by the same user at a different time, must lead to the same result. The project format in Amped FIVE, for example, does not save any image data. Every time a project is reopened, all the processing happens again starting from the original data. In the event that a project file is lost or as a part of a validation or other test scenario, the same user can repeat the steps and settings, guided by the tool’s report, and achieve the same results.
- Reproducibility: another user with the proper competency should be able to reproduce the same results. Amped FIVE generates a complete report detailing all the steps of the processing, the settings / parameters applied, a description of the algorithms employed in the processing, and the scientific references for those algorithms (when applicable). In this way, another user, with a different tool set or by implementing the same algorithms, should be able to reproduce the same results. Given the huge number of implementation details and possible differences, the expectation is not a bit-by-bit copy of the results, but an image of similar informative content.
Additionally, we apply strict due diligence on the applicability of the algorithms for the forensic environment. Not every algorithm is, in fact, properly applicable in a forensic science setting. We cannot use algorithms which have a random component because they would not be reproducible and repeatable (when we do, we set a fixed seed for the random number generation) and we cannot use algorithms which “add” external data to the original, for example improving the quality of a face with information added from an average face. All information is derived from the actual evidence file.
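The fixed-seed point is easy to demonstrate with a generic illustration (not Amped FIVE code): with a seeded generator, an algorithm with a “random” component produces identical output on every run, so the processing remains repeatable and reproducible.

```python
import random

def noisy_sample(values: list[float], seed: int = 42) -> list[float]:
    """Draw a 'random' subsample deterministically: the fixed seed
    makes the result identical on every execution."""
    rng = random.Random(seed)   # local generator, fixed seed
    return rng.sample(values, 3)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
first = noisy_sample(data)
second = noisy_sample(data)
print(first == second)  # True: repeatable despite the random component
```

Without the fixed seed, two runs of the same project could yield different outputs – exactly the property that disqualifies an algorithm for forensic use.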
We employ algorithms which have been validated by the scientific community through peer review, such as in university textbooks, scientific publications, or conference papers. If, for some specific task, no good enough algorithms are available, or we need to adapt existing ones, we describe the algorithm and attempt to publish it in scientific journals.
One of the things that fascinates me the most in forensic video analysis is the relation between the objective digital data and the subjective human interpretation involved in any investigation. Psychological biases and the fallacies of human perception, easily verifiable with any of the popular optical illusions, are just some of the factors which must be taken into account while doing investigations.
But this time I want to look at things from a higher level and talk about the usefulness of video as evidence and our perception of it. Chances are you have already seen the very interesting article: “The Value of CCTV Surveillance Cameras as an Investigative Tool: An Empirical Analysis” (link).
The abstract provides some impressive numbers: “This study sought to establish how often CCTV provides useful evidence and how this is affected by circumstances, analysing 251,195 crimes recorded by British Transport Police that occurred on the British railway network between 2011 and 2015. CCTV was available to investigators in 45% of cases and judged to be useful in 29% (65% of cases in which it was available).”
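The quoted figures are internally consistent: if CCTV was available in 45% of all cases and useful in 29% of all cases, then among the cases where it was available it was useful in 29/45 ≈ 64.4%, which matches the paper’s 65% once rounding of the input percentages is accounted for.

```python
total_cases = 251_195
available = 0.45 * total_cases   # cases with CCTV available
useful = 0.29 * total_cases      # cases where CCTV was judged useful

# Conditional rate: useful given that CCTV was available
useful_given_available = useful / available
print(f"{useful_given_available:.1%}")  # 64.4%, ~65% as quoted
```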
For reference, this is the decision workflow used in the classification (image from the above paper).
This really made me feel good. It looks like what we are doing here at Amped Software is having an impact on society, and more than we expected. I think most people in our community would be surprised by the numbers. At Amped, we see hundreds of cases every year, and for more than half of the images and videos that we receive, we just say that they are useless.
A crime has occurred. Your investigators comb the area looking for clues. Your media relations staff hit the airwaves asking for the public’s help. Your social media cyber team trawls the Internet for images taken about the time of the crime and in the general location.
An image shows up on social media that was taken a few minutes before the crime occurred, looking down the street at what is now your crime scene. But, what’s wrong with this picture?
Taken into the setting sun, the features of the scene are back-lit. Useful information is lost.
Or is it?
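Often the detail is not gone at all, just compressed into the darkest values. A gamma adjustment – shown here as a simplified stand-in for the kind of tonal processing a forensic tool applies, using hypothetical 8-bit pixel values – redistributes those values into a visible range.

```python
def gamma_lift(pixels: list[int], gamma: float = 2.2) -> list[int]:
    """Apply gamma correction to 8-bit values; gamma > 1 brightens
    shadows far more than highlights, revealing back-lit detail."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

shadow_values = [5, 10, 20, 40]   # barely distinguishable by eye
print(gamma_lift(shadow_values))  # same ordering, much wider spread
```

Values that were crushed into a near-black band are spread across a range the eye can separate, while black stays black and white stays white.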