
Content Triage

Here in the US, we’re hyper-focused on standards and compliance. In the aftermath of the 2009 report, Strengthening Forensic Science in the United States: A Path Forward, many national and state initiatives were put forward to address the issues raised in the document.

We love checklists. Yes, sometimes there’s a need to stray a bit from the workflow, but checklists help guide the work.

In our classes here, we present the workflow from the standpoint of science and the law. One of the most important steps in the beginning of the workflow is Content Triage.

Content Triage is the process of asking of one’s digital multimedia evidence, “do I have the appropriate quantity/quality of data to answer the questions in my case?”

If you do, great. Proceed with your work. If not, your results will be limited and those limitations should be noted in your report. A common example of such a limitation is a file in which the target area lacks sufficient resolution.
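To make that question concrete, here’s a minimal sketch of a triage check in Python. The thresholds and the measured face width are hypothetical, purely illustrative values; your agency’s guidelines, not these numbers, should govern the decision.

```python
# A rough content-triage check: does the target region contain enough
# pixels to answer the question being asked? The thresholds and the
# measured value below are illustrative only, not from any standard.
MIN_FACE_WIDTH_PX_ID = 60      # assumed threshold for identification
MIN_FACE_WIDTH_PX_DETECT = 20  # assumed threshold for mere detection

face_width_px = 34  # hypothetical: pixels measured across the face

if face_width_px >= MIN_FACE_WIDTH_PX_ID:
    print("Resolution may support an identification question.")
elif face_width_px >= MIN_FACE_WIDTH_PX_DETECT:
    print("May support detection only; note the limitation in the report.")
else:
    print("Insufficient data; results will be limited.")
```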

I’ve got a short video on this topic over on our YouTube page.

I’ve been traveling the country speaking on this topic and its importance in investigations. My next stop will be at the Society for Integrity in Force Investigation and Reporting Annual Conference in Henderson, Nevada. You can get more info on this event over on our Events page. I hope to see you there.

Extracting Channels

If you’ve attended one of my classes or lectures, you’ve likely heard me say the following phrase many times, “There’s what you know, and there’s what you can prove.” The essence of this statement forms the basis of the Criminal Justice system as well as science.

What I “know” is subject to bias. What I “know” is found in the realm of truth. As a Kansas City Chiefs supporter, I “know” that the Oakland Raiders are a horrible team. I “know” that their fans are the worst in the world. After all, the Chiefs are the best and their fans are as pure as the wind-driven snow. This is “true” to me. Whilst funny and used to illustrate a point (I’m sure there are some really great people among the Raiders fan base), truths are things we “know.” Truths are rooted deep in feelings/emotions and unlikely to be changed by facts. There is a segment of the US population that believes it true that Elvis is still alive and that he’s likely hanging out on some Caribbean island with Tupac and Biggie Smalls.

Facts are measurable; they form the basis of tests of reliability. I can measure the temperature in a specific location and you, standing in the same location, can perform the same test and come to the same measurement. Supported by facts, our tests in this discipline become reliable, repeatable, and reproducible. Our conclusions can thus be trusted.

What on earth does this all have to do with Amped FIVE and Forensic Multimedia Analysis? I’m glad you asked.

By now, you’re well familiar with the fact that Amped Software operationalizes tools out of image science, math, statistics, etc. We also operationalize tools and training out of the world of psychology. By this I mean that if we’re going to work in the visual world, we must know how that visual world operates, not only from a mechanical standpoint but also from the standpoint of how the brain processes the inputs from its collection devices.
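The post’s title refers to pulling an image apart into its component colour channels. As a rough illustration of the concept (a Python/OpenCV sketch with a hypothetical file name, not a description of how FIVE implements it):

```python
import cv2

img = cv2.imread("evidence_frame.png")  # hypothetical file name
if img is None:
    raise FileNotFoundError("evidence_frame.png")

b, g, r = cv2.split(img)  # OpenCV loads colour images in BGR order

# Each channel is a single-plane grayscale image; detail that hides in
# the full-colour composite sometimes stands out in one channel alone.
cv2.imwrite("frame_red.png", r)
cv2.imwrite("frame_green.png", g)
cv2.imwrite("frame_blue.png", b)
```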


Using Snapshots in your Project

The ability to save a frame as a “Snapshot” has been a feature in Amped FIVE for quite some time. A simplified explanation of the use of Snapshots in interacting with third-party programs can be found here.

Today, I want to expand a bit on the use of Snapshots in your processing of video files.

Users are often asked to produce a BOLO flyer of multiple subjects, and problems with the video file complicate the fulfillment of the request:

  • The subjects aren’t looking towards the camera at the same time / within the same frame.
  • There’s only one good frame of video to work with and you need to crop out multiple subjects.

Enter the Snapshot tool.

The Snapshot tool, on the Player Panel, saves the currently displayed image (frame) as a snapshot, along with its associated project.

When you Right Click on the button, a menu pops up.

The post linked above talks about working with the listed third-party tools. In this case, we’ll save the frame out, selecting a file type and manually entering an appropriate file name.

We can choose from a variety of file types. In most cases, analysts will choose a lossless format like TIFF.

The results, saved to the working folder, are the frame of video as a TIFF and its associated project file (.afp).
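For standard containers, the underlying idea can be sketched outside the tool as well. Here’s a minimal Python/OpenCV sketch (hypothetical file name and frame number), offered as an illustration of the concept rather than FIVE’s implementation:

```python
import cv2

# Seek to the frame of interest and save it losslessly. Note that frame
# seeking in some containers is only approximate.
cap = cv2.VideoCapture("evidence.avi")   # hypothetical file name
cap.set(cv2.CAP_PROP_POS_FRAMES, 125)    # jump to frame 125
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("frame_125.tif", frame)  # TIFF keeps the data lossless
```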

Working in this way, analysts can quickly and easily work with frames of interest separate from the video file. The same frame can be added to the project as many times as necessary (as when cropping multiple subjects and objects from the same frame).

Amped FIVE is an amazingly flexible tool. The Snapshot tool, found in the Player Panel, provides yet another way to move frames of interest out of your project as files, or out to a third-party tool.

If you’d like more information about our tools and training options, contact us today.

Working Scientifically?

On Tuesday, May 22, I will be in Providence (RI, USA) at the Annual IACP Technology Conference to present a lecture. The topic, “Proprietary Video Files – The Science of Processing the Digital Crime Scene”, is rather timely. Many years ago, the US Federal Government responded to the NAS Report with the creation of the Organization of Scientific Area Committees for Forensic Science (OSAC). I happen to be a founding member of that group and currently serve as the Video Task Group chairperson within the Video / Imaging Technology and Analysis Subcommittee (VITAL). If one were to attempt to distill the reason for the creation of the OSAC and its ongoing mission, it would be this: we were horrible at science, let’s fix that.

Since the founding of the OSAC, each Subcommittee has been busy collecting guidelines and best practices documents, refining them, and moving them to a “standards publishing body.” For Forensic Multimedia Analysis, that standards publishing body is the ASTM. The difference between a guideline / best practice and a standard is that the former tend towards generic helpful hints whilst the latter are specific, enforceable must-dos. In an accredited laboratory, if there is a standard practice for your discipline, you must follow it. In your testimonial experience, you may be asked about the existence of standards and whether your work conforms to them. As an example, section 4 of ASTM E2825-12 notes the requirement that the reporting of your work should act as a sort of recipe, such that another analyst can reproduce your work. Whether used as bench notes or included within your formal report, the reporting in Amped FIVE fully complies with this guidance. There is a standard out there, and we follow it.


What’s the Difference?

It was a slow week on one of the most active mailing lists in our field. Then, Friday came along and a list member asked the following question:

If I exported two copies of the same frame from some digital video as stills. Then slightly changed one. Something as small as changing one pixel by a single RGB value….so it is technically different…

… Does anyone know any software that could look at both images and then produce a third image that is designed to highlight the differences? In this case it would be one pixel …

To which my colleague in the UK (Spready) quickly replied: Amped FIVE’s Video Mixer set to Absolute Difference. Ding! Ding! Ding! We have the winning answer! Let’s take a look at how to set up the examination, as well as what the results look like.

I’ve loaded an image into Amped FIVE twice. In the second instance of the file within the project, I’ve made a small local adjustment with the Levels filter. You can see the results of the adjustment in the above image.

With the images loaded and one of them adjusted, the Video Mixer, found in the Link filter group, is used to facilitate the difference examination.

On the Inputs tab of the Video Mixer’s Filter Settings, the First Input is set to the original image. The Second Input is set to the modified image, pointing to the Levels adjustment.

On the Blend tab of the Video Mixer’s Filter Settings, set the Mode to Absolute Difference.
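The same examination can be sketched outside of FIVE. Here’s a minimal Python/OpenCV version of the absolute-difference idea (hypothetical file names), offered as an illustration rather than a substitute for the Video Mixer’s report-friendly workflow:

```python
import cv2

a = cv2.imread("original.tif")   # hypothetical file names
b = cv2.imread("modified.tif")

diff = cv2.absdiff(a, b)             # zero wherever the images agree
changed = (diff.max(axis=2) > 0)     # boolean mask of altered pixels

print(f"{changed.sum()} pixel(s) differ")  # 1 in the mailing-list scenario
cv2.imwrite("difference.tif", diff)
```

Wherever the two inputs agree, the difference is exactly zero, so the lone altered pixel is the only non-zero value in the output.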


Identify Social Media Files with Amped Authenticate

Amped Authenticate Update 10641 introduced the new Social Media Identification filter. It can be found in the File Analysis filter group.

The filters in the File Analysis group generally examine the file’s container to return relevant information about the file. The Social Media Identification filter examines the file for traces of information that may indicate the file’s social media source. The key word here is “may.”
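To illustrate why “may” is the key word, here’s a deliberately simplistic Python sketch of the general idea: known naming conventions can hint at a source, but a filename alone proves nothing. The patterns below are illustrative examples, not Authenticate’s actual rule set.

```python
import re

# Filename conventions that *may* hint at a social media origin.
# Illustrative examples only -- never conclusive on their own.
KNOWN_PATTERNS = {
    r"^FB_IMG_\d+": "possibly Facebook",
    r"-WA\d{4}\.": "possibly WhatsApp",
}

def hint_source(filename: str) -> str:
    for pattern, hint in KNOWN_PATTERNS.items():
        if re.search(pattern, filename):
            return hint
    return "no filename-based hint"

print(hint_source("FB_IMG_1514761914.jpg"))    # possibly Facebook
print(hint_source("IMG-20180501-WA0007.jpg"))  # possibly WhatsApp
```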

The workflow that I will explain here is typical in the US and Canada. Take from it what you need in order to apply it to your country’s legal system.

Let’s begin.


Amped DVRConv for transcription?

In our most recent update of Amped DVRConv, we added the ability to separate the audio and video streams in your DME (digital multimedia evidence) files – to save the audio as a separate file. For some, this functionality went unnoticed. For others, it was a huge deal.

Two very specific use cases required this functionality. You asked. We delivered.

Case #1 – Child Exploitation/Human Trafficking

Agencies responsible for investigating cases of child exploitation/human trafficking were spending a lot of time redacting video files (blurring faces and other sensitive information) in order to send files off for audio transcription. The distribution of files in child exploitation cases (files that can be considered child pornography) for transcription is now made a lot easier with DVRConv. All of the evidentiary videos can be loaded into the tool and processed without having to view the footage. DVRConv helps to dramatically speed up the process of getting files to transcription whilst protecting identities and shielding staff from the harmful psychological and legal effects of viewing/distributing such material.

Case #2 – Police Generated Video

Agencies that have deployed body worn/vehicle-based cameras or have interview room recorders often have to send the resulting video files to outside companies for transcription. Like the case above, they are faced with having to redact the visual information prior to releasing the files to their contractor. Even if the agency has chosen a CJIS compliant transcription contractor, they may have agency policies that require the redaction of the visual information prior to release. DVRConv eliminates the need to perform a visual redaction ahead of such a release of files. Having this ability is already saving agencies a tremendous amount of time/money.

Users of DVRConv do not require specialized training. The tool can be used by anyone. It’s drag-and-drop easy. Plus, the settings can be configured so that the resulting audio file meets the requirements of your transcription vendor.
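For a sense of what stream separation involves under the hood, here’s a sketch using ffmpeg from Python on a standard container (hypothetical file names). This illustrates the general operation; DVRConv’s value is doing it at scale, on proprietary formats, without any of this configuration.

```python
import subprocess

# "-vn" drops the video stream; "-acodec copy" passes the audio through
# untouched, so nothing is re-encoded and no footage ever has to be
# viewed. The output extension assumes the source audio is AAC.
subprocess.run(
    ["ffmpeg", "-i", "interview.mp4",   # hypothetical file names
     "-vn", "-acodec", "copy",
     "interview_audio.aac"],
    check=True,
)
```

Because the audio is stream-copied rather than re-encoded, the audio data itself passes through unchanged.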

If you’d like to know more about Amped DVRConv, or any of our other Amped Software products and training options, contact us today.

The Sparse Selector

With over 100 filters and tools in Amped FIVE, it’s easy to lose track of which filter does what. A lot of folks pass right by the Sparse Selector, not knowing what it does or how to use it. The simple explanation of the Sparse Selector’s function is that it builds a list of frames defined by the user. Another way of explaining its use: the Sparse Selector tool outputs multiple frames taken from arbitrary, user-selected positions in an input video.

How would that be helpful, you ask? Oh, it’s plenty helpful. Let me just say, it’s one of my favorite tools in FIVE. Here’s why.

#1. – Setting up a Frame Average

You want to resolve a license plate. You’ve identified 6 frames of interest in which the target area contains original information, and you plan to frame average them to attempt to accomplish your goal. Unfortunately, the frames are not sequentially located within the file. How do you quickly and easily select only frames 125, 176, 222, 278, 314, and 355? The Sparse Selector, that’s how.
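The concept behind the sparse-select-then-average step can be sketched in Python/OpenCV (hypothetical file name; the frame numbers are the ones from the example above). This is the general idea, not FIVE’s implementation:

```python
import cv2
import numpy as np

FRAMES_OF_INTEREST = [125, 176, 222, 278, 314, 355]

cap = cv2.VideoCapture("carpark.avi")   # hypothetical file name
stack = []
for n in FRAMES_OF_INTEREST:
    cap.set(cv2.CAP_PROP_POS_FRAMES, n)
    ok, frame = cap.read()
    if ok:
        stack.append(frame.astype(np.float64))
cap.release()

# Averaging suppresses random noise while reinforcing the static plate.
average = np.mean(stack, axis=0).astype(np.uint8)
cv2.imwrite("plate_average.tif", average)
```

Averaging N frames reduces uncorrelated noise by roughly the square root of N, which is why even six well-chosen frames can make a real difference.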


Proving a negative

I have a dear old friend who is a brilliant photographer and artist. Years ago, when he was teaching at the Art Center College of Design in Pasadena, CA, he would occasionally ask me to substitute for him in class as he travelled the world to take photos. He would introduce me to the class as the person at the LAPD who authenticates digital media – the guy who inspects images for evidence of Photoshopping. Then, he’d say something to the effect that I would be judging their composites, so they’d better be good enough to fool me.

Last year, I wrote a bit about my experiences authenticating files for the City / County of Los Angeles. Today, I want to address a common misconception about authentication – proving a negative.

So many requests for authentication begin with the statement, “tell me if it’s been Photoshopped.” This request for a “blind authentication” asks the analyst to prove a negative. It’s a very tough request to fulfill.

In general, such a determination can be made with a certain degree of confidence if the image is verified to be an original from a specific device, with no signs of recapture, possibly by verifying the consistency of the sensor noise pattern (PRNU).

However, it is very common nowadays to work on images that are not originals but have been shared on the web or through social media, usually multiple consecutive times. This implies that metadata and other information about the format are gone, and usually the traces of tampering – if any – have been covered by multiple steps of compression and resizing. So you can easily establish that the picture is not an original, but it’s very difficult to rely on pixel statistics to evaluate possible tampering at the visual level.
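To make the PRNU idea concrete, here’s a deliberately simplified Python sketch of the noise-residual concept (hypothetical file names). Real PRNU analysis builds a camera reference from many images and uses far more careful filtering and statistics; this is only the intuition:

```python
import cv2
import numpy as np

def noise_residual(path: str) -> np.ndarray:
    """Image minus a denoised copy of itself -- a crude noise residual."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    denoised = cv2.GaussianBlur(img, (5, 5), 0)
    return img - denoised

# Hypothetical files; residuals from the same sensor should correlate.
ref = noise_residual("known_camera_shot.png")
test = noise_residual("questioned_image.png")

corr = np.corrcoef(ref.ravel(), test.ravel())[0, 1]  # assumes equal sizes
print(f"residual correlation: {corr:.4f}")
```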

Here’s what the US evidence codes say about authentication (there are variations in other countries, but the basic concept holds):

  • It starts with the person submitting the item. They (attorney, witness, etc.) swear / affirm that the image accurately depicts what it’s supposed to depict – that it’s a contextually accurate representation of what’s at issue.
  • This process of swearing / affirming comes with a bit of jeopardy. One swears “under penalty of perjury.” Thus, the burden is on the person submitting the item to be absolutely sure the item is contextually accurate and not “Photoshopped” to change the context. If they’re proven to have committed perjury, there are fines / fees and potentially jail time involved.
  • The person submits the file to support a claim. They swear / affirm, under penalty of perjury, that the file is authentic and accurately depicts the context of the claim.

Then, someone else cries foul. Someone else claims that the file has been altered in a specific way – item(s) deleted / added – scene cropped – etc.

It’s this specific allegation of forgery that is needed to test the claims. If there is no specific claim, then one is engaged in a “blind” authentication (attempting to prove a negative).

Why PDF/A?

One of the more frustrating aspects of the forensic multimedia analyst’s world is dealing with legacy technology. You arrive at a crime scene to find a 15-year-old DVR that only accepts Iomega Zip disks, or CD-RW discs, or a certain size / speed of CF card. What do you do?

You curse and swear and scour your junk drawers. You call / email friends. You wonder why folks keep these systems knowing that there are newer / better / cheaper systems out there.

If you’ve ever worked a cold case, you know the problems interfacing with old technology. If you’re working at a large agency, chances are there are several old computer systems cobbled together with new middleware. Replacing systems is costly and time consuming.

For reports, agencies are faced with a similar problem. My old agency used a product from IBM that required a stand-alone program (PC only) to read / edit the reports when saved in the native format. That’s not at all helpful.

When generating a report in Amped FIVE, the user is given a choice of output format: PDF, DOC, or HTML. Many states / jurisdictions require the user to output a PDF file for reports. But PDF is a very robust standard with several variants. When generating PDF report files, it’s important to understand the variants and what they’re for.

According to the PDF Association, “PDF/A is an ISO-standardized version of the Portable Document Format (PDF) specialized for use in the archiving and long-term preservation of electronic documents. PDF/A differs from PDF by prohibiting features ill-suited to long-term archiving, such as font linking (as opposed to font embedding) and encryption.”

If you want to make sure that your report can be viewed now, and long into the future, by the largest group of people, choose PDF/A – the archival version of PDF. Understanding this, the report generated by FIVE is PDF/A compliant. We understand that many court systems and police agencies are standardized on this version of PDF because it’s not only built with the future in mind, it’s the cheapest to support.
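If you ever need to verify what a given report file claims about itself, the PDF/A identification lives in the file’s XMP metadata (pdfaid:part / pdfaid:conformance). Here’s a quick heuristic sketch in Python (hypothetical file name); a true conformance check needs a dedicated validator such as veraPDF:

```python
import re

def pdfa_hint(path: str) -> str:
    """Search a PDF's raw bytes for the XMP pdfaid:part entry."""
    data = open(path, "rb").read()
    m = re.search(rb'pdfaid:part[>="\s]*(\d)', data)
    if m:
        return f"file claims PDF/A-{m.group(1).decode()}"
    return "no PDF/A identification found"

print(pdfa_hint("report.pdf"))  # hypothetical file name
```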
