Category Archives: Cases

Investigating Image Authenticity

This article, published in Evidence Technology Magazine, examines two cases involving the authentication of digital images and the importance of the questions asked of the analyst during those investigations. It also looks at how authentication software, such as Amped Authenticate, has been designed with a structured workflow to locate the puzzle pieces needed to answer those questions.

Read the full article here.

Retrieving Evidence from CCTV

Acquiring evidence from a digital camera or a smartphone is relatively easy. Images are usually in standard JPEG format and videos in MP4 or another format that most players can read. But what is the best way to retrieve and handle CCTV footage to ensure it stands up to scrutiny in the courtroom? There are numerous possibilities, and the best choice depends on where the video is actually recorded.

To learn more, read the article by Martino Jerian, Amped CEO and Founder, published in Lawyer Monthly.

The Amped FIVE Assistant Video Tutorial

We recently announced the release of the latest version of Amped FIVE (10039) where we introduced a new operational mode through a panel called the “Assistant”.

The Assistant provides a set of predefined workflows that can be used to automate common operations or guide new users, but it is not obtrusive: you can use it or ignore it, and you can still add filters and work as usual. It is simply an additional option.

We’ve created a video tutorial so you can see it in action. See below or watch on YouTube now!

We’ll be adding more videos to our YouTube channel soon, so subscribe to see more videos like this.

The Sparse Selector

With over 100 filters and tools in Amped FIVE, it’s easy to lose track of which filter does what. A lot of folks pass right by the Sparse Selector, not knowing what it does or how to use it. The simple explanation of the Sparse Selector’s function is that it is a list of frames defined by the user. Put another way: the Sparse Selector outputs multiple frames taken from arbitrary, user-selected positions in an input video.
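Conceptually, picking out only user-chosen frame positions looks something like the Python sketch below. This is an illustration of the idea, not Amped FIVE's internals; `sparse_select` is a hypothetical helper and the video is modelled as any iterable of decoded frames:

```python
def sparse_select(frames, indices):
    """Yield only the frames at the requested positions.

    `frames` is any iterable of decoded frames; `indices` is the
    user-defined list of frame numbers. A toy stand-in for what the
    Sparse Selector does inside Amped FIVE.
    """
    wanted = set(indices)
    for i, frame in enumerate(frames):
        if i in wanted:
            yield frame

# Usage: pick frames 125, 176, 222, 278, 314 and 355 from a 400-frame clip.
# Here frame i is represented simply by the number i.
video = range(400)
picked = list(sparse_select(video, [125, 176, 222, 278, 314, 355]))
```

The point is that the selection is driven entirely by the user's list of positions, not by any fixed interval or range.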

How would that be helpful, you ask? Oh, it’s plenty helpful. Let me just say, it’s one of my favorite tools in FIVE. Here’s why.

#1. – Setting up a Frame Average

You want to resolve a license plate. You’ve identified six frames of interest in which the relevant area of the frame contains original information, and you plan to frame average them to attempt to accomplish your goal. Unfortunately, the frames are not located sequentially within the file. How do you quickly and easily select only frames 125, 176, 222, 278, 314, and 355? The Sparse Selector, that’s how.
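Once the frames are selected, frame averaging itself is just a per-pixel mean over the stack, which suppresses random noise while reinforcing the static plate detail. A minimal numpy sketch of that step (the frames here are random 8-bit stand-ins, since the real ones would come from the video):

```python
import numpy as np

# Hypothetical stand-ins for the six decoded frames of interest
# (frames 125, 176, 222, 278, 314 and 355 in the example above).
rng = np.random.default_rng(0)
frames_of_interest = [rng.integers(0, 256, (480, 640), dtype=np.uint8)
                      for _ in range(6)]

# Average in floating point to avoid 8-bit overflow, then round back.
stack = np.stack(frames_of_interest).astype(np.float64)
averaged = np.round(stack.mean(axis=0)).astype(np.uint8)
```

Averaging in floating point before converting back to 8-bit matters: summing uint8 frames directly would wrap around at 255 and corrupt the result.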

Continue reading

Proving a negative

I have a dear old friend who is a brilliant photographer and artist. Years ago, when he was teaching at the Art Center College of Design in Pasadena, CA, he would occasionally ask me to substitute for him in class as he travelled the world to take photos. He would introduce me to the class as the person at the LAPD who authenticates digital media – the guy who inspects images for evidence of Photoshopping. Then, he’d say something to the effect that I would be judging their composites, so they’d better be good enough to fool me.

Last year, I wrote a bit about my experiences authenticating files for the City / County of Los Angeles. Today, I want to address a common misconception about authentication – proving a negative.

So many requests for authentication begin with the statement, “tell me if it’s been Photoshopped.” This request for a “blind authentication” asks the analyst to prove a negative. It’s a very tough request to fulfill.

In general, this can be established with a reasonable degree of certainty only if the image is verified to be an original from a specific device, with no signs of recapture, and, ideally, with the sensor noise pattern (PRNU) verified to be consistent with that device.

However, it is very common nowadays to work on images that are not originals but have been shared on the web or through social media, often multiple consecutive times. This means that metadata and other format information are gone, and the traces of tampering – if any – have usually been covered by multiple rounds of compression and resizing. So it is easy to establish that the picture is not an original, but very difficult to rely on pixel statistics to evaluate possible tampering at the visual level.

Here’s what the US evidence codes say about authentication (there are variations in other countries, but the basic concept holds):

  • It starts with the person submitting the item. They (attorney, witness, etc.) swear / affirm that the image accurately depicts what it’s supposed to depict – that it’s a contextually accurate representation of what’s at issue.
  • This process of swearing / affirming comes with a bit of jeopardy. One swears “under penalty of perjury.” Thus, the burden is on the person submitting the item to be absolutely sure the item is contextually accurate and not “Photoshopped” to change the context. If they’re proven to have committed perjury, there’s fines / fees and potentially jail time involved.
  • In short, the person submits the file to support a claim and swears / affirms, under penalty of perjury, that the file is authentic and accurately depicts the context of that claim.

Then, someone else cries foul. Someone else claims that the file has been altered in a specific way – item(s) deleted / added – scene cropped – etc.

It’s this specific allegation of forgery that gives the analyst something concrete to test. If there is no specific claim, then one is engaged in a “blind” authentication (attempting to prove a negative). Continue reading

The problems of the GAVC codec solved

In my years of working crime scenes in Los Angeles, I would often come across Geovision DVRs. They were usually met with a groan. Geovision’s codecs are problematic to deal with and don’t play nicely on analysts’ PCs.

With Amped FIVE, processing files from Geovision’s systems is easy. Plus, Amped FIVE has the tools needed to correct the problems presented by Geovision’s shortcuts.

Here’s an example of a workflow for handling an AVI file from Geovision, one that utilizes the GAVC codec.

If you have the GAVC codec installed, Amped FIVE will use it to attempt to display the video. You may notice immediately that the playback isn’t working correctly. Not to worry, we’ll fix it. Within FIVE, select File > Convert DVR and set the controls to Raw (Uncompressed). When you click Apply, the file will be quickly converted.

Continue reading

What’s in a name? How to rename in Amped FIVE

I’ve been on the road a lot lately. By the end of this month, I’ll have spent two weeks with District Attorney’s Offices in New Jersey (US) training users in the many uses of Amped’s flagship product, Amped FIVE. Every user has a slightly different use case and needs. Prosecutors’ Offices are no different.

Field personnel / crime scene technicians / analysts might not be very concerned with trial prep and the creation of demonstratives for court. But DA’s offices are. That being said, there are a few things that every user of Amped FIVE can do – beginning with the end in mind – to make the trial prep job a bit easier.

Hopefully, by now you know that you can rename processing chains in Amped FIVE to aid in your organization.

Right-click on the Chain and select Rename Chain. Then, give it a unique name that describes what you’re working with or the question you’re trying to answer in the file. Examples include camera number, vehicle determination, license plate determination, etc.

This is quite helpful. But, did you know that you can also rename the Bookmarks? Additionally, you can add a description to the bookmark. Let’s see what this looks like.

Continue reading

Cowboys versus Bureaucrats: Attitude and Tools

There were a couple of interesting discussions this week which prompted me to write this blog post. One is related to the scientific methods used during the analysis of images and videos, the other relates to the tools used.

There was a pretty interesting and detailed conversation that happened on an industry specific mailing list where a few experts debated about the scientific and forensic acceptability of different methodologies. This discussion began with the reliability of speed determination from CCTV video but then evolved into a more general discussion.

There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.

What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for the proper forensic procedure, the reliability of his methods and proper error estimation. Typical things the cowboy does:

  • To convert a proprietary video, he just does a screen capture maximizing the player on the screen, without being concerned about missing or duplicated frames.
  • Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
  • He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open source, free tools, commercial tools, plugins, more or less legitimate stuff, maybe some Matlab or Python script if he has the technical knowledge.
  • He will use whatever result “looks good” without questioning its validity or reliability.
  • If asked to document and repeat his work in detail he’ll be in deep trouble.
  • If asked the reason and validity of choosing a specific algorithm or procedure, he will say “I’ve always done it like this, and nobody ever complained”.
  • When asked to improve a license plate he will spell out the digits even if they are barely recognizable on a single P-frame and are probably just the result of compression artifacts amplified by post-processing.
  • When asked to identify a person, he will be able to do so with absolute certainty even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
  • When sending around results to colleagues he just pastes processed snapshots into Word documents.
  • When asked to authenticate an image, he just checks if the Camera Make and Model is present in the metadata.

Continue reading

To seize or to retrieve: that is the question

A crime occurs and is “witnessed” by a digital CCTV system. The files that your investigation wants/needs are in the system’s recording device (DVR). What do you do to get them? Do you seize the entire DVR as evidence (“bag and tag”)? Do you try to access the recorder through its user interface and download/export/save the files to USB stick/drive or other removable media?

Answer: it depends.

There are times when you’d want to seize the DVR. Perhaps 5% of cases will present a situation where having the DVR in the lab is necessary:

  • Arsons/fires can turn a DVR into a bunch of melted-down parts. You’re obviously not going to power up a melted DVR.
  • An analysis that tests how the DVR performs and creates files. For example, does the frame timing represent the actual elapsed time, or merely how the DVR fit that time into its container? Such tests of reliability will require access to the DVR throughout the legal process.
  • Content analysis questions where there’s a difference of opinion as to whether something is an object or an artifact. For example, is it a white sticker on the back of a car, or an artifact of compression (a random bit of noise)?

If you’re taking a DVR from a location, you can follow the guidance of the computer forensics world on handling the DVR (which is a computer) and properly removing it from the scene.

Continue reading

What’s wrong with this video?

What’s wrong with this video? Hint: look at the Inspector’s results for width / height.

Unfortunately, the answer in many people’s minds is… nothing. I can’t begin to count the number of videos and images in BOLOs that attempt to depict a scene that looks quite like the one above. If you don’t know what you’re looking at, it’s hard to say what’s actually wrong with this video.

Continue reading