Category Archives: Tutorials

Using Snapshots in your Project

The ability to save a frame as a “Snapshot” has been a feature in Amped FIVE for quite some time. A simplified explanation of the use of Snapshots in interacting with third-party programs can be found here.

Today, I want to expand a bit on the use of Snapshots in your processing of video files.

Users are often asked to produce a BOLO flyer featuring multiple subjects, and problems with the video file can complicate fulfilling the request.

  • The subjects aren’t looking towards the camera at the same time / within the same frame.
  • There’s only one good frame of video to work with and you need to crop out multiple subjects.

Enter the Snapshot tool.

The Snapshot tool, on the Player Panel, saves the currently displayed image (frame) along with its associated project.

When you Right Click on the button, a menu pops up.

The post linked above talks about working with the listed third-party tools. In this case, we’ll save the frame out, selecting a file type and manually entering an appropriate file name.

We can choose from a variety of file types. In most cases, analysts will choose a lossless format like TIFF.

The results, saved to the working folder, are the frame of video as a TIFF and its associated project file (.afp).

Working in this way, analysts can quickly and easily work with frames of interest separately from the video file. The same frame can be added to the project several times, as necessary (in the case of cropping multiple subjects and objects from the same frame).

Amped FIVE is an amazingly flexible tool. The Snapshot tool, found in the Player Panel, provides yet another way to move frames of interest out of your project as files, or out to a third-party tool.

If you’d like more information about our tools and training options, contact us today.

Working Scientifically?

On Tuesday, May 22, I will be in Providence (RI, USA) at the Annual IACP Technology Conference to present a lecture. The topic, “Proprietary Video Files – The Science of Processing the Digital Crime Scene”, is rather timely. Many years ago, the US Federal Government responded to the NAS Report with the creation of the Organization of Scientific Area Committees for Forensic Science (OSAC). I happen to be a founding member of that group and currently serve as the Video Task Group chairperson within the Video / Imaging Technology and Analysis Subcommittee (VITAL). If one were to attempt to distill the reason for the creation of the OSAC and its on-going mission, it would be this: we were horrible at science, let’s fix that.

Since the founding of the OSAC, each Subcommittee has been busy collecting guidelines and best practices documents, refining them, and moving them to a “standards publishing body.” For Forensic Multimedia Analysis, that standards publishing body is the ASTM. The difference between a guideline / best practice and a standard is that the former tends towards generic helpful hints whilst the latter is a set of specific and enforceable must-dos. In an accredited laboratory, if there is a standard practice for your discipline, you must follow it. In your testimonial experience, you may be asked about the existence of standards and whether your work conforms to them. As an example, section 4 of ASTM E2825-12 notes the requirement that the reporting of your work should act as a sort of recipe, such that another analyst can reproduce your work. Whether used as bench notes, or included within your formal report, the reporting in Amped FIVE fully complies with this guidance. There is a standard out there, and we follow it.

Continue reading

What’s the Difference?

It was a slow week on one of the most active mailing lists in our field. Then, Friday came along and a list member asked the following question:

If I exported two copies of the same frame from some digital video as stills. Then slightly changed one. Something as small as changing one pixel by a single RGB value….so it is technically different…

… Does anyone know any software that could look at both images and then produce a third image that is designed to highlight the differences? In this case it would be one pixel …

To which, my colleague in the UK (Spready) quickly replied – Amped FIVE’s Video Mixer set to Absolute Difference. Ding! Ding! Ding! We have the winning answer! Let’s take a look at how to set up the examination, as well as what the results look like.

I’ve loaded an image into Amped FIVE twice. In the second instance of the file within the project, I’ve made a small local adjustment with the Levels filter. You can see the results of the adjustment in the above image.

With the images loaded and one of them adjusted, the Video Mixer, found in the Link filter group, is used to facilitate the difference examination.

On the Inputs tab of the Video Mixer’s Filter Settings, the First Input is set to the original image. The Second Input is set to the modified image, pointing to the Levels adjustment.

On the Blend tab of the Video Mixer’s Filter Settings, set the Mode to Absolute Difference.
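Under the hood, the Absolute Difference blend is simply the per-pixel absolute value of the subtraction of the two inputs. Here is an illustrative sketch in NumPy (this is a minimal model of the blend mode, not Amped FIVE’s actual implementation), showing how a single changed pixel stands out immediately:

```python
import numpy as np

def absolute_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-pixel |a - b|, the blend mode used for the difference exam."""
    # Cast to a signed type first so the subtraction cannot wrap around
    # in unsigned 8-bit arithmetic.
    return np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)

# Two "frames" that differ in exactly one pixel value.
original = np.zeros((4, 4, 3), dtype=np.uint8)
modified = original.copy()
modified[2, 1, 0] += 1  # bump one pixel's red channel by a single value

diff = absolute_difference(original, modified)
print(np.count_nonzero(diff))  # -> 1: only the altered pixel is non-zero
```

Because the result is zero everywhere the two images agree, even a one-value change in one channel of one pixel appears as the only non-zero pixel in the output.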

Continue reading

The Amped FIVE Assistant Video Tutorial

We recently announced the release of the latest version of Amped FIVE (10039) where we introduced a new operational mode through a panel called the “Assistant”.

The Assistant provides a set of predefined workflows which can be used to automate common operations or guide new users, but it’s not obtrusive. You can use it or not, and you can always add filters or work as you normally would; it’s just an additional option.

We’ve created a video tutorial so you can see it in action. See below or watch on YouTube now!

We’ll be adding more videos to our YouTube channel soon, so follow us to get more videos like this.

The Authenticate Countdown to Christmas

It’s beginning to look a lot like Christmas!

Christmas is coming! To celebrate, we will share a daily tip and trick on how to authenticate your digital photo evidence with Amped Authenticate.

Follow us daily on our social networks in the month of December as we open the 24 doors of our Authenticate Advent Calendar. The countdown starts now!

#AuthenticateChristmas

Follow us on Twitter, Facebook, LinkedIn, Google Plus, and YouTube.

You can also visit our website daily as we open the doors of our Advent Calendar here.

The Sparse Selector

With over 100 filters and tools in Amped FIVE, it’s easy to lose track of which filter does what. A lot of folks pass right by the Sparse Selector, not knowing what it does or how to use it. The simple explanation of the Sparse Selector’s function is that it is a list of frames defined by the user. Another way of explaining its use: the Sparse Selector tool outputs multiple frames taken from arbitrary, user-selected positions in an input video.

How would that be helpful, you ask? Oh, it’s plenty helpful. Let me just say, it’s one of my favorite tools in FIVE. Here’s why.

#1. – Setting up a Frame Average

You want to resolve a license plate. You’ve identified 6 frames of interest where the location within the frame has original information, and you’re going to frame average them to attempt to accomplish your goal. Unfortunately, the frames are not sequentially located within the file. How do you select (easily and quickly) only frames 125, 176, 222, 278, 314, and 355? The Sparse Selector, that’s how.
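The payoff of selecting those frames is the frame average itself: the plate detail is constant across the frames, while the noise is random, so averaging cancels the noise and keeps the signal. A minimal NumPy sketch (the frame numbers are the hypothetical ones from the example; loading real video frames is out of scope here, so we simulate them):

```python
import numpy as np

# Hypothetical frame numbers of interest from the example above.
FRAMES_OF_INTEREST = [125, 176, 222, 278, 314, 355]

def frame_average(frames: list) -> np.ndarray:
    """Average aligned frames; random noise cancels, constant detail remains."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0).round().astype(np.uint8)

# Simulate: the same (flat) plate region with additive noise in each frame.
rng = np.random.default_rng(0)
plate = np.full((8, 24), 128, dtype=np.uint8)
noisy = [np.clip(plate + rng.normal(0, 20, plate.shape), 0, 255).astype(np.uint8)
         for _ in FRAMES_OF_INTEREST]

averaged = frame_average(noisy)
# Averaging six frames reduces the noise standard deviation by
# roughly the square root of 6.
```

This is why the Sparse Selector matters: the averaging only helps if every input frame actually contains the detail of interest, so being able to hand-pick frames 125, 176, and so on is exactly what the technique needs.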

Continue reading

Proving a negative

I have a dear old friend who is a brilliant photographer and artist. Years ago, when he was teaching at the Art Center College of Design in Pasadena, CA, he would occasionally ask me to substitute for him in class as he travelled the world to take photos. He would introduce me to the class as the person at the LAPD who authenticates digital media – the guy who inspects images for evidence of Photoshopping. Then, he’d say something to the effect that I would be judging their composites, so they’d better be good enough to fool me.

Last year, I wrote a bit about my experiences authenticating files for the City / County of Los Angeles. Today, I want to address a common misconception about authentication – proving a negative.

So many requests for authentication begin with the statement, “tell me if it’s been Photoshopped.” This request for a “blind authentication” asks the analyst to prove a negative. It’s a very tough request to fulfill.

In general, this can be achieved with a certain degree of certainty if the image is verified to be an original from a specific device, with no signs of recapture, possibly by verifying the consistency of the sensor noise pattern (PRNU).

However, it is very common nowadays to work on images that are not originals but have been shared on the web or through social media, often multiple consecutive times. This implies that metadata and other information about the format are gone, and the traces of tampering – if any – have usually been covered by multiple steps of compression and resizing. So it is easy to determine that the picture is not an original, but very difficult to rely on pixel statistics to evaluate possible tampering at the visual level.

Here’s what the US evidence codes say about authentication (there are variations in other countries, but the basic concept holds):

  • It starts with the person submitting the item. They (attorney, witness, etc.) swear / affirm that the image accurately depicts what it’s supposed to depict – that it’s a contextually accurate representation of what’s at issue.
  • This process of swearing / affirming comes with a bit of jeopardy. One swears “under penalty of perjury.” Thus, the burden is on the person submitting the item to be absolutely sure the item is contextually accurate and not “Photoshopped” to change the context. If they’re proven to have committed perjury, there are fines / fees and potentially jail time involved.
  • The person submits the file to support a claim. They swear / affirm, under penalty of perjury, that the file is authentic and accurately depicts the context of the claim.

Then, someone else cries foul. Someone else claims that the file has been altered in a specific way – item(s) deleted / added – scene cropped – etc.

It’s this specific allegation of forgery that gives the analyst a claim to test. If there is no specific claim, then one is engaged in a “blind” authentication (attempting to prove a negative).

Continue reading

Why PDF/A?

One of the more frustrating aspects of the forensic multimedia analyst’s world is dealing with legacy technology. You arrive at a crime scene to find a 15-year-old DVR that only accepts Iomega Zip disks, or CD-RW discs, or a certain size / speed of CF card. What do you do?

You curse and swear and scour your junk drawers. You call / email friends. You wonder why folks keep these systems knowing that there are newer / better / cheaper systems out there.

If you’ve ever worked a cold case, you know the problems interfacing with old technology. If you’re working at a large agency, chances are there are several old computer systems cobbled together with new middleware. Replacing systems is costly and time consuming.

For reports, agencies are faced with a similar problem. My old agency used a product from IBM that required a stand-alone program (PC only) to read / edit the reports when saved in the native format. That’s not at all helpful.

When generating a report in Amped FIVE, the user is given a choice of output format: PDF, DOC, or HTML. Many states / jurisdictions require reports to be output as PDF files. But PDF is a very robust standard with several variants. When generating PDF report files, it’s important to understand the variants and what they’re for.

According to the PDF Association, “PDF/A is an ISO-standardized version of the Portable Document Format (PDF) specialized for use in the archiving and long-term preservation of electronic documents. PDF/A differs from PDF by prohibiting features ill-suited to long-term archiving, such as font linking (as opposed to font embedding) and encryption.”

If you want to make sure that your report can be viewed now, and long into the future, by the largest group of people, choose PDF/A – the archival version of PDF. Understanding this, the report generated by FIVE is PDF/A compliant. We understand that many court systems and police agencies are standardized on this version of PDF because it’s not only built with the future in mind, it’s the cheapest to support.

Continue reading

The problems of the GAVC codec solved

In my years of working crime scenes in Los Angeles, I would often come across Geovision DVRs. They were usually met with a groan. Geovision’s codecs are problematic to deal with and don’t play nicely on analysts’ PCs.

With Amped FIVE, processing files from Geovision’s systems is easy. Plus, Amped FIVE has the tools needed to correct the problems presented by Geovision’s shortcuts.

Here’s an example of a workflow for handling an AVI file from Geovision, one that utilizes the GAVC codec.

If you have the GAVC codec installed, Amped FIVE will use it to attempt to display the video. You may notice immediately that the video isn’t playing back correctly. Not to worry, we’ll fix it. Within FIVE, select File > Convert DVR and set the controls to Raw (Uncompressed). When you click Apply, the file will be quickly converted.

Continue reading

PRNU-based Camera Identification in Amped Authenticate

Source device identification is a key task in digital image investigation. The goal is to link a digital image to the specific device that captured it, just like they do with bullets fired by a specific gun (indeed, image source device identification is also known as “image ballistics”).

The analysis of Photo Response Non-Uniformity (PRNU) noise is considered the prominent approach to accomplish this task. PRNU is a specific kind of noise introduced by the CMOS/CCD sensor of the camera and is considered to be unique to each sensor. Being a multiplicative noise, it cannot be effectively eliminated through internal processing, so it remains hidden in pixels, even after JPEG compression.

In order to test if an image comes from a given camera, first, we need to estimate the Camera Reference Pattern (CRP), characterizing the device. This is done by extracting the PRNU noise from many images captured by the camera and “averaging” it (let’s not dive too deep into the details). The reason for using several images is to get a more reliable estimate of the CRP, since separating PRNU noise from image content is not a trivial task, and we want to retain PRNU noise only.

After the CRP is computed and stored, we can extract the PRNU noise from a test image and “compare” it to the CRP: if the resulting value is over a given threshold, we say the image is compatible with the camera.
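The two steps above – averaging residuals into a CRP, then correlating a test residual against it – can be sketched with simulated data. This is a deliberately simplified model (real PRNU extraction uses wavelet-based denoising and more careful statistics; the variable names, noise levels, and threshold here are illustrative assumptions, not Authenticate’s internals):

```python
import numpy as np

rng = np.random.default_rng(42)
H, W = 64, 64

# Simulated sensor fingerprints (PRNU) for "camera A" and an unrelated camera.
prnu_a = rng.normal(0, 1, (H, W))
prnu_b = rng.normal(0, 1, (H, W))

def shoot(prnu):
    """A flat scene with multiplicative PRNU plus random shot noise."""
    scene = 128.0
    return scene * (1 + 0.02 * prnu) + rng.normal(0, 2, (H, W))

def residual(img):
    """Noise residual: for a flat scene, just the image minus its mean."""
    return img - img.mean()

# Estimate the Camera Reference Pattern by averaging residuals of many shots.
crp = np.mean([residual(shoot(prnu_a)) for _ in range(20)], axis=0)

def ncc(a, b):
    """Normalized correlation between a residual and the CRP."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

score_same = ncc(residual(shoot(prnu_a)), crp)   # image from camera A
score_other = ncc(residual(shoot(prnu_b)), crp)  # image from another camera
# score_same lands well above score_other, so a threshold between the
# two populations separates "compatible" from "not compatible".
```

Even in this toy model, the residual of an image from camera A correlates strongly with the CRP while an image from a different sensor does not, which is exactly the decision the threshold encodes.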

Camera identification through PRNU analysis has been part of Amped Authenticate for quite some time. However, many of our users told us that the filter was hard to configure and the results were not easy to interpret. So, at the end of last year, we added a new implementation of the algorithm (Authenticate Build 8782). The new features include:

Advanced image pre-processing during training
In order to lower the false alarm probability, we implemented new filtering algorithms to remove non-discriminative artifacts that are common to most digital cameras (e.g., artifacts due to Color Filter Array demosaicking interpolation).

Continue reading