Category Archives: Authenticate

Investigating Image Authenticity

This article, published in Evidence Technology Magazine, looks at two cases involving the authentication of digital images and at the importance of the questions asked of the analyst during those investigations. It also examines how authentication software such as Amped Authenticate has been designed with a structured workflow to locate the puzzle pieces needed to help answer those questions.

Read the full article here.

Only a matter of time until fake evidence leads to false convictions

With the rise of the digital age, can experts trust that photographic evidence is legitimate?

Sophie Garrod, from Police Oracle, writes about how a growing number of forensic and counter-terrorism units are getting on board with pioneering image authentication software.

Approximately a third of UK forces have invested in Amped Software products – including Amped Authenticate, an all-in-one computer programme that can detect doctored images.

Forensic image departments, counter-terrorism units, and government departments say they are saving time and money by sending detectives on a short training course in the software.

Read the full article here to learn more.

The Authenticate Countdown to Christmas

It’s beginning to look a lot like Christmas!

Christmas is coming! To celebrate, we will share a daily tip and trick on how to authenticate your digital photo evidence with Amped Authenticate.

Follow us daily on our social networks in the month of December as we open the 24 doors of our Authenticate Advent Calendar. The countdown starts now!

#AuthenticateChristmas

Follow us on Twitter, Facebook, LinkedIn, Google Plus and YouTube

You can also visit our website daily as we open the doors of our Advent Calendar here.

Seeing Beyond the Image

Martino Jerian, Amped CEO and Founder, examines the context, content, and format of images. From an image and the context in which it is used we can obtain a lot of information that is not visible to the naked eye; and as for what is visible to the naked eye, can we trust it? The process of authenticating an image is a mix of technical and investigative elements. This article looks at how to perform a complete image analysis.

Read the article published in Digital Forensics Magazine.

Proving a negative

I have a dear old friend who is a brilliant photographer and artist. Years ago, when he was teaching at the Art Center College of Design in Pasadena, CA, he would occasionally ask me to substitute for him in class as he travelled the world to take photos. He would introduce me to the class as the person at the LAPD who authenticates digital media – the guy who inspects images for evidence of Photoshopping. Then, he’d say something to the effect that I would be judging their composites, so they’d better be good enough to fool me.

Last year, I wrote a bit about my experiences authenticating files for the City / County of Los Angeles. Today, I want to address a common misconception about authentication – proving a negative.

So many requests for authentication begin with the statement, “tell me if it’s been Photoshopped.” This request for a “blind authentication” asks the analyst to prove a negative. It’s a very tough request to fulfill.

In general, this can be established with a certain degree of certainty only if the image is verified to be an original from a specific device, with no signs of recapture, and possibly by verifying the consistency of the sensor noise pattern (PRNU).

However, it is very common nowadays to work on images that are not originals but have been shared on the web or through social media, often multiple consecutive times. This implies that metadata and other information about the format are gone, and the traces of tampering – if any – have usually been covered by multiple steps of compression and resizing. So you can easily tell that the picture is not an original, but it is very difficult to rely on pixel statistics to evaluate possible tampering at the visual level.
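
As a very limited first step, an analyst can at least triage the file's metadata to see whether it still looks like a camera original. The sketch below is purely illustrative (it is not Amped Authenticate's workflow), assumes Pillow is available, and – crucially – a "clean" result proves nothing on its own, which is exactly the point about proving a negative.

```python
# Illustrative first-pass triage only; NOT a tool's workflow, and a clean
# result does not demonstrate authenticity.
from PIL import Image
from PIL.ExifTags import TAGS

def quick_metadata_triage(path):
    """Flag the most obvious signs that a file is not a camera original."""
    exif = Image.open(path).getexif()
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    notes = []
    if not tags:
        notes.append("No EXIF metadata: typical of files re-saved by web/social platforms.")
    software = str(tags.get("Software", ""))
    if any(name in software for name in ("Photoshop", "GIMP", "Lightroom")):
        notes.append(f"Editing software recorded in the EXIF Software tag: {software}")
    if not (tags.get("Make") or tags.get("Model")):
        notes.append("No camera make/model recorded.")
    return notes
```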

Here’s what the US evidence codes say about authentication (there are variations in other countries, but the basic concept holds):

  • It starts with the person submitting the item. They (attorney, witness, etc.) swear / affirm that the image accurately depicts what it’s supposed to depict – that it’s a contextually accurate representation of what’s at issue.
  • This process of swearing / affirming comes with a bit of jeopardy. One swears “under penalty of perjury.” Thus, the burden is on the person submitting the item to be absolutely sure the item is contextually accurate and not “Photoshopped” to change the context. If they’re proven to have committed perjury, there are fines / fees and potentially jail time involved.
  • The person submits the file to support a claim. They swear / affirm, under penalty of perjury, that the file is authentic and accurately depicts the context of the claim.

Then, someone else cries foul. Someone else claims that the file has been altered in a specific way – item(s) deleted / added – scene cropped – etc.

It’s this specific allegation of forgery that is needed to test the claims. If there is no specific claim, then one is engaged in a “blind” authentication (attempting to prove a negative). Continue reading

Exposing fraudulent digital images

As a predominantly visual species, we tend to believe what we see. Throughout human evolution, our primary sense of sight has allowed us to analyse primeval threats. We are genetically hardwired to process and trust what our eyes tell us.

This innate hardwiring means that the arrival of digital images has posed a problem for the fraud investigation community. There are many different reasons why someone would want to maliciously alter a photo to ‘tell a different story’. Although photos can be manipulated with ease, many people still harbour a natural tendency to trust photos as a true and accurate representation of the scene in front of us.

The article published in Computer Fraud & Security describes how images may be altered and the techniques and processes we can use to spot photos that have been modified. With the right tools and training, exposing doctored images in fraud investigations is now not only financially and technically viable, but urgently necessary.

Read the full article here

Experimental validation of Amped Authenticate’s Camera Identification filter

We tested the latest implementation (Build 8782) of PRNU-based Camera Identification and Tampering Localization on a “base dataset” of 10,069 images, coming from 29 devices (listed in the table below). We split the dataset in two:
– Reference set: 1,450 images (50 per device) were used for CRP estimation
– Test set: 8,619 images were used for testing. On average, each device was tested against approximately 150 matching images and approximately 150 non-matching images.

It is important to understand that, in most cases, we could not control the image creation process. This means that images may have been captured using digital zoom or at resolutions different than the default one, which makes PRNU analysis ineffective. Making use of EXIF metadata, we could filter out these images from the Reference set. However, we chose not to filter out such images from the Test set: we prefer showing results that are closer to real-world cases, rather than tricking the dataset to obtain 100% performance.
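
A minimal sketch of this kind of pre-filter is shown below. It assumes Pillow is available, and both the EXIF tag handling and the idea of a fixed per-device "default resolution" are illustrative assumptions rather than the exact criteria we used.

```python
# Hypothetical reference-set filter (illustration only), assuming Pillow.
from PIL import Image

def keep_for_reference_set(path, default_size):
    """Discard images whose geometry no longer matches the sensor pixel grid."""
    img = Image.open(path)
    if img.size != default_size:
        return False  # non-default resolution implies resampling: PRNU misaligned
    exif_ifd = img.getexif().get_ifd(0x8769)  # Exif sub-IFD
    zoom = exif_ifd.get(0xA404)               # DigitalZoomRatio tag
    if zoom and float(zoom) > 1.0:
        return False  # digital zoom crops/rescales the sensor pattern
    return True
```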

Using the above base dataset, we carried out several experiments:
– Experiment 1) testing the system on images “as they are”
– Experiment 2) camera identification in the presence of rotation, resizing and JPEG re-compression (see the sketch after this list)
– Experiment 3) camera identification in the presence of cropping, rotation and JPEG re-compression
– Experiment 4) discriminating devices of the same model
– Experiment 5) investigating the impact of the number of images used for CRP computation.
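
As a rough illustration of how the transformed test images for experiments 2 and 3 can be produced, here is a small Pillow-based sketch; the actual parameters used in our tests are not reproduced here.

```python
# Illustrative variant generator (rotation, resizing, cropping, JPEG re-compression).
from PIL import Image

def make_variant(src, dst, angle=0, scale=1.0, crop_box=None, quality=90):
    img = Image.open(src).convert("RGB")
    if crop_box:
        img = img.crop(crop_box)                 # (left, upper, right, lower)
    if angle:
        img = img.rotate(angle, expand=True)     # rotation
    if scale != 1.0:
        w, h = img.size
        img = img.resize((int(w * scale), int(h * scale)))  # resizing
    img.save(dst, "JPEG", quality=quality)       # JPEG re-compression
```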

Continue reading

Trust? Can you really trust an image?

Some time ago, two images featured prominently in the initial reporting of Hurricane Harvey. The first was of a shark swimming along the Houston freeway. The second showed several airplanes virtually underwater at what was claimed to be Houston airport. These iconic images were circulated widely on Twitter and were featured on mainstream national media such as Fox News. There was just one small problem. Neither of them was real!

This situation prompts an important question. If this behaviour is widespread on social and traditional media then how do we know it isn’t also affecting police and court investigations? After all, if members of the public are prepared to manipulate images for the sake of a few likes and retweets, what will they be prepared to resort to when the stakes are much higher?

Read the full article published on Police Life.

PRNU-based Camera Identification in Amped Authenticate

Source device identification is a key task in digital image investigation. The goal is to link a digital image to the specific device that captured it, just like they do with bullets fired by a specific gun (indeed, image source device identification is also known as “image ballistics”).

The analysis of Photo Response Non-Uniformity (PRNU) noise is considered the prominent approach to accomplish this task. PRNU is a specific kind of noise introduced by the CMOS/CCD sensor of the camera and is considered to be unique to each sensor. Being a multiplicative noise, it cannot be effectively eliminated through internal processing, so it remains hidden in pixels, even after JPEG compression.

In order to test if an image comes from a given camera, first, we need to estimate the Camera Reference Pattern (CRP), characterizing the device. This is done by extracting the PRNU noise from many images captured by the camera and “averaging” it (let’s not dive too deep into the details). The reason for using several images is to get a more reliable estimate of the CRP, since separating PRNU noise from image content is not a trivial task, and we want to retain PRNU noise only.

After the CRP is computed and stored, we can extract the PRNU noise from a test image and “compare” it to the CRP: if the resulting value is over a given threshold, we say the image is compatible with the camera.
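
To make the idea concrete, here is a deliberately simplified sketch of the whole loop. It assumes NumPy and SciPy, uses a plain Gaussian filter as a stand-in for the wavelet denoisers normally used in the PRNU literature, and is not Amped Authenticate's implementation (real systems add many refinements, including a principled detection statistic and threshold).

```python
# Simplified PRNU sketch, assuming grayscale float arrays of identical size.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=2.0):
    """W = I - denoise(I): the PRNU signal is hidden in this residual."""
    return img - gaussian_filter(img, sigma)

def estimate_crp(images):
    """CRP estimate K ~ sum(W_i * I_i) / sum(I_i^2), from the multiplicative model."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img in images:
        w = noise_residual(img)
        num += w * img
        den += img * img
    return num / (den + 1e-8)

def compatibility_score(test_img, crp):
    """Normalized correlation between the test residual and CRP * test image;
    a score above a chosen threshold suggests compatibility with the camera."""
    a = noise_residual(test_img)
    b = crp * test_img
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```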

Camera identification through PRNU analysis has been part of Amped Authenticate for quite some time. However, many of our users told us that the filter was hard to configure and that results were not easy to interpret. So, at the end of last year, a new implementation of the algorithm was added (Authenticate Build 8782). The new features included:

Advanced image pre-processing during training
In order to lower the false alarm probability, we implemented new filtering algorithms to remove artifacts that are not discriminative because they are common to most digital cameras (e.g., artifacts due to Color Filter Array demosaicking interpolation).
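
For example, one well-known clean-up step from the PRNU literature is "zero-meaning" the reference pattern along rows and columns, which suppresses linear artifacts shared by many cameras. The sketch below shows that operation only as an illustration, not necessarily the exact filtering Authenticate applies.

```python
# "Zero-mean" clean-up from the PRNU literature (illustration only).
import numpy as np

def zero_mean_rows_cols(crp):
    """Subtract row and column means to suppress linear artifacts shared by
    many cameras (e.g., CFA/demosaicking patterns), keeping the camera-unique part."""
    crp = crp - crp.mean(axis=1, keepdims=True)
    crp = crp - crp.mean(axis=0, keepdims=True)
    return crp
```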

Continue reading

HEIF Image Files Forensics: Authentication Apocalypse?

If you follow the news from Apple you may have heard that the latest iOS 11 introduces new image and video formats.

More specifically, videos in H.264 (MPEG-4 AVC) are replaced by H.265 (HEVC) and photos in JPEG are replaced by the HEIF format.

Files in HEIF format have the extension “.heic” and contain HEVC-encoded photos. In a nutshell, a HEIF file is more or less a single-frame H.265 video. Here is a nice introduction. And, if you want to go more in depth, here is some more technical documentation.
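
As a quick, illustrative aside on how such files can be recognized at the container level: HEIF builds on the ISO Base Media File Format, so the major brand in the initial "ftyp" box already identifies it. The brand list in this sketch is partial and given only as an example.

```python
# Quick container-level check (illustrative; the brand list is partial).
def looks_like_heif(path):
    """HEIF/HEIC files are ISO BMFF containers: a 4-byte box size, then 'ftyp',
    then a major brand such as 'heic', 'heix', 'mif1' or 'msf1'."""
    with open(path, "rb") as f:
        header = f.read(16)
    return header[4:8] == b"ftyp" and header[8:12] in (b"heic", b"heix", b"mif1", b"msf1")
```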

For people like us, who have been working for years on image authenticity by exploiting the various characteristics of the JPEG format and the effects that appear when you resave a JPEG into another JPEG, this is pretty big – and somewhat worrying – news.

If you want to do image forensics in the real world – not in academia, where the constraints are usually quite different – the vast majority of images you will work with will be compressed in the JPEG format. A lot of filters in Amped Authenticate actually work only on JPEG files, because that’s the most common case. By contrast, a lot of the algorithms published in journals are almost useless in practical scenarios since their performance drops dramatically when the image is compressed.

JPEG has been on the market for ages, and many have tried to replace it with something better, with formats like JPEG 2000 and, more recently, Google WebP. However, with the decreasing costs of storage and bandwidth and the universal adoption of JPEG, it has proved impossible to displace. At the same time, video formats and codecs have progressed very rapidly, since storage and bandwidth for video are always an issue.

I think this time will be different, for better or worse, since when Apple introduces radical changes, the industry normally follows. This means a lot of work for those of us working on the analysis of image files. Nowadays the majority of pictures are taken on a mobile device, and a good share of those are Apple devices, so the impact cannot be neglected.

If the HEIC format becomes the new standard, many of the widely used algorithms will have to be heavily modified or replaced. Don’t expect to salvage many of them. After all, despite what some are saying, most image authentication and tampering detection algorithms don’t work on videos at all. The exception is a Motion JPEG video that has been modified and resaved as another Motion JPEG video. But that’s a very rare case, and most of the time the quality will be so low that the algorithms are unusable anyway.

Now let’s see what the situation is like in practice. Continue reading