Category Archives: Authenticate

Proving a negative

I have a dear old friend who is a brilliant photographer and artist. Years ago, when he was teaching at the Art Center College of Design in Pasadena, CA, he would occasionally ask me to substitute for him in class as he travelled the world to take photos. He would introduce me to the class as the person at the LAPD who authenticates digital media – the guy who inspects images for evidence of Photoshopping. Then, he’d say something to the effect that I would be judging their composites, so they’d better be good enough to fool me.

Last year, I wrote a bit about my experiences authenticating files for the City / County of Los Angeles. Today, I want to address a common misconception about authentication – proving a negative.

So many requests for authentication begin with the statement, “tell me if it’s been Photoshopped.” This request for a “blind authentication” asks the analyst to prove a negative. It’s a very tough request to fulfill.

In general, this can be established with a certain degree of certainty if the image is verified to be an original from a specific device, with no signs of recapture, possibly by also verifying the consistency of the sensor noise pattern (PRNU).

However, it is very common nowadays to work on images that are not originals but have been shared on the web or through social media, usually multiple times in succession. This implies that metadata and other information about the format are gone, and the traces of tampering – if any – have usually been covered by multiple steps of compression and resizing. So it is easy to tell that the picture is not an original, but it is very difficult to rely on pixel statistics to evaluate possible tampering at the visual level.

Here’s what the US evidence codes say about authentication (there are variations in other countries, but the basic concept holds):

  • It starts with the person submitting the item. They (attorney, witness, etc.) swear / affirm that the image accurately depicts what it’s supposed to depict – that it’s a contextually accurate representation of what’s at issue.
  • This process of swearing / affirming comes with a bit of jeopardy. One swears “under penalty of perjury.” Thus, the burden is on the person submitting the item to be absolutely sure the item is contextually accurate and not “Photoshopped” to change the context. If they’re proven to have committed perjury, there are fines and potentially jail time involved.
  • The person submits the file to support a claim. They swear / affirm, under penalty of perjury, that the file is authentic and accurately depicts the context of the claim.

Then, someone else cries foul. Someone else claims that the file has been altered in a specific way – item(s) deleted / added – scene cropped – etc.

It’s this specific allegation of forgery that is needed to test the claims. If there is no specific claim, then one is engaged in a “blind” authentication (attempting to prove a negative). Continue reading

Exposing fraudulent digital images

As a predominantly visual species, we tend to believe what we see. Throughout human evolution, our primary sense of sight has allowed us to analyse primeval threats. We are genetically hardwired to process and trust what our eyes tell us.

This innate hardwiring means that the arrival of digital images has posed a problem for the fraud investigation community. There are many different reasons why someone would want to maliciously alter a photo to ‘tell a different story’. Although photos can be manipulated with ease, many people still harbour a natural tendency to trust photos as a true and accurate representation of the scene in front of us.

The article published in Computer Fraud & Security describes how images may be altered and the techniques and processes we can use to spot photos that have been modified. With the right tools and training, exposing doctored images in fraud investigations is now not only financially and technically viable, but urgently necessary.

Read the full article here

Experimental validation of Amped Authenticate’s Camera Identification filter

We tested the latest implementation (Build 8782) of PRNU-based Camera Identification and Tampering Localization on a “base dataset” of 10,069 images coming from 29 devices (listed in the table below). We split the dataset into two:
– Reference set: 1450 images (50 per device) were used for CRP estimation
– Test set: 8619 images were used for testing. On average, each device was tested against approximately 150 matching images and approximately 150 non-matching images.

It is important to understand that, in most cases, we could not control the image creation process. This means that images may have been captured using digital zoom or at resolutions different from the default one, which makes PRNU analysis ineffective. Making use of EXIF metadata, we could filter out these images from the Reference set. However, we chose not to filter out such images from the Test set: we prefer showing results that are closer to real-world cases, rather than tuning the dataset to obtain 100% performance.
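As a rough illustration of that metadata filtering step (a hedged sketch, not our actual tooling): with a recent version of Pillow one can discard candidate reference images that report digital zoom or a non-default resolution. The tag IDs come from the EXIF specification; the native resolution and folder name below are placeholders.

```python
from pathlib import Path
from PIL import Image

EXIF_IFD = 0x8769            # pointer to the EXIF sub-IFD
DIGITAL_ZOOM_RATIO = 0xA404  # EXIF DigitalZoomRatio tag
NATIVE_SIZE = (4032, 3024)   # placeholder: the device's default capture resolution

def usable_for_reference(path):
    """Heuristic filter: keep only images safe for PRNU reference (CRP) estimation."""
    img = Image.open(path)
    if img.size != NATIVE_SIZE:                  # resized captures break pixel-level alignment
        return False
    exif = img.getexif().get_ifd(EXIF_IFD)
    zoom = exif.get(DIGITAL_ZOOM_RATIO)
    if zoom is not None and float(zoom) > 1.0:   # digital zoom also breaks alignment
        return False
    return True

reference_set = [p for p in Path("reference").glob("*.jpg") if usable_for_reference(p)]
```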

Using the above base dataset, we carried out several experiments:
– Experiment 1) testing the system on images “as they are”
– Experiment 2) camera identification in the presence of rotation, resizing and JPEG re-compression
– Experiment 3) camera identification in the presence of cropping, rotation and JPEG re-compression
– Experiment 4) discriminating devices of the same model
– Experiment 5) investigating the impact of the number of images used for CRP computation.

Continue reading

Trust? Can you really trust an image?

Some time ago, two images featured prominently in the initial reporting of Hurricane Harvey. The first was of a shark swimming along the Houston freeway. The second showed several airplanes virtually underwater at what was claimed to be Houston airport. These iconic images were circulated widely on Twitter and were featured on mainstream national media such as Fox News. There was just one small problem. Neither of them was real!

This situation prompts an important question. If this behaviour is widespread on social and traditional media then how do we know it isn’t also affecting police and court investigations? After all, if members of the public are prepared to manipulate images for the sake of a few likes and retweets, what will they be prepared to resort to when the stakes are much higher?

Read the full article published on Police Life.

PRNU-based Camera Identification in Amped Authenticate

Source device identification is a key task in digital image investigation. The goal is to link a digital image to the specific device that captured it, much as firearms examiners link bullets to the specific gun that fired them (indeed, image source device identification is also known as “image ballistics”).

The analysis of Photo Response Non-Uniformity (PRNU) noise is considered the most prominent approach to this task. PRNU is a specific kind of noise introduced by the CMOS/CCD sensor of the camera and is considered to be unique to each sensor. Being a multiplicative noise, it cannot be effectively eliminated through internal processing, so it remains hidden in the pixels, even after JPEG compression.

In order to test if an image comes from a given camera, first, we need to estimate the Camera Reference Pattern (CRP), characterizing the device. This is done by extracting the PRNU noise from many images captured by the camera and “averaging” it (let’s not dive too deep into the details). The reason for using several images is to get a more reliable estimate of the CRP, since separating PRNU noise from image content is not a trivial task, and we want to retain PRNU noise only.

After the CRP is computed and stored, we can extract the PRNU noise from a test image and “compare” it to the CRP: if the resulting value is over a given threshold, we say the image is compatible with the camera.
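To make the two steps concrete, here is a heavily simplified sketch of the textbook PRNU pipeline (in the spirit of Lukáš, Fridrich and Goljan) – it is not Amped Authenticate's implementation. A Gaussian filter stands in for the wavelet denoiser used in the literature, and images are assumed to be same-sized grayscale NumPy arrays from the camera under test.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """PRNU-bearing residual: the image minus its denoised version."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=2)

def estimate_crp(images):
    """Estimate the Camera Reference Pattern by averaging weighted residuals."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img in images:
        imgf = img.astype(np.float64)
        num += noise_residual(img) * imgf   # residuals weighted by image content
        den += imgf ** 2
    return num / (den + 1e-8)

def similarity(test_img, crp):
    """Normalized correlation between the test residual and CRP * image."""
    w = noise_residual(test_img)
    s = crp * test_img.astype(np.float64)
    w, s = w - w.mean(), s - s.mean()
    return float((w * s).sum() / (np.linalg.norm(w) * np.linalg.norm(s)))

# Decision rule: declare "compatible with the camera" when similarity(test, crp)
# exceeds a threshold calibrated on images from other cameras.
```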

Camera identification through PRNU analysis has been part of Amped Authenticate for quite some time. However, many of our users told us that the filter was hard to configure and results were not easy to interpret. So, at the end of last year, a new implementation of the algorithm was added (Authenticate Build 8782). The new features include:

Advanced image pre-processing during training
In order to lower the false alarm probability, we implemented new filtering algorithms to remove artifacts that are not discriminative because they are common to most digital cameras (e.g., artifacts due to Color Filter Array demosaicking interpolation).
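For a flavour of what such filtering can look like, the PRNU literature describes a “zero-mean” operation (Chen et al.) that suppresses periodic patterns shared across cameras, such as CFA demosaicking artifacts. The sketch below illustrates that published idea only; it is not a description of Amped's actual pre-processing.

```python
import numpy as np

def zero_mean(crp):
    """Suppress non-discriminative periodic artifacts (e.g., CFA demosaicking
    patterns) by removing column and row averages from the CRP."""
    crp = crp - crp.mean(axis=0, keepdims=True)   # zero out column means
    crp = crp - crp.mean(axis=1, keepdims=True)   # then row means
    return crp
```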

Continue reading

HEIF Image Files Forensics: Authentication Apocalypse?

If you follow the news from Apple you may have heard that the latest iOS 11 introduces new image and video formats.

More specifically, videos in H.264 (MPEG-4 AVC) are replaced by H.265 (HEVC), and photos in JPEG are replaced by the HEIF format.

Files in HEIF format have the extension “.heic” and contain HEVC-encoded photos. In a nutshell, a HEIF file is more or less a single-frame H.265 video. Here is a nice introduction. And, if you want to go more in depth, here is some more technical documentation.
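Because HEIF reuses the ISO Base Media File Format, a quick first triage is reading the leading “ftyp” box and checking its brands (“heic”, “mif1”, etc.). Here is a minimal sketch (the file name is a placeholder, and 64-bit extended box sizes are ignored for brevity):

```python
import struct

def iso_bmff_brands(path):
    """Return (major_brand, compatible_brands) from a file's 'ftyp' box, or None."""
    with open(path, "rb") as f:
        size, box_type = struct.unpack(">I4s", f.read(8))
        if box_type != b"ftyp":
            return None                       # not an ISO Base Media file
        payload = f.read(size - 8)            # major brand, minor version, brands
    major = payload[:4].decode("ascii")
    compatible = [payload[i:i + 4].decode("ascii") for i in range(8, len(payload), 4)]
    return major, compatible

print(iso_bmff_brands("IMG_0001.heic"))       # e.g. ('heic', ['mif1', 'heic'])
```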

For people like us, who have been working for years on image authenticity exploiting the various characteristics of the JPEG format and the effects that appear when you resave a JPEG as another JPEG, this is pretty big – and somewhat worrying – news.

If you want to do image forensics in the real world – not in academia, where the constraints are usually quite different – the vast majority of images you will work with will be compressed in the JPEG format. A lot of filters in Amped Authenticate actually work only on JPEG files, because that is the most common case. On the contrary, a lot of the algorithms published in journals are almost useless in practical scenarios, since their performance drops dramatically when the image is compressed.
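One simple example of a JPEG-specific trait that such filters can exploit is the quantization tables stored in the file: camera originals and editor re-saves often use different presets. A minimal sketch using Pillow (the file name is a placeholder, and the interpretation is a hint, not proof):

```python
from PIL import Image

def jpeg_quant_tables(path):
    """Return the quantization tables embedded in a JPEG file, or None."""
    img = Image.open(path)
    if img.format != "JPEG":
        return None
    return img.quantization        # dict: table id -> list of 64 coefficients

# Tables matching a known editor preset rather than the claimed camera's are a
# hint (not proof) that the file was re-saved after capture.
tables = jpeg_quant_tables("evidence.jpg")
```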

JPEG has been on the market for ages, and many have tried to replace it with something better, with formats like JPEG 2000 and, more recently, Google WebP. However, with the decreasing costs of storage and bandwidth and the universal adoption of JPEG, it has proved impossible to displace. In contrast, video formats and codecs have progressed very rapidly over the same period, since storage and bandwidth for video are always an issue.

I think this time will be different, for better or worse, since when Apple introduces radical changes, the industry normally follows. This means a lot of work for those of us working on the analysis of image files. Nowadays the majority of pictures are taken on a mobile device, and a good share of those are Apple devices, so the impact cannot be neglected.

If the HEIC format becomes the new standard, many of the widely used algorithms must be heavily modified or replaced. Don’t expect to salvage many of them. After all, despite what some are saying, most image authentication and tampering detection algorithms don’t work on videos at all. The exception is a Motion JPEG video modified and resaved as another Motion JPEG video. But that is a very rare case, and most of the time the quality will be so low that the results will be unusable anyway.

Now let’s see what the situation is like in practice. Continue reading

Cowboys versus Bureaucrats: Attitude and Tools

There were a couple of interesting discussions this week which prompted me to write this blog post. One is related to the scientific methods used during the analysis of images and videos, the other relates to the tools used.

There was a pretty interesting and detailed conversation that happened on an industry specific mailing list where a few experts debated about the scientific and forensic acceptability of different methodologies. This discussion began with the reliability of speed determination from CCTV video but then evolved into a more general discussion.

There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.

What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for the proper forensic procedure, the reliability of his methods and proper error estimation. Typical things the cowboy does:

  • To convert a proprietary video, he just maximizes the player on screen and does a screen capture, without being concerned about missing or duplicated frames.
  • Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
  • He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open source, free tools, commercial tools, plugins, more or less legitimate stuff, maybe some Matlab or Python script if he has the technical knowledge.
  • He will use whatever result “looks good” without questioning its validity or reliability.
  • If asked to document and repeat his work in detail he’ll be in deep trouble.
  • If asked the reason and validity of choosing a specific algorithm or procedure, he will say “I’ve always done it like this, and nobody ever complained”.
  • When asked to enhance a license plate, he will spell out the digits even if they are barely recognizable on a single P-frame and are probably just the result of compression artifacts amplified by post-processing.
  • When asked to identify a person, he will be able to do so with absolute certainty even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
  • When sending around results to colleagues he just pastes processed snapshots into Word documents.
  • When asked to authenticate an image, he just checks if the Camera Make and Model is present in the metadata.

Continue reading

Can you trust what you show in Court?

If you present an object, an image, or a story to a courtroom, you must be able to trust that it is accurate.

How then, do you trust an image – a digital photograph, a snapshot in time of an object, a person or a scene? Do you trust what the photographer says? Or do you check it? Do you attempt to identify any signs of manipulation that could cast doubt on the weight of the evidence?

How many members of the public are aware of the Digital Imaging Procedure? What about the guidance surrounding computer-based information, which includes digital images and video? What about the person receiving that file – perhaps the investigating officer? Are they aware of the importance of image authentication?

Is the Criminal Justice System naive to believe that fake images do not end up being displayed in court and presented as truth? Even if it is a rarity now, we need to think of the future. To start with, we must ask ourselves, “Can we rely on the image we see before us? Has it been authenticated?”

Read the article published by The Barrister magazine to learn about the importance of authenticating images before submitting them as evidence.

Altered images: The challenge of identifying fake photographs

Fake photographs have been around for almost as long as the camera, but in a digital age of photography, the ability to alter images has never been easier. EU Forensic Video Expert David Spreadborough examines the current challenges surrounding authenticating images.

Thanks to the latest administration in the USA, the term ‘fake news’ has become a popular way to describe an event fabricated on social media. The problem is that news agencies and websites find these invented stories and republish them, causing the fake story to spread and proliferate.

You may have seen this image recently during the G20 meeting of world leaders. Looks like a serious conversation. It may have been, but Putin was never there. Find a picture, create a story, ‘Photoshop’ the picture, then tweet it. The fake news cycle then starts. The more relevant the story, the quicker the spread.

The modification of images to tell a different story is nothing new; it has been happening since the early days of photography. A popular myth is that it is a problem created by the digital age – the photo of the Cottingley Fairies is an early counterexample – although I accept that digitisation has made things a lot easier and a lot more convincing.

Over the past few months, entwined between the ‘fake news’ stories, there have been several reports of manipulated images appearing in academic studies. It is easy to understand how people can be swayed to change a couple of images to validate a piece of research if it assists in winning a financial grant. Images in documents used to prove qualifications, and images proving the existence of large wild cats in southern England, have also recently been found to be fake, or maliciously manipulated. When someone fakes an image, it is simply to present an event in a different way from the original moment in time. Continue reading

Amped featured in Fraud Intelligence

Alan Osborn, from Fraud Intelligence, writes about the strong interest shown at the Forensics Europe Expo in the Trieste, Italy-based company Amped Software, whose technology enables the analysis, enhancement, and authentication of images and video. Amped told FI how it’s very easy to alter an image and change its context and meaning, but hiding the artifacts that are left behind is much harder.

Click here for the PDF version of the published article.