Category Archives: Authenticate

PRNU-based Camera Identification in Amped Authenticate

Source device identification is a key task in digital image investigation. The goal is to link a digital image to the specific device that captured it, much as forensic examiners link bullets to the gun that fired them (indeed, image source device identification is also known as “image ballistics”).

The analysis of Photo Response Non-Uniformity (PRNU) noise is considered the most prominent approach to this task. PRNU is a specific kind of noise introduced by the CMOS/CCD sensor of the camera and is considered unique to each sensor. Being a multiplicative noise, it cannot be effectively eliminated through internal processing, so it remains hidden in the pixels, even after JPEG compression.

In order to test if an image comes from a given camera, first, we need to estimate the Camera Reference Pattern (CRP), characterizing the device. This is done by extracting the PRNU noise from many images captured by the camera and “averaging” it (let’s not dive too deep into the details). The reason for using several images is to get a more reliable estimate of the CRP, since separating PRNU noise from image content is not a trivial task, and we want to retain PRNU noise only.

After the CRP is computed and stored, we can extract the PRNU noise from a test image and “compare” it to the CRP: if the resulting value is over a given threshold, we say the image is compatible with the camera.
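The pipeline described above can be sketched in a few lines. To be clear, this is a toy 1-D illustration, not Amped Authenticate's actual implementation: real PRNU extraction works on 2-D images with wavelet-based denoising and a carefully calibrated threshold, while the `denoise` moving-average filter and every function name below are illustrative assumptions.

```python
import math

def denoise(img):
    """Toy denoiser: 3-sample moving average (real systems use wavelet filters)."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - 1), min(len(img), i + 2)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def noise_residual(img):
    """PRNU-style residual: the image minus its denoised version."""
    return [p - d for p, d in zip(img, denoise(img))]

def estimate_crp(images):
    """Estimate the Camera Reference Pattern by averaging many residuals."""
    residuals = [noise_residual(img) for img in images]
    n = len(residuals)
    return [sum(col) / n for col in zip(*residuals)]

def correlation(a, b):
    """Normalized (zero-mean) correlation between a residual and the CRP."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def is_compatible(test_img, crp, threshold=0.5):
    """Decision rule: correlation above the threshold means 'compatible'."""
    return correlation(noise_residual(test_img), crp) > threshold
```

In practice the noise is multiplicative, residuals are weighted by image content, and the threshold comes from a false-alarm analysis; the sketch only shows the shape of the estimate-then-correlate pipeline.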

Camera identification through PRNU analysis has been part of Amped Authenticate for quite some time. However, many of our users told us that the filter was hard to configure and results were not easy to interpret. So, at the end of last year, we added a new implementation of the algorithm (Authenticate Build 8782). The new features include:

Advanced image pre-processing during training
To lower the false alarm probability, we implemented new filtering algorithms that remove artifacts which are not discriminative because they are common to most digital cameras (e.g., artifacts due to Color Filter Array demosaicking interpolation).


HEIF Image Files Forensics: Authentication Apocalypse?

If you follow the news from Apple, you may have heard that the latest iOS 11 introduces new image and video formats.

More specifically, videos in H.264 (MPEG-4 AVC) are replaced by H.265 (HEVC), and photos in JPEG are replaced by the HEIF format.

Files in HEIF format have the extension “.heic” and contain HEVC-encoded photos. In a nutshell, a HEIF file is more or less a single-frame H.265 video. There is a nice introduction here and, if you want to go more in depth, some more technical documentation here.

For people like us, who have been working for years on image authenticity by exploiting the various characteristics of the JPEG format and the effects that appear when you resave a JPEG as another JPEG, this is pretty big – and somewhat worrying – news.

If you want to do image forensics in the real world – not in academia, where the constraints are usually quite different – the vast majority of images you will work with will be compressed in the JPEG format. A lot of filters in Amped Authenticate actually work only on JPEG files, because that’s the most common case. On the contrary, a lot of the algorithms published in journals are almost useless in practical scenarios, since their performance drops dramatically when the image is compressed.

JPEG has been on the market for ages, and many have tried to replace it with something better, with formats like JPEG 2000 and, more recently, Google WebP. However, with the decreasing costs of storage and bandwidth and the universal adoption of JPEG, it has proved impossible to displace. In contrast, video formats and codecs have seen a very rapid progression over the same period, since storage and bandwidth for video are always an issue.

I think this time will be different, for better or worse, since when Apple introduces radical changes, the industry normally follows. This means a lot of work for those of us working on the analysis of image files. Nowadays the majority of pictures are taken on a mobile device, and a good part of those are Apple devices, so the impact cannot be neglected.

If the HEIC format becomes the new standard, many of the widely used algorithms must be heavily modified or replaced; don’t expect many of them to survive. After all, despite what some are saying, most image authentication and tampering detection algorithms don’t work on videos at all. The exception is a Motion JPEG video modified and resaved as another Motion JPEG video. But that’s a very rare case, and most of the time the quality will be so low that it will be impossible to use them anyway.

Now let’s see what the situation is like in practice.

Cowboys versus Bureaucrats: Attitude and Tools

There were a couple of interesting discussions this week which prompted me to write this blog post. One is related to the scientific methods used during the analysis of images and videos, the other relates to the tools used.

There was a pretty interesting and detailed conversation on an industry-specific mailing list, where a few experts debated the scientific and forensic acceptability of different methodologies. This discussion began with the reliability of speed determination from CCTV video but then evolved into a more general discussion.

There are two extreme approaches to how forensic video analysts work: let’s call one group the cowboys and the other the bureaucrats. I’ve seen both kinds of “experts” in my career, and – luckily – many different variations across this broad spectrum.

What is a cowboy? A cowboy is an analyst driven only by the immediate result, with no concern at all for proper forensic procedure, the reliability of his methods, or proper error estimation. Typical things the cowboy does:

  • To convert a proprietary video, he just does a screen capture maximizing the player on the screen, without being concerned about missing or duplicated frames.
  • Instead of analyzing the video and identifying the issues to correct, he just adds filters randomly and tweaks the parameters by eye, without any scientific methodology behind it.
  • He uses whatever tool may be needed for the job, recompressing images and videos multiple times, using a mix of open source, free tools, commercial tools, plugins, more or less legitimate stuff, maybe some Matlab or Python script if he has the technical knowledge.
  • He will use whatever result “looks good” without questioning its validity or reliability.
  • If asked to document and repeat his work in detail he’ll be in deep trouble.
  • If asked the reason and validity of choosing a specific algorithm or procedure, he will say “I’ve always done it like this, and nobody ever complained”.
  • When asked to improve a license plate he will spell out the digits even if they are barely recognizable on a single P frame and probably are just the result of compression artifacts amplified by postprocessing.
  • When asked to identify a person, he will be able to do so with absolute certainty even when comparing a low-quality CCTV snapshot with a mugshot sent by fax.
  • When sending around results to colleagues he just pastes processed snapshots into Word documents.
  • When asked to authenticate an image, he just checks if the Camera Make and Model is present in the metadata.


Can you trust what you show in Court?

If you present an object, an image, or a story to a courtroom, you must be able to trust that it is accurate.

How then, do you trust an image – a digital photograph, a snapshot in time of an object, a person or a scene? Do you trust what the photographer says? Or do you check it? Do you attempt to identify any signs of manipulation that could cast doubt on the weight of the evidence?

How many members of the public are aware of the Digital Imaging Procedure? What about the guidance surrounding computer based information, which includes digital images and video? What about the person that is receiving that file? Perhaps the investigating officer. Are they aware of the importance of image authentication?

Is the Criminal Justice System naive to believe that fake images do not end up being displayed in court and presented as truth? Even if it is a rarity now, we need to think of the future. To start with, we must ask ourselves, “Can we rely on the image we see before us? Has it been authenticated?”

Read the article published by The Barrister magazine to learn about the importance of authenticating images before submitting them as evidence.

Altered images: The challenge of identifying fake photographs

Fake photographs have been around for almost as long as the camera, but in a digital age of photography, the ability to alter images has never been easier. EU Forensic Video Expert David Spreadborough examines the current challenges surrounding authenticating images.

Thanks to the latest administration in the USA, the term ‘fake news’ has become a popular way of describing an event invented within social media. The problem is that news agencies and websites find these invented stories and republish them, causing the fake story to spread and proliferate.

You may have seen this image recently during the G20 meeting of world leaders. Looks like a serious conversation. It may have been, but Putin was never there. Find a picture, create a story, ‘Photoshop’ the picture, then tweet it. The fake news cycle then starts. The more relevant the story, the quicker the spread.

The modification of images to tell a different story is nothing new; it’s been happening since the early days of photography. A popular myth is that it’s a problem caused by the digital age. An example is the photo of the Cottingley Fairies. That said, I accept that digitisation has made manipulation a lot easier and a lot more convincing.

Over the past few months, entwined between the ‘fake news’ stories, there have been several reports of manipulated images appearing in academic studies. It is easy to understand how people can be swayed to change a couple of images to validate a piece of research if it assists in securing a financial grant. Images in documents used to prove qualifications, and images proving the existence of large wild cats in southern England, have all recently been found to be fake or maliciously manipulated. When someone fakes an image, it is simply to present an event differently from the original moment in time.

Amped Authenticate Update 9446: CameraForensics Integration, New Quantization Tables Database and Much More

We’ve just launched some pretty important additions to Amped Authenticate. Not only have we integrated it with CameraForensics, but we have also made major improvements to the quantization tables database, in addition to many other internal improvements. Read below for the details.

CameraForensics Integration

The main purpose of Amped Authenticate is to verify whether a picture is an original coming from a specific device or the result of manipulation with image editing software. One of the main tests for verifying file integrity is to acquire the camera assumed to have generated the photo (or at least the same model) and check whether its output format is compatible with the file under analysis.

While this sounds easy, in practice many devices have so many different settings that it can be challenging to recreate the same conditions. Furthermore, the camera is often not available.

What if we look on the web for pictures coming from a specific device? While we cannot, in general, guarantee the integrity of files downloaded from the web, we can triage them pretty easily and do a comparison with the image under analysis.

But how do you search for images on the web in an efficient manner? We have had “Search for Images from Same Camera Model…” in Authenticate for quite some time. It allows you to search on Google Images and Flickr, but the search is not always optimal, as it has to apply different workarounds to work efficiently in a forensic setting.

So, what if someone built a database of pictures on the web, optimized for investigative use, enabling you to instantly search for images coming from a specific device and with specific features such as resolution and JPEG quantization tables? Turns out the guys at CameraForensics did exactly this (and much more) and we partnered with them to provide a streamlined experience.
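To give a feel for what searching by JPEG quantization tables involves: the tables are stored in DQT marker segments inside the file, so they can be pulled out and compared directly. The parser below is a naive sketch written for this post (not CameraForensics' or Authenticate's code); it assumes a well-formed baseline JPEG and ignores corner cases such as a spurious 0xFFDB byte sequence appearing inside compressed data.

```python
import struct

def extract_quantization_tables(data):
    """Naive scan for DQT (0xFFDB) segments in a JPEG byte stream.
    Returns {table_id: [64 coefficients in zig-zag order]}."""
    tables = {}
    i = 0
    while True:
        i = data.find(b"\xff\xdb", i)
        if i < 0 or i + 4 > len(data):
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]  # payload after the length field
        j = 0
        while j < len(segment):
            precision = segment[j] >> 4      # 0 = 8-bit entries, 1 = 16-bit
            table_id = segment[j] & 0x0F     # typically 0 = luma, 1 = chroma
            j += 1
            if precision:
                vals = list(struct.unpack(">64H", segment[j:j + 128]))
                j += 128
            else:
                vals = list(segment[j:j + 64])
                j += 64
            tables[table_id] = vals
        i += 2 + length
    return tables
```

Since the encoder chooses the quantization tables, two images whose tables match are candidates for having come from the same device/software combination, which is exactly the kind of feature such a database can index.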

Let’s see how it works.

Understanding how online services change images

During a recent workshop on image authentication, I ran a few practical sessions. One concentrated on the changes that online services and social media platforms make to the images we upload. It turned out to be an interesting experiment, and the topic has seen some structured research over the past few years.

These are excellent starting resources when developing any internal Standard Operating Procedure:

A Classification Engine for Image Ballistics of Social Data: https://arxiv.org/abs/1610.06347

A Forensic Analysis of Images on Online Social Networks: http://ieeexplore.ieee.org/abstract/document/6132891/

Why is this important?


Are Amped Software products validated or certified officially for forensic use?

We work in the field of forensic video analysis, which is generally intended as the analysis of the images themselves and their context in a legal setting. For this reason, our customers often ask us if our products are valid for court use and if they have been validated and certified. We have written this post as an answer to the most common questions related to this topic.

You can also download this as a PDF document here.


What are the scientific foundations of Amped Software products?

All the processes implemented in our software follow the scientific method and respect these basic principles:

  1. Accuracy (Reliability): our tools and training program help users avoid processing errors caused by the implementation of an inappropriate tool or workflow and help mitigate the impact of human factors / bias.
  2. Repeatability: the same process, executed by the same user at a different time, must lead to the same result. The project format in Amped FIVE, for example, does not save any image data. Every time a project is reopened, all the processing happens again starting from the original data. In the event that a project file is lost or as a part of a validation or other test scenario, the same user can repeat the steps and settings, guided by the tool’s report, and achieve the same results.
  3. Reproducibility: another user with the proper competency should be able to reproduce the same results. Amped FIVE generates a complete report detailing all the steps of the processing, the settings / parameters applied, a description of the algorithms employed in the processing, and the scientific references for those algorithms (when applicable). In this way, another user, with a different tool set or by implementing the same algorithms, should be able to reproduce the same results. Given the huge number of implementation details and possible differences, a bit-by-bit copy of the results is not expected, only an image of similar informative content.

Additionally, we apply strict due diligence regarding the applicability of the algorithms to the forensic environment. Not every algorithm is, in fact, properly applicable in a forensic science setting. We cannot freely use algorithms which have a random component, because they would not be repeatable and reproducible (when we do use them, we set a fixed seed for the random number generation), and we cannot use algorithms which “add” external data to the original, for example improving the quality of a face with information taken from an average face. All information is derived from the actual evidence file.
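The fixed-seed point is easy to illustrate. This generic sketch (not Amped's actual code; the function name and parameters are made up for the example) shows how seeding a private random generator turns an otherwise non-repeatable random step into a deterministic one:

```python
import random

def sample_blocks(num_blocks, k, seed=1234):
    """Pick k block indices 'at random' for analysis.
    A fixed seed makes the selection repeatable: the same inputs
    always produce the same output, run after run."""
    rng = random.Random(seed)  # private generator, unaffected by global state
    return sorted(rng.sample(range(num_blocks), k))
```

Because the seed is part of the documented settings, a second examiner running the same step gets the exact same selection, preserving repeatability and reproducibility.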

We employ algorithms which have been validated by the scientific community through peer review, in university textbooks, scientific publications, or conference papers. If, for some specific task, there are no good enough algorithms available, or we need to adapt existing algorithms, we describe the algorithm and attempt to publish it in scientific journals.

Is that image authentic?

With digital images, people are starting to ask the question – “is it authentic?”

My first digital camera was probably from around 1997/8 – that’s nearly 20 years ago! It was a Canon and stored its tiny images on a CF card. It was pretty heavy and bulky, but a huge step up from the first Kodak prototypes of the 1970s.

Those had to store an image onto a cassette tape!

In 1990, a few years before my first adventures into digital imaging, Adobe released Photoshop for the Mac.

Take a look at the digital photography timeline to learn more:

http://www.practicalphotographytips.com/practicalphotographytips/history-of-digital-photography.html

This little trip down memory lane has revealed that for over 25 years, people have been able to easily capture and edit digital images. We have reached a point where high-quality images can be captured quickly, edited, and then shared within a few clicks of a mouse or taps on a screen. It’s no wonder, then, that during this digital generation, people have also learned how easy it is to change a picture for unlawful reasons.

You are, most likely, from within the investigative community, so you can probably think of many different reasons why someone would want to ‘tell a different story’. A digital image can be manipulated to reinforce that story, and up until now, many people have trusted the image as being a true and accurate representation.