
Deepfake Detection Quiz Results – Can You Spot Deepfakes with Your Eyes?

Reading time: 3 min

We conducted a deepfake detection quiz with both users and forensic experts to assess their ability to distinguish between real images and fake ones with the naked eye. Check the results below.

[Image: a collage illustrating the theme “Can You Detect Deepfakes?”, including a modern interior, a portrait of a woman, a motorcycle, a snowy landscape, an indoor space, and an image of dolphins.]

AI-generated images are becoming disturbingly realistic, making it increasingly difficult to tell real photos from fake ones with the naked eye.

That’s why we created a quiz featuring 20 images, some real, some synthetically generated, and asked both everyday users and forensic experts to guess which were fake.

There was only one rule: use your eyes only!

The test was conducted on users’ smartphones to simulate a common scenario: scrolling through images on a social network. 
After one year, we collected more than 800 responses! Now it’s time to look at the results.

Results in a Nutshell

The test was simple: for each image, participants guessed “Real” or “Fake”. Each correct answer earned one point, for a possible score ranging from 0 to 20.

Below is the histogram of the collected responses, showing how many users achieved each score.

[Figure: histogram of the distribution of quiz scores, showing the number of respondents for each score and an average score of 11.25 out of 20.]

It’s not very surprising to see that the average score was around 11 out of 20, a mean accuracy of about 55%. Unfortunately, that’s not very encouraging: such performance could almost be matched by flipping a coin for each image and answering “Fake” whenever it lands on tails!
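To put that in numbers, here is a minimal Python sketch (an illustration, not part of the original study; the participant count and the 11.25 average come from the figures above) simulating participants who flip a fair coin on every image. A pure guesser averages 10 out of 20, only about one point below the observed quiz average.

```python
import random

N_QUESTIONS = 20
N_PARTICIPANTS = 800  # roughly the number of responses collected

# Simulate participants who flip a fair coin for every image:
# each answer is right with probability 0.5.
random.seed(42)
scores = [
    sum(random.random() < 0.5 for _ in range(N_QUESTIONS))
    for _ in range(N_PARTICIPANTS)
]

mean_score = sum(scores) / len(scores)
print(f"Coin-flip mean score: {mean_score:.2f} / {N_QUESTIONS}")  # close to 10
print(f"Observed quiz mean:   11.25 / {N_QUESTIONS}")
```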

These results suggest that we’re not particularly skilled at spotting AI-generated images with the naked eye. Something in these synthetic visuals is clearly tricking our perception.

Another interesting finding is that over half of the participants scored between 9 and 12, corresponding to an accuracy of roughly 45% to 60%. In other words, a large portion of users performed no better than random guessing.
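Under pure guessing, the score follows a Binomial(20, 0.5) distribution, and an exact calculation (again, just an illustrative sketch) shows that a coin-flipper lands in the 9–12 band about 62% of the time. That band is precisely where chance alone would place most people.

```python
from math import comb

n, p = 20, 0.5  # 20 quiz questions; a pure guesser is right 50% of the time

# Exact binomial probability that a coin-flipper's score falls in the
# 9-12 band, where over half of the real participants landed.
band = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(9, 13))
print(f"P(9 <= score <= 12) under pure guessing: {band:.1%}")  # ~61.7%
```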

It is worth noting that some images were misjudged by most users. Here are the two most challenging ones:

[Image: left, a photograph of a grey bag placed on a damaged wooden bench surrounded by fallen leaves; right, an AI-generated image of two swimming dolphins.]

  • On the left side, the “bokeh mode” was used to photograph a bag on a damaged bench. The result is undeniably awkward: a blurred background paired with unexpected content. Only 18% of users identified it correctly. Perhaps that’s because it’s rare to see a bench like this in real life. In this case, the unrealistic scene led users to believe the photo was AI-generated.

  • On the right side, we see a synthetic image of two swimming dolphins. Only 20% of participants recognized it as artificial, while the rest were likely deceived by the assumption that AI-generated images always appear highly detailed and realistic.

Conclusions

The hard truth is that we can’t really rely on our eyes to detect deepfakes. More than 800 participants, with diverse technical backgrounds, performed barely better than a coin flip at identifying AI-generated images.

It’s also worth noting that the test used images created over a year ago. Since then, the quality of synthetic images has improved dramatically, making the challenge even more difficult today.

We also observed some consistent patterns in user behavior. Many participants seemed to believe that AI can produce only high-quality, polished images, overlooking the fact that modern tools can now generate low-resolution CCTV footage or vintage-style images as well.

Similarly, users tended to label as fake any image that appeared unusual or unlikely in real life, suggesting that content expectations play a strong role in their judgments.

Curious to learn more about the traces left by deepfakes?
A good starting point is our blog post “10 ways to detect deepfakes”. Once you’ve gone through them all, you can take a more scientific approach by exploring the deeper analysis in Deepfake Forensics.


 Massimo Iuliani

Massimo Iuliani has been a member of the Amped Software team as a Forensic Analyst and Trainer since September 2023. In 2017, Massimo earned a Ph.D. in Mathematics focused on image and video authentication. Before joining Amped, he worked for ten years in the Department of Information Engineering at the University of Florence on research projects funded by the European Commission and DARPA, all related to the authentication and reverse engineering of multimedia content. He has co-authored over 20 papers in peer-reviewed journals and conferences on multimedia forensics and has provided testimony in court as an expert witness in Italy for the analysis and interpretation of image, video, and audio evidence. Outside the forensic environment, Massimo is a music lover and producer, always looking for new links between sounds and feelings. He supports musicians and songwriters in finding their true sound and bringing their music to light.
