Recognize deepfakes: This is how you outsmart the computer



Although deepfakes, faceswaps and computer-generated images are getting better, there are fortunately still ways to recognize them. Detection software also offers a helping hand: if one algorithm can generate fake images, another algorithm can be trained to spot them. A lesson in recognizing deepfakes.

The most important skill for recognizing a deepfake video (or photo) is common sense. The worst thing you can do is take unconfirmed images at face value and immediately share them on social media; you may be spreading fake news, with all the consequences that entails. Do not accept everything as true right away, but look for confirmation that what (or who) can be seen in the images is genuine, especially if it is unusual.

Is it conceivable that this person would do or say this? Is there anyone who was there and can confirm it? Is the source hosting or sharing the image reliable? Is there footage from a different point of view? Are there fact-checks from independent parties confirming that the image is likely real? Unfortunately, we live in an era in which we can no longer immediately accept everything as true. Good independent journalism is becoming ever more important.

Spotting mistakes

Although deepfakes are getting better, they sometimes still contain visible errors. Pay special attention to the face while watching, as faceswaps are the most common type of deepfake video. Do the eyes, eyebrows and mouth move naturally? Does the skin look too smooth or too wrinkly? Does the age match the hair? Do the shadows and light on the face look realistic? Does part of the image stutter every now and then?

Pay particular attention when a head moves or turns left or right, because those are the moments when merging faces often produces visible errors. A faceswap works best in a scene that is as static as possible, with a frontal view: the person looks straight into the camera and barely moves.

It is easy to place a person in front of a different background, but this is often visible in the image: the person is sharper or less sharp than the background, for example, or a contour is visible around them. Finally, it makes sense to pay attention to blinking. In moderate-quality deepfakes, for instance, a person blinks less often than normal. And if the quality of the image is poor, alarm bells should ring anyway.
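That blink check can even be automated. Below is a minimal sketch of the common eye-aspect-ratio (EAR) approach: it assumes you already have six landmark points per eye from some face landmark detector, and the 0.2 threshold and helper names are illustrative assumptions, not part of any particular tool.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six eye landmarks (two horizontal corner points,
    four vertical eyelid points): the ratio drops sharply when
    the eye closes."""
    a = np.linalg.norm(eye[1] - eye[5])  # upper-lower lid distance 1
    b = np.linalg.norm(eye[2] - eye[4])  # upper-lower lid distance 2
    c = np.linalg.norm(eye[0] - eye[3])  # corner-to-corner distance
    return (a + b) / (2.0 * c)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values: a blink
    is a run of at least `min_frames` consecutive frames below
    `threshold`."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)
```

A talking head filmed for a few minutes normally blinks dozens of times; a clip in which such a counter stays near zero deserves extra suspicion, although that alone proves nothing.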

For photos that, unlike faceswaps, are generated entirely by an algorithm, there are a number of telltale signs. The background can sometimes look too random, without a clearly recognizable landscape or interior. Such a background often looks artificially sharp or contains illogical areas that alternate between sharp and blurry.

It also sometimes happens that the AI conjures up a second person, or something like an oddly positioned hand. Hair frequently goes wrong too: it may pick up a color from the background, for example, or have the wrong sharpness. Plain backgrounds are the easiest for an AI to create, but that is precisely why they are suspect: most photos are not taken in a sterile photo studio. Such photos do exist, though, and those make recognition very difficult.
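Some of these generation artifacts also show up statistically. Research on GAN-generated images has found that the upsampling layers of the generator tend to leave periodic traces in the frequency spectrum. The sketch below is a rough illustration of that idea rather than a reliable detector: it computes the azimuthally averaged power spectrum of a grayscale image, in which such traces appear as unusual bumps at high frequencies. The function name and comparison procedure are assumptions for the example.

```python
import numpy as np

def radial_power_spectrum(gray: np.ndarray) -> np.ndarray:
    """Azimuthally averaged power spectrum of a 2-D grayscale image.
    GAN upsampling artifacts tend to show up as bumps near the
    high-frequency end of this 1-D profile."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # average the power over all pixels at the same integer radius
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)
```

In published detection work, this simple 1-D profile, compared against profiles from known camera originals or fed to an ordinary classifier, already separates many generated images from real photos.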

Calling in the tools

There is a (technical) solution for every problem. In recent years, more and more tools have appeared that can detect whether an image is authentic or not. In other words: software tries to detect whether other software has contributed to the creation or manipulation of the image.

An example is the ‘Coalition for Content Provenance and Authenticity’ (C2PA): an alliance of Adobe, ARM, Intel, Microsoft and Truepic, together with various media companies such as the BBC and The New York Times. Through ‘Project Origin’, media makers can verify whether images are real or fake. Such software cannot be downloaded publicly and often comes with a payment model.

Other companies active in this area are Sensity and Deepfact from 3DUniversum. The latter company was founded by Prof. dr. Theo Gevers of the University of Amsterdam, who teaches Deep Learning and Artificial Intelligence there. He recently said the following about this: “We analyze about 50,000 points on a face and the software then shows which parts correspond to reality and which do not. The tricky part is that there are many different methods of creating deepfake videos and every year two or three new ones are added that are even more sophisticated.”

The software looks for various flaws, such as eyes that don’t blink realistically, differently colored eyes, earrings that aren’t symmetrical, and a background that doesn’t match. The algorithm is trained on a dataset of more than 10 million fake images. It also explains why an image is considered fake, making it easier for people to spot a deepfake themselves (without needing an algorithm).
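The underlying recipe is conceptually simple: collect a large labeled set of real and fake images and train a discriminative model on it. The sketch below, a minimal illustration rather than 3DUniversum’s actual software, fine-tunes a standard image classifier with PyTorch and torchvision (a recent version is assumed); the data/train folder layout is a made-up example.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed folder layout: data/train/real/*.jpg and data/train/fake/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace its head with a
# single real-vs-fake decision (two classes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one training epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Production systems add much more (face cropping, checking consistency across video frames, and the kind of per-region explanation Gevers describes), but the core ingredients remain a large labeled dataset and a trained classifier.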

Take the quiz

3DUniversum has developed a quiz with which you can test whether you recognize fake images. You are presented with twelve faces and have to judge whether they are real. It starts with a single image and ends with a series of four photos of which only one is real. After each answer you see which one it is.

Not only do you learn what to pay attention to in computer-generated portrait photos, you also see how difficult it is to distinguish real from fake. Of the more than 32,000 quizzes completed at the time of writing, fewer than 0.1% had everything right.
