A new system against deepfakes

Digital deepfakes can be used to falsify videos and images. © Artemis Diana/iStock

Technical advances have made it ever easier to manipulate images and videos and create so-called deepfakes. Politicians such as Ukrainian President Volodymyr Zelenskyy are increasingly targeted. Two researchers have now developed a system designed to expose faked videos of such high-profile figures. It is based on adaptive algorithms that learn numerous biometric, linguistic and gestural characteristics of the target person from authentic video material and then recognize even small deviations from them. In tests, the system identified genuine recordings of President Zelenskyy with 99.99 percent accuracy and exposed the deepfakes.

Every day we look countless people in the face. Being able to distinguish and recognize the faces of our fellow human beings is essential for our social behavior. No wonder, then, that we have a well-developed ability to recognize faces and that our brain even has dedicated centers for face recognition: one look is usually enough to know who we are looking at. New digital technologies, however, are making this less reliable, because the person behind a face in a video is not always the person to whom that face belongs in real life. With so-called deepfakes, faces are simply swapped and people are made to say things they never said.

Increasing risk of deepfakes targeting politicians

Such deepfakes have already been used in fraud and blackmail attempts, but they are also misused for political purposes. In March 2022, for example, shortly after the start of the Ukraine war, a video appeared on the Internet in which Ukrainian President Volodymyr Zelenskyy seemed to declare defeat in the war against Russia and announce a surrender. Although the video was soon exposed as a deepfake because of its rather crude audio and video quality, this happened only after it had already been shared on social media and had even appeared on Ukrainian television. Three months later, another deepfake tricked the mayors of Berlin, Madrid and Vienna into believing they were speaking in a video conference with Vitali Klitschko, the mayor of Kyiv. "These recent events are just the beginning of a new wave of deepfake attacks on recorded and live video," explain Hany Farid of the University of California, Berkeley, and Matyas Bohacek of the Johannes-Kepler-Gymnasium in Prague.

The further the technology behind such deepfakes advances, the harder it becomes for human perception to spot the fakes. Various technologies are therefore already being used to identify deepfakes. Common methods search the video files for artifacts, such as those produced when false mouth movements or faces are inserted. Other approaches train artificial intelligence systems to distinguish fakes from genuine footage. Biometric comparisons are also used to verify the identity of the people shown. For this, however, the corresponding characteristics of the real person must first be recorded and precisely evaluated, which has so far been a relatively complex process.
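
A minimal sketch of the second of these approaches is shown below: a classifier is trained to separate real from fake material, assuming frame-level feature vectors (for example, embeddings of face crops) have already been computed by some upstream extractor. The placeholder data, feature dimension and model choice are illustrative assumptions, not the method of any of the tools discussed here.

```python
# Toy sketch of the "train an AI system to tell fakes apart" idea.
# Assumption: each video frame has already been turned into a fixed-size
# feature vector by a (hypothetical) upstream face/feature extractor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
real_frames = rng.normal(loc=0.0, scale=1.0, size=(500, 64))  # placeholder features
fake_frames = rng.normal(loc=0.5, scale=1.0, size=(500, 64))  # placeholder features

X = np.vstack([real_frames, fake_frames])
y = np.array([0] * 500 + [1] * 500)  # 0 = real frame, 1 = fake frame

# Train a simple binary classifier on the frame features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("toy training accuracy:", clf.score(X, y))
```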

Learning algorithms and Zelenskyy's peculiarities

According to Farid and Bohacek, however, the identity-based approach is worthwhile, especially for highly exposed individuals such as government officials of large nations, and it is made easier by the abundant video footage available of these individuals. "Therefore, when it comes to protecting such world-class politicians, we believe an identity-based approach is the most sensible and robust approach," the researchers state. Using adaptive algorithms, they developed such a system and, prompted by current events, used Ukrainian President Zelenskyy as a test case. To first identify characteristic features that are particularly suitable for recognition, the scientists analyzed a total of 506 minutes of video footage showing Zelenskyy in public speeches, at press conferences and in his own video messages. "In our experience, at least eight hours of video material are required for such analyses," explain Farid and Bohacek. The adaptive analysis system evaluated biometric data on facial and body movements as well as vocal and linguistic peculiarities.
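
As a rough illustration of how such behavioral features can be derived from tracked video, the sketch below turns one segment of per-frame movement signals (head pose angles, mouth opening, arm positions and the like) into a vector of pairwise correlations. The helper name, the choice of 40 signals (picked only because 40·39/2 = 780 matches the feature count reported below) and the segment length are assumptions made for illustration, not the authors' published code.

```python
# Hedged sketch: build a behavioral "fingerprint" for one video segment,
# assuming per-frame movement signals were already extracted by a pose/face
# tracker. Signal count and segment length are illustrative assumptions.
import numpy as np

def behavioral_fingerprint(signals: np.ndarray) -> np.ndarray:
    """signals: array of shape (n_signals, n_frames) for one segment.

    Returns one Pearson correlation per pair of signals, i.e. the upper
    triangle of the correlation matrix flattened into a feature vector.
    """
    corr = np.corrcoef(signals)            # (n_signals, n_signals)
    iu = np.triu_indices_from(corr, k=1)   # indices above the diagonal
    return corr[iu]

# Synthetic example: 40 signals tracked over 300 frames (about 10 s at 30 fps).
rng = np.random.default_rng(0)
segment = rng.standard_normal((40, 300))
features = behavioral_fingerprint(segment)
print(features.shape)  # (780,)
```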

These analyses yielded a set of 780 features that characterize the Ukrainian president. Based on these features, the algorithms were able to distinguish him from four deepfakes and 250 comparison persons with an accuracy of 99.99 percent. Alongside many idiosyncrasies of little individual significance, some of the identified features each contributed more than ten percent to the accuracy of the identification: "The most prominent feature is President Zelenskyy's tendency to gesture with his left arm while his right arm hangs down at his side," the researchers report. "That creates a strong correlation between the movement of his right elbow and right shoulder as he sways from side to side." An asymmetry in Zelenskyy's smile also proved informative. "Such highly specific correlations may make it difficult for deepfakers to fully capture and reproduce the individual mannerisms in a person's behavior," the researchers say.
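
A single such correlation can already serve as a plausibility check. The hypothetical sketch below compares the elbow-shoulder correlation measured in a suspect clip against reference statistics assumed to come from verified footage of the target person; the threshold and reference values are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: flag a clip whose right-elbow / right-shoulder movement
# correlation deviates strongly from values seen in authentic footage.
# The reference mean/std and the z-score threshold are assumptions.
import numpy as np

def movement_correlation(elbow_x: np.ndarray, shoulder_x: np.ndarray) -> float:
    """Pearson correlation between two per-frame position traces."""
    return float(np.corrcoef(elbow_x, shoulder_x)[0, 1])

def flag_if_atypical(corr_value: float,
                     authentic_mean: float = 0.85,  # assumed reference statistic
                     authentic_std: float = 0.05,   # assumed reference statistic
                     z_threshold: float = 3.0) -> bool:
    """True if the clip's correlation is an outlier relative to the reference."""
    z = abs(corr_value - authentic_mean) / authentic_std
    return z > z_threshold

# Synthetic example: in a faked clip the two traces barely co-vary.
rng = np.random.default_rng(1)
elbow = rng.standard_normal(300)
shoulder = 0.1 * elbow + rng.standard_normal(300)
corr = movement_correlation(elbow, shoulder)
print(round(corr, 2), flag_if_atypical(corr))  # low correlation -> flagged
```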

Farid and Bohacek therefore consider it useful and promising to build such detection models specifically for important figures in politics and public life in order to unmask deepfakes quickly. To avoid giving forgers the chance to develop countermeasures, they have not made their algorithms publicly available. "We will, however, make our classifiers available to reputable media and government agencies to help them combat disinformation campaigns," the researchers said.

Source: Hany Farid (University of California, Berkeley) and Matyas Bohacek (Johannes-Kepler-Gymnasium, Prague), Proceedings of the National Academy of Sciences, doi: 10.1073/pnas.2216035119
