Researchers have been trying to develop a reliable lie detector for more than 100 years, so far without success. The idea of exposing liars with measurements that are as objective as possible is now gaining fresh momentum thanks to artificial intelligence. Pilot tests have been carried out at some of the EU's external borders to identify travelers with malicious intentions. But researchers warn against such applications: the technology is unreliable and non-transparent, and, like all previous approaches, it suffers from a fundamental problem. To date, there is no sound theoretical basis for the assumption that lies can be recognized from physical signals at all.
When the fairytale character Pinocchio lies, everyone can see it: with every lie, his wooden nose grows a little longer. In reality, however, there is currently no reliable evidence that can be used to detect lies. So-called lie detectors measure changes in skin resistance, breathing rate and heart rate on the assumption that lying causes measurable arousal. Although there is no scientific basis for such tests and numerous misjudgments demonstrate their unreliability, they are still used in court in several US states.
Virtual border guard?
“Outside of books and films, there is no such thing as Pinocchio’s nose,” emphasize psychologists Kristina Suchotzki from the University of Marburg and Matthias Gamer from the University of Würzburg in a recent publication. “There is no strong behavioral evidence to show who is lying and who is telling the truth, and no physiological or neural signature has been identified that clearly indicates deception.”
Yet despite this lack of a theoretical basis, new projects are using artificial intelligence to expose liars based on external cues. Suchotzki and Gamer cite the EU project iBorderCtrl as an example, which has already been piloted at the EU's external borders in Hungary, Greece and Latvia. A virtual border officer equipped with artificial intelligence asks travelers several questions about their identity and travel plans and is supposed to use facial movements transmitted via webcam to detect whether someone is lying or concealing malicious intentions.
Opaque, biased and unreliable
Suchotzki and Gamer view this and similar projects very critically. “Unfortunately, there are several pervasive problems in current AI-powered deception detection research, including lack of explainability and transparency, risk of bias, and deficiencies in theoretical foundation,” they explain. Because such AI systems usually operate as black boxes, it is impossible to understand on what basis they arrive at their results. “Even those who developed the algorithms themselves often cannot, at a certain point, explain how a judgment is generated from a given set of input variables,” they write. “This makes it impossible to understand accurate classifications and to work out the reasons for incorrect classifications.”
Depending on the training data used, systematic biases can also creep in. If, for example, real cases in which people were convicted in court are used for training, human biases can be transferred to the AI, for instance if people of a certain gender or skin color were found guilty more often. Laboratory data in which test subjects were explicitly instructed to lie, on the other hand, may transfer poorly to real-world practice.
Understanding mechanisms of deception
According to Suchotzki and Gamer, how well unnecessary false positives can be avoided, that is, how well innocent people can be kept from being classified as liars, also depends on the setting in which such systems are used. “Mass screening applications often involve very unstructured and uncontrolled assessments. This dramatically increases the number of false positives,” they write. From the researchers’ point of view, border controls using AI lie detectors are therefore more problematic than their use in the structured interrogation of suspects in a criminal case. In the latter setting, it is easier to identify possible alternative explanations for a given classification and, after careful consideration, perhaps rule them out.
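To see why unstructured mass screening is so prone to false alarms, it helps to spell out the base-rate arithmetic. The short Python sketch below uses purely illustrative numbers (the prevalence, sensitivity and specificity values are assumptions for the sake of the example, not figures from Suchotzki and Gamer): even a detector far more accurate than anything demonstrated so far would flag many more innocent travelers than actual liars when deceptive intent is rare.

```python
# Illustrative base-rate calculation (all numbers are assumptions, not from the paper):
# when the trait being screened for is rare, most alarms are false alarms.

def screening_outcomes(n_travelers, prevalence, sensitivity, specificity):
    """Return (true positives, false positives) for a hypothetical screening scenario."""
    liars = n_travelers * prevalence
    truthful = n_travelers - liars
    true_positives = liars * sensitivity             # liars correctly flagged
    false_positives = truthful * (1 - specificity)   # truthful travelers wrongly flagged
    return true_positives, false_positives

# Hypothetical scenario: 10,000 travelers, 0.1 % with deceptive intent,
# a detector that catches 80 % of liars and correctly clears 90 % of truthful travelers.
tp, fp = screening_outcomes(10_000, 0.001, 0.80, 0.90)
print(f"correctly flagged liars:   {tp:.0f}")   # about 8
print(f"wrongly flagged innocents: {fp:.0f}")   # about 999
```

Under these assumed numbers, roughly 99 percent of the people flagged by the system would be innocent, which is the effect the researchers describe for unstructured mass screening.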
However, a fundamental problem remains: “The use of artificial intelligence in lie detection is based on the assumption that it is possible to identify a clear indication, or a combination of indications, of deception,” says Suchotzki. But a century of research has not been able to establish, even theoretically, that such indications exist. It is precisely this point that could make AI in lie detection at least scientifically interesting: “Data-supported models could serve as a first step to gain more insight into the psychological mechanisms of deception,” write Suchotzki and Gamer. “Such an exploratory approach may be possible as long as it does not entail any direct conclusions for the application.”
Source: Kristina Suchotzki (University of Marburg) et al., Trends in Cognitive Sciences, doi: 10.1016/j.tics.2024.04.002