Those who firmly believe in conspiracy theories are usually not easily dissuaded. A study now suggests that artificial intelligence may be able to change their minds, at least to some extent. In the experiments, more than 2,000 people who believed in clearly refuted false claims interacted with an AI chatbot that provided them with tailored, sourced counter-evidence. This reduced the subjects’ belief in the false claims by an average of 20 percent. According to the researchers, the results suggest that arguments and facts can be more effective in discussions with conspiracy theorists than previously thought.
When it comes to the role of artificial intelligence in connection with conspiracy theories, the focus usually falls on the risks of the new technology: it makes it easier than ever for malicious actors to create fake content and use it to deceive and manipulate people. And because large language models tend to phrase their statements in a competent-sounding way, AI-generated misinformation often seems particularly convincing, even when the content has no basis in fact.
Receptive to facts after all?
A team led by Thomas Costello from the Massachusetts Institute of Technology (MIT) in Cambridge has now shown how these persuasive abilities of AI can be put to good use: in two experiments with a total of more than 2,000 participants, a chatbot based on GPT-4 Turbo refuted conspiracy theories individually, factually, and comprehensibly, leaving conspiracy believers less convinced of the false claims afterwards. “Belief in conspiracy theories is notoriously persistent. Influential hypotheses assume that such beliefs satisfy important psychological needs and are therefore impervious to counter-evidence,” write Costello and his team. “Our results, in contrast, suggest that many supporters of conspiracy theories can revise their views if presented with sufficiently convincing evidence.”
The test subjects stated that they believed in at least one conspiracy theory, including theories about Covid-19, the attacks on the World Trade Center on September 11, 2001, the 2020 US presidential election, and the moon landing. For the study, the researchers asked the subjects to discuss a conspiracy theory of their own choosing with the chatbot. The AI was instructed to use arguments to dissuade its counterpart from this belief. Over three rounds of interaction, the chatbot addressed the specific arguments put forward by each subject and provided tailored, fact-based counter-evidence. Because the chatbot’s replies often ran to several hundred words, the conversations lasted an average of 8.4 minutes.
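The study’s actual prompts and pipeline are not reproduced here, but the setup as described above (a system instruction to debunk a freely chosen claim, followed by three rounds of exchange) can be sketched roughly as follows. This is a minimal illustration using the OpenAI Python client; the round count of three and the model (GPT-4 Turbo) come from the description above, while the prompt wording, function names, and all other details are assumptions for demonstration only.

```python
# Minimal sketch of the described setup: a GPT-4 Turbo chatbot instructed to
# rebut a participant's chosen conspiracy theory over three rounds of dialogue.
# The system prompt wording here is an illustrative assumption, not the
# study's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def run_debunking_dialogue(initial_claim: str, get_user_reply) -> list[dict]:
    """Run three rounds in which the model answers the participant's arguments.

    `get_user_reply` is a hypothetical callback that returns the participant's
    next message (e.g. typed input) given the chatbot's latest reply.
    """
    messages = [
        {"role": "system", "content": (
            "The user believes the following claim. Persuade them, politely "
            "and with specific, sourced, factual counter-evidence, that the "
            "claim is false."
        )},
        {"role": "user", "content": initial_claim},
    ]
    for round_no in range(3):  # three rounds of interaction, as in the study
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if round_no < 2:  # the participant responds between rounds
            messages.append({"role": "user", "content": get_user_reply(reply)})
    return messages
```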
Clever argumentation
Before and after the intervention, the subjects indicated how convinced they were of the conspiracy theory in question. The result: “Many conspiracy believers were indeed willing to change their views when presented with convincing evidence to the contrary,” says Costello. “I was quite surprised at first, but when I read through the conversations, I became less skeptical. In each conversation, the artificial intelligence provided pages of very detailed explanations of why the conspiracy in question was wrong, and it was also adept at being friendly and building rapport with the participants.”
On average, the subjects’ belief in the conspiracy theory in question decreased by 20 percent. For about a quarter of the participants, the intervention went even further: afterwards, they rated the conspiracy theory as less than 50 percent likely to be true. This effect remained stable even two months later. In a control group that used the chatbot to discuss topics unrelated to the conspiracy theory, belief in the conspiracy did not decrease.
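To make the headline numbers concrete (this is not the paper’s analysis code), the reported measures amount to comparing each participant’s belief rating before and after the conversation. The sketch below assumes a 0–100 certainty scale and uses invented sample data:

```python
# Illustrative arithmetic for the reported outcome measures, assuming belief
# is rated on a 0-100 scale before and after the conversation. The sample
# data is invented for demonstration; it is not the study's data.
pre = [85, 90, 70, 95, 80]    # hypothetical pre-conversation belief ratings
post = [60, 75, 40, 85, 55]   # hypothetical post-conversation ratings

# Mean reduction in belief, in points on the rating scale
mean_drop = sum(b - a for b, a in zip(pre, post)) / len(pre)

# Share of participants who ended up below the 50-percent midpoint ("unsure")
below_midpoint = sum(a < 50 for a in post) / len(post)

print(f"average belief reduction: {mean_drop:.1f} points")
print(f"share now below 50% certainty: {below_midpoint:.0%}")
```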
Practical use questionable
However, it is unclear to what extent these results can be translated into practical interventions against conspiracy theories. “Sitting down with a chatbot and ‘listening’ to it presupposes a willingness to be talked out of one’s own opinion in the first place,” says Roland Imhoff of the Psychological Institute at Johannes Gutenberg University Mainz, who was not involved in the study. A further challenge is that, in reality, not everyone who believes in and spreads conspiracy theories is genuinely interested in the truth. “A large part of conspiracy theory content is spread by actors with very different interests, from political propaganda to deliberate destabilization, and it would be naive to assume that they will not exploit the potential of AI as well,” says Imhoff.
Source: Thomas Costello (Massachusetts Institute of Technology, Cambridge, USA) et al., Science, doi: 10.1126/science.adq1814