Artificial intelligence can serve as a writing assistant, helping people compose eloquent texts. But it apparently changes not only how people express themselves but also how they think, as a study with more than 2,500 participants suggests. When subjects wrote about a given political topic using AI autocompletion, their own views shifted toward the positions the assistant had put in their mouths. Even telling them that the AI was biased did not weaken the effect.
AI language models have found their way into many areas of everyday life. They answer questions, compose texts of any kind on request and, as writing assistants integrated into email and word-processing programs, offer autocompletion suggestions. “The integration of generative AI into human-to-human communication raises questions about the impact of this technology on language use, perceptions of others, and interpersonal dynamics,” writes a team led by Sterling Williams-Ceci of Cornell University in New York. “There is also the broader question of the extent to which AI can influence the way people think.”
Subtle influence
To find out how AI influences us, Williams-Ceci and her colleagues asked 2,582 volunteers in two experiments to write texts about socially relevant topics, including the death penalty, fracking, genetically modified organisms and voting rights for felons. Some of the participants could use AI-generated autocompletion suggestions. However, the researchers had manipulated the AI so that it was biased: its suggestions consistently rejected the death penalty and voting rights for felons, for example, while supporting fracking and the cultivation of genetically modified crops. After the writing task, the research team asked the participants about their attitudes toward the topic in question.
And indeed: “When using the AI assistant, the attitudes that respondents expressed after the task shifted toward the position of the AI,” report Williams-Ceci and her team. “However, the majority of participants were unaware of the bias in the AI suggestions and of its influence on them.” Even when the researchers informed participants before or after the task that the AI was delivering biased suggestions, the influence on their views did not diminish. This is a surprising result, because warnings about attempted persuasion usually confer a degree of immunity.
Omnipresent and not always neutral
When the volunteers were instead given a static AI-generated list of arguments for a position rather than dynamic autocompletion suggestions, they were significantly less likely to adjust their opinions accordingly. “This shows that the influence cannot be fully explained by the suggested information itself, but rather by the greater accessibility of the biased information during writing,” the researchers explain. Autocompletion essentially puts the words directly into participants’ mouths, which may lead them to identify with those words more strongly.
From the research team’s perspective, these results are worrying. “Autocomplete is now ubiquitous,” says Williams-Ceci’s colleague Mor Naaman. “Three years ago it was little used and limited to short completions, but today Gmail, for example, offers to write entire emails for you.” Moreover, bias in AI writing assistants is no contrived scenario. “Numerous studies have shown that large language models and AI applications produce not only neutral but also highly biased information, depending on how they were trained and deployed,” says Williams-Ceci. “This poses a risk that these systems may inadvertently or intentionally lead people to write biased viewpoints, which in turn can change people’s attitudes, as decades of psychological research have shown.”
Source: Sterling Williams-Ceci (Cornell University, Ithaca, New York, USA) et al., Science Advances, doi: 10.1126/sciadv.adw5578