ChatGPT influences moral judgments of users


How much do ChatGPT and Co. influence us? © Userba011d64_201/iStock

Should one person be sacrificed to save five? The artificial intelligence ChatGPT has no firm opinion on this well-known moral dilemma: when researchers put the question to it, the bot sometimes argued for one side and sometimes for the other. Nevertheless, participants in an experiment were swayed by the corresponding advice, even when they were told that the assessment came from a bot. Although users in most cases adopted the position taken by ChatGPT, 80 percent said they believed the bot had not influenced them.

The AI-based chatbot ChatGPT, released in November 2022, is capable of impressive feats: it can communicate like a human, solve complicated exam questions, write poetry and produce computer code. However, multiple tests have shown that the bot sometimes freely invents answers and sources, gives questionable advice, and even spreads misinformation. But how does ChatGPT behave when it comes to ethical decisions? Does the artificial intelligence have a clear position? And to what extent are users influenced by the bot's moral advice?

Moral questions for the AI

A team led by Sebastian Krügel of Ingolstadt University of Applied Sciences has investigated these questions. "In order to elicit moral advice from ChatGPT, we first asked it whether it is right to sacrifice one person to save five," the researchers report. All experiments took place in December 2022, around two weeks after the release of ChatGPT, and the bot's responses were inconsistent across multiple attempts. Sometimes ChatGPT argued that every life is precious and that it is therefore never acceptable to sacrifice one person for the sake of five others. In other cases, the chatbot wrote that difficult decisions sometimes have to be made and that it is important to save as many lives as possible, even if that means sacrificing one.

"The conflicting responses show that ChatGPT does not have a firm moral stance," the authors conclude. "However, this lack does not prevent it from providing moral advice." But would users be influenced by this moral advice? To find out, the team presented 767 American subjects with one of two moral dilemmas: In both cases, a train hurtles toward a group of five people. In the first case, the test subject would have the opportunity to throw a switch so that the train moves to another track on which only one person is standing, who would die as a result. In the second case, the train could be stopped by pushing a very fat person off a bridge in front of the train, saving the five other people. Additionally, the researchers presented the subjects with arguments from ChatGPT that were either for or against sacrificing the individual.

Unnoticed influence

When the subjects then made their decision, they often followed the position taken by ChatGPT. It made no difference whether the researchers had told them that the moral assessment came from a bot or had claimed that it came from a human moral advisor. But were the subjects aware of how strongly they had let themselves be influenced? When the researchers asked, 80 percent of the participants replied that they had reached their decision independently of the arguments they had read. "This result shows that users underestimate the impact of ChatGPT on their moral judgments," the researchers write. "In many cases, they adopted ChatGPT's random perspective as their own."

From the point of view of Krügel and his colleagues, these results raise important questions about how to deal with the ethical limitations of AI systems in the future. "One possibility would be to program chatbots so that they refuse to answer moral questions, or so that they always provide arguments for both sides and point out caveats," the researchers write. One problem: while chatbots can easily be trained to recognize well-known dilemmas, they are likely to fail on more subtle moral questions.

It is therefore important to educate users so that they better understand the possibilities and limits of artificial intelligence. "The basis is that users must always be informed that they are interacting with a bot," the researchers write. "But this transparency is not enough. Our results show that knowing the advice came from a bot did not lessen its influence."

Source: Sebastian Krügel (Ingolstadt University of Applied Sciences) et al., Scientific Reports, doi: 10.1038/s41598-023-31341-0
