AI-generated media: deceptively real

Texts, images and videos generated by artificial intelligence are now almost ubiquitous. (© CISPA)

Artificial intelligence can generate photorealistic images, write news texts and imitate human speech – so convincingly that artificially generated media often can no longer be distinguished from human work. Based on online surveys in Germany, China and the USA, a study shows that people can usually only guess which texts, images and audio recordings are real and which are artificially generated. Even a high level of media literacy is of little help.

At the end of November 2022, the chatbot ChatGPT was released, making generative artificial intelligence available to the general public and sparking many discussions about how to deal with AI. How does artificially generated content affect our understanding of truth and authenticity? What does this mean for our society? And most importantly: how can we distinguish real, human-made content from AI-generated content?

Human or AI?

A team led by Joel Frank from the Ruhr University Bochum addressed this question as early as the summer of 2022, even before the release of ChatGPT. In a large online survey conducted between June and September 2022, the researchers asked around 3,000 people from Germany, China and the USA to classify texts, images and voice recordings as created either by a human or by an artificial intelligence. The researchers have now published the results on the preprint server arXiv and also presented them at a conference in San Francisco.

In the experiment, half of the content presented was human-made and the other half was AI-generated. In Germany, news texts from the Tagesschau served as the human-written text examples. The team created the AI examples using GPT-3 from OpenAI, the predecessor of ChatGPT. The images used were real photos of people and photorealistic portraits generated with Nvidia’s StyleGAN image generator. For the voice recordings, the researchers used excerpts from literature that were read aloud either by a human or by a text-to-speech generator.
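As an illustration of the text-generation step only: the sketch below shows how GPT-3 could have been queried through OpenAI’s completion API as it existed in 2022. The model name and the prompt are assumptions for illustration; the study does not document these exact calls.

```python
# Illustrative sketch using the pre-1.0 openai Python client (2022 era).
# Model choice and prompt are assumptions, not taken from the study.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 completion model available in 2022
    # Hypothetical prompt asking for a short German news item in the
    # style of the Tagesschau, matching the study's text domain:
    prompt="Schreibe eine kurze Nachrichtenmeldung im Stil der Tagesschau.",
    max_tokens=300,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```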

Test subjects can only guess

The result: “Across all media types and countries, we found that artificially generated examples are almost indistinguishable from ‘real’ media,” the team reports. “The participants mostly rated artificially generated media as being created by humans. When it came to images, they even performed worse than when guessing randomly.” The German test subjects thought almost 79 percent of the AI-generated images were real photos, but only classified just under 71 percent of the photos actually taken by humans as real. A similar trend was also evident in the USA. Chinese test subjects correctly classified AI-generated texts, photos and audio more often, but also thought almost half of the real examples were AI-generated. This suggests that they also mostly guessed, but were more suspicious when doing so.
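To make the below-chance result concrete, here is a short worked calculation using the two percentages reported above and the balanced half-real, half-generated design of the experiment:

```python
# Figures reported above for German participants judging images
# (rounded; the study gives "almost 79" and "just under 71" percent).
acc_ai = 1 - 0.79   # share of AI-generated images correctly identified: 0.21
acc_real = 0.71     # share of real photos correctly identified as real

# The stimulus set was balanced (half real, half AI-generated),
# so overall accuracy is the plain average of the two class accuracies.
overall = (acc_ai + acc_real) / 2
print(f"Overall image accuracy: {overall:.0%}")  # 46% -- below the 50% chance level
```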

Frank and his team also collected socio-biographical data, knowledge of AI-generated media, and factors such as media literacy, holistic thinking, general trust, cognitive reflection, and political orientation as possible influencing factors. But even though younger people and those who were more familiar with AI-generated media performed slightly better, their results were largely guesswork. “Even across different age groups and for factors such as educational background, political attitudes, or media literacy, the differences are not very significant,” reports co-author Thorsten Holz from the CISPA Helmholtz Center for Information Security in Saarbrücken.
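A minimal sketch of how such a factor analysis can be set up, assuming per-answer survey data in a table. All column and file names here are invented for illustration, and the study’s actual statistical model may differ:

```python
# Hypothetical sketch: logistic regression of per-answer correctness
# on survey factors. Column and file names are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

features = ["age", "media_literacy", "ai_familiarity",
            "cognitive_reflection", "general_trust"]
X = StandardScaler().fit_transform(df[features])
y = df["answered_correctly"]              # 1 = correct human/AI judgment

model = LogisticRegression().fit(X, y)

# Standardized coefficients indicate how strongly each factor shifts
# the odds of a correct judgment.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20}: {coef:+.3f}")
```

Near-zero coefficients across the board would mirror the reported finding that none of these factors moves performance far from guessing.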

A challenge for policymakers

From the researchers’ point of view, the results are cause for concern – especially since the technology has advanced further since the survey was conducted, and generated content now generally looks even more realistic. “Artificially generated content can be misused in many ways,” says Holz. “We have important elections this year, such as the elections to the EU Parliament or the presidential election in the USA: AI-generated media can very easily be used to influence political opinion. I see this as a great danger to our democracy.”

According to the researchers, automated detection of AI-generated content is not a solution either. “It’s a race against time,” says Holz’s colleague Lea Schönherr. “Media created with newly developed AI generation methods are becoming ever harder to recognize with automated approaches.” The research team therefore sees the responsibility as lying with policymakers: “Only careful legislation that takes human perception and human values into account can mitigate the harmful effects of artificially generated media without hindering their positive effects,” says the team.

Source: Joel Frank (Ruhr University Bochum) et al., arXiv, doi: 10.48550/arXiv.2312.05976
