How politically neutral are ChatGPT and Co?

What happens when AI models complete the Wahl-O-Mat? © Kenneth Cheung/iStock

Artificial intelligence has long been part of everyday life: we use it to inform ourselves and to summarize long or complex content. But how neutral are AI models when it comes to political positions? Researchers have now investigated this using the Wahl-O-Mat from the last German federal election. They asked the AI models ChatGPT, Grok and DeepSeek to evaluate the political theses presented in this tool. It turned out that none of the three chatbots was completely neutral: all showed a slight tendency towards positions from the center-left spectrum. According to the team, this underlines that chatbot responses to political questions should always be critically examined, and that more transparency is needed.

Grading essays, moderating social media content, summarizing news, screening job applications: artificial intelligence is no longer used only to generate content, but also to classify it. This applies to political content as well. It is all the more important that large language models such as ChatGPT, Claude and Co. are not biased or partisan. If these systems are not neutral in their assessment of political positions, they can influence both public debate and voters' decision-making. Previous studies have already shown, for example, that AI models answer questions about political conflicts and wars differently depending on the language used. The assumed or actual source of a text being classified also plays a role: if the chatbots believed that a text about Taiwan, for example, came from a Chinese source, they rated it as less credible – and this included the Chinese AI model DeepSeek.

Positions of the center-left spectrum are preferred

Buket Kurtulus and Anna Kruspe from the Munich University of Applied Sciences have now examined what political leanings and possible biases ChatGPT and Co. show with respect to German politics. Their starting point was the Wahl-O-Mat for the 2025 federal election, which contained 38 theses on topics such as climate, migration and the economy. Users rate each thesis as "agree", "neutral" or "disagree", and the system then determines which party's positions most closely match that profile. In their study, Kurtulus and Kruspe had the AI models ChatGPT, Grok and DeepSeek take on the role of the user: the chatbots were asked to independently evaluate all 38 theses, as if they were filling out the Wahl-O-Mat themselves. Each thesis was posed 100 times, in both German and English, to rule out random fluctuations and to control for the potential influence of the input language on the answers.
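The repeated-prompting protocol described above (each thesis posed many times, answers tallied into a distribution) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: `ask_model` is a hypothetical stand-in for a real chatbot API call, and the thesis texts are invented placeholders.

```python
import random
from collections import Counter

# Illustrative placeholder theses (the real Wahl-O-Mat had 38).
THESES = [
    "A general speed limit should apply on German motorways.",
    "The minimum wage should be raised.",
]
OPTIONS = ["agree", "neutral", "disagree"]
RUNS = 100  # each thesis was asked 100 times per language in the study


def ask_model(thesis, language="de"):
    """Hypothetical stand-in for querying a chatbot.

    A real implementation would send the thesis to an API and parse
    the answer; here we simulate a (possibly non-deterministic) model.
    """
    return random.choice(OPTIONS)


def response_distribution(thesis, runs=RUNS):
    """Tally answers over repeated runs to smooth out random fluctuation."""
    counts = Counter(ask_model(thesis) for _ in range(runs))
    return {opt: counts[opt] / runs for opt in OPTIONS}


for thesis in THESES:
    print(thesis, response_distribution(thesis))
```

Aggregating over many runs is what lets the study report per-model consistency (DeepSeek's answers barely varied between runs, while Grok's varied the most).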

Wahl-O-Mat results from ChatGPT, Grok and DeepSeek: their political positions lie more in the center-left spectrum. © Kruspe/Kurtulus

The result: when filling out the Wahl-O-Mat, each of the three AI models took a political position of its own, with a distinctive answer pattern. "Grok is the only model with a notable number of non-responses and also shows the highest variation between its answers," the team reports. "DeepSeek, on the other hand, has the most consistent behavior, with hardly any deviations between runs." On one point, however, all three models agreed: they tended to side with positions from the center-left spectrum, especially those of Alliance 90/The Greens and the SPD. "But this tendency is only slightly pronounced – an orientation rather than hard partisanship," Kurtulus and Kruspe emphasize. It was also noticeable that the models chose the answer option "neutral" more often than the party programs do – an indication of a certain caution or hedging logic in the systems. All three chatbots showed the lowest agreement with the AfD.
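The agreement figures behind such results come from a Wahl-O-Mat-style matching step: a model's answer vector is compared thesis by thesis with each party's positions. A common weighting – assumed here, not quoted from the paper – gives 2 points for an exact match, 1 point when one side answers "neutral", and 0 for opposite answers. A minimal sketch with invented answer vectors:

```python
def match_score(user, party):
    """Per-thesis score: 2 = exact match, 1 = one side neutral, 0 = opposed.

    This weighting is an assumption modeled on the public Wahl-O-Mat,
    not necessarily the scheme used in the study.
    """
    if user == party:
        return 2
    if "neutral" in (user, party):
        return 1
    return 0


def party_agreement(user_answers, party_answers):
    """Percentage agreement between a model's answers and a party's positions."""
    total = sum(match_score(u, p) for u, p in zip(user_answers, party_answers))
    return 100 * total / (2 * len(user_answers))


# Illustrative, made-up answer vectors over five theses:
model = ["agree", "neutral", "disagree", "agree", "agree"]
party = ["agree", "agree", "disagree", "neutral", "disagree"]
print(party_agreement(model, party))  # → 60.0
```

Computing this percentage against every party's official positions yields a ranking like the one in the figure above, where the center-left parties come out on top for all three models.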

Questions of transparency, regulation and media literacy

Overall, the researchers were surprised by their findings: "It is noteworthy that the models all tended in the same direction – there were no strongly divergent political tendencies," say Kurtulus and Kruspe. In their view, these results are particularly relevant given the increasing use of AI tools for political information. Even if the observed bias is more of a tendency than partisanship, the results show that ChatGPT and Co. are not politically neutral. With upcoming elections in mind, such as the local elections in Bavaria on March 8, 2026, this raises questions of transparency, regulation and media literacy in the use of AI. "We run the risk that the AI only reflects certain perspectives. In the long term, there is also a risk of political influence," says Kruspe.

More information is needed about how the AI models work, and people need to be made more aware that AI results should be critically examined. According to the researchers, technical alternatives are also necessary: "We need approaches for independent, European AI models that are built on transparency and a deliberate diversity of data," Kruspe emphasizes.

Source: Buket Kurtulus and Anna Kruspe (Munich University of Applied Sciences), European Conference on Artificial Intelligence (ECAI)
