People react emotionally and socially when interacting with chatbots. This harbours dangers that a new project at the Chair of Media Psychology aims to counteract.
Würzburg/Germany, January 30, 2025 – Artificial intelligence (AI) is changing the way people communicate. Conversations that until recently were only possible with a flesh-and-blood counterpart are now also possible with an AI: ‘Chatbots like ChatGPT mimic human communication behaviour almost perfectly – so well that we get the feeling we are dealing with a human rather than a technical system,’ says media psychologist Dr Astrid Carolus from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany.
While there are many advantages, there are also disadvantages: ‘When interacting with AI systems, users react in a similar way to interpersonal encounters. Even if they are aware that they are communicating with a machine, they tend to become social and emotional,’ explains the Würzburg researcher. This can be problematic, for example when people thoughtlessly disclose sensitive data or uncritically follow the AI’s recommendations.
Offering users support
This is where a new project by Astrid Carolus comes in: she wants to analyse the psychological mechanisms of human-AI interaction and, based on the results, develop support measures that protect against the dangers of careless AI use. The DATEV Foundation Future (Nuremberg) is funding the project.
The first step is to analyse the psychological aspects of human-AI interaction: experiments in the laboratory and in real everyday situations will test the extent to which users behave towards chatbots in ways they would otherwise reserve for conversations with a human counterpart. ‘For example, we are interested in whether gender stereotypes or politeness norms also apply in encounters with chatbots,’ explains the researcher. She is also investigating which psychological characteristics and mechanisms favour such reactions.
Based on the results, the JMU scientist then wants to develop learning units with which users can reflect on their reactions to AI systems, recognise manipulative attempts by AI, and deal with them confidently. The first results of the analyses and the learning units are to be made publicly available on an online platform by mid-2026.
Image: The psychology of interaction with chatbots (source: Julius H. on Pixabay)