Dr. Vera Schmitt

Resilient against disinformation – using AI in the fight for social cohesion


Dr Vera Schmitt works on the largest German-language AI platform for disinformation detection. In this interview, she provides answers to current challenges.

Berlin/Germany, February 13, 2025. Disinformation campaigns that threaten social cohesion are increasing rapidly worldwide. But how can society respond to this challenge? Dr Vera Schmitt and her research group XplaiNLP at the Quality and Usability Lab at TU Berlin have a clear answer: artificial intelligence (AI). With innovative projects such as VeraXtract and news-polygraph, they are working on the largest German-language AI platform for recognising disinformation. Dr Schmitt has raised over 4 million euros in third-party funding to set up the group.



‘Our aim is to strengthen society’s resilience to disinformation,’ explains Dr Schmitt in an in-depth interview (https://www.tu.berlin/go279457/). Dr Schmitt, who holds a doctorate in computer science, and her research group are developing systems for intelligent decision support in order to uncover disinformation narratives, make facts transparent and promote cohesion in society in the long term.

Fighting disinformation narratives

The latest project, VeraXtract, aims to analyse and identify complex disinformation narratives. ‘We not only want to identify individual false reports, but also decode underlying narratives – from conspiracy theories to politically driven campaigns,’ says Dr Schmitt. With an AI-supported ‘Narrative Monitoring Tool’, journalists and the public should be able to better understand how such narratives work and spread in the future. The tool provides comprehensible explanations for its decisions in order to ensure the transparency and trustworthiness of the results.

The second major project, news-polygraph, is aimed specifically at journalists. Its goal is to develop a multimodal platform that analyses disinformation in text, image, audio and video content. ‘We are planning the first demo version in April – journalists will then be able to test it,’ announces Dr Schmitt. But the challenges are great: ‘Fact checks almost always come too late. Disinformation lingers in people’s minds even after it has been refuted,’ emphasises the scientist. Her vision: AI tools that recognise disinformation in real time and intervene preventively across the entire internet.

AI as a bridge between people

Dr Schmitt is convinced that AI not only harbours risks, but can also help to build bridges between people: ‘Algorithms could be redesigned so that they not only reinforce our previous likes and preferences, but also deliberately present content that reflects a greater diversity of opinions, cultures and points of view. This could help to break through familiar filter bubbles and show us a more balanced picture of the world.’



Find out more about how AI can support social cohesion and how young people can be protected from disinformation in the interview (https://www.tu.berlin/go279457/).

About the XplaiNLP research group:

Dr Vera Schmitt heads the XplaiNLP research group at TU Berlin. With over four million euros in third-party funding and 24 team members, the XplaiNLP research group is one of the leading groups in the field of disinformation detection and explainable AI in Germany.

Image source: Dr. Vera Schmitt
