AI-based Methods to Support Opinion-Forming and Participation
Motivation
Forming political opinions is a cornerstone of informed participation in a democracy: it is the basis on which citizens actively take part in political decision-making at all levels of government. The way people form political opinions has changed significantly in recent years. With the ubiquity of social media, the information landscape has become confusing: people obtain their news and information from a wide variety of sources, often from individual channels such as WhatsApp groups, influencers on YouTube and TikTok, or algorithmically selected posts on platforms such as X, Instagram or Facebook (the ‘newsfeed’), rather than, or in addition to, traditional journalistic news sources.
Challenges of Political Opinion Formation
Due to these new dissemination channels, information is no longer broadcast only by those who originally provided it. Instead, news and opinions are shared in fractions of a second and spread very quickly in more or less private settings. Facts, opinions and other information are often mixed together, taken out of context and presented in abbreviated form, frequently without the necessary context. This makes it increasingly difficult for individuals to stay on top of the news, categorise the information they encounter, or verify for themselves whether what is being shared is actually true.
Many platforms also analyse user behaviour and display content that matches a user's algorithmically determined interests and opinions. The resulting ‘filter bubbles’ and ‘echo chambers’ mean that people often only see information that confirms their existing views, which they in turn share with like-minded people. Critical debate is missing. Personal convictions become entrenched when people rarely encounter opinions that challenge them to question their own views. As a result, they may come to see their views as universally valid, unknowingly spread false or one-sided information, and ultimately contribute to the further polarisation of opinion.
Risks for Vulnerable Groups
These challenges pose a significant risk for vulnerable groups in particular, i.e. people who are especially susceptible to manipulation campaigns. These include young first-time voters (high and often near-exclusive exposure to social media, inexperience), senior citizens (technological barriers to access, information overload) and people with cognitive impairments (difficulties in processing complex contexts, information overload).
The KonCheck Tool
In KonCheck, we are developing an innovative application (e.g. as a smartphone app and/or browser extension) that enables people to analyse information directly, on the fly and with little effort. For example, a user of the KonCheck tool can mark a tweet on X or a section of text on a website and, with the help of artificial intelligence, have further sources and pertinent questions about the content displayed. The aim is to deliberately present the entire spectrum of opinions, including contradictory statements or news items, and thereby break up filter bubbles and echo chambers.
For example, a ‘check’ of the controversial statement ‘The Americans blew up the pipeline’ (recently made live on television by a German politician) would link to articles that attempt both to support and to refute this thesis. In addition, questions would be generated that help to delve deeper into the topic, such as ‘What is known about the perpetrators?’ or ‘Who profited from the sabotage of Nord Stream?’.
It will also be possible to directly assess the truthfulness and trustworthiness of information: firstly, by using artificial intelligence methods to check whether the content was potentially generated automatically (i.e. whether it is AI-generated); secondly, by querying various fact databases and immediately displaying the result in the form of a trend.
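The second step, combining verdicts from several fact databases into a single displayed trend, could be sketched as follows. This is a minimal illustration under assumptions of our own: the verdict labels, their numeric scores and the thresholds are hypothetical placeholders, not the actual KonCheck sources or scoring scheme.

```python
from typing import Iterable

# Hypothetical verdict labels that different fact databases might return,
# mapped to scores in [-1, 1]; the real sources and schemas may differ.
SCORES = {
    "true": 1.0,
    "mostly-true": 0.5,
    "unclear": 0.0,
    "mostly-false": -0.5,
    "false": -1.0,
}

def aggregate_trend(verdicts: Iterable[str]) -> tuple[float, str]:
    """Average the verdicts of several fact databases into one trend value
    and a coarse label suitable for immediate display to the user."""
    scores = [SCORES[v] for v in verdicts if v in SCORES]
    if not scores:
        return 0.0, "no data"
    trend = sum(scores) / len(scores)
    if trend > 0.25:
        label = "leaning true"
    elif trend < -0.25:
        label = "leaning false"
    else:
        label = "contested"
    return trend, label

# Example: three databases disagree on a claim
print(aggregate_trend(["false", "mostly-false", "unclear"]))  # (-0.5, 'leaning false')
```

A simple average keeps the display honest when sources conflict: a mix of ‘true’ and ‘false’ verdicts surfaces as ‘contested’ rather than silently siding with one database.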
In practice, the KonCheck tool should enable people to develop greater tolerance of and awareness for dissenting opinions by engaging with them more frequently, thereby making their own political views more nuanced overall. The tool will also make it easier to recognise manipulated content and fake news, and thus reduce their influence on opinion formation.
Duration
08/2024 - 07/2026
Consortium
- Prof. Dr. Jens Gerken
- Kirill Kronhardt
- Prof. Dr. Matthias Hastall
- Moritz Hölzer
- Link: https://chip.reha.tu-dortmund.de/
- Prof. Dr. Jürgen Howaldt
- Prof. Dr. Johannes Weyer
- Link: https://sfs.sowi.tu-dortmund.de/
Westphalian University of Applied Sciences: https://www.en.w-hs.de/
Institut für Internet-Sicherheit - if(is): https://internet-sicherheit.de/
- Prof. Dr. Matteo Große-Kampman
- Link: https://www.hochschule-rhein-waal.de/en
The KonCheck project is supported by the Daimler and Benz Foundation with a funding volume of EUR 1.5 million.