Multi-stakeholder event in Brussels

On September 5th, Pilot4DEV hosted a multi-stakeholder meeting at the Press Club in Brussels, bringing together leading experts, researchers, and practitioners to exchange knowledge, discuss strategies to fight fake news, and explore how AI tools can be designed to effectively tackle these challenges.

At the event, we presented the AI4Debunk project along with the initial findings of our ongoing online poll (🇬🇧English, 🇫🇷French, 🇩🇪German, 🇳🇴Norwegian, 🇬🇷Greek, 🇱🇻Latvian, 🇺🇦Ukrainian). This set the stage for a lively roundtable discussion, the key highlights of which are detailed below.

1. Design of the project’s tools

The discussion not only emphasized the societal aspects of disinformation but also examined the individual psychological factors that make someone vulnerable to manipulated information online. It was suggested that these psychological elements be considered when designing the project’s AI tools aimed at combating disinformation.

Participants expressed interest in the tools’ design, questioning the selection of the four specific interfaces – Web plug-in, Disinfopedia, App, and AR/VR – and how they align with the project’s goals. Concerns were raised about ensuring the tools are accessible to the general public, especially to those who might benefit the most from them, such as elderly or less tech-savvy individuals. Although there was some skepticism about whether these groups would readily adopt apps or AR/VR systems, the consensus was that the tools, especially the App, must be user-friendly.

A major focus of the discussion was on establishing the tools as a credible, trusted source, which participants identified as crucial for encouraging widespread use. Several stakeholders acknowledged the difficulty of building that trust, a challenge faced even by recognized fact-checkers. Vulnerable individuals, in particular, were seen as less likely to trust AI-driven fact-checking tools, while more informed users might be willing to experiment with them.

2. Disinformation vs. Misinformation

Participants also inquired about our definitions of “disinformation” and “misinformation”. This led to a broader discussion on the concept of fake news and its various shades: false information exists on a spectrum ranging from unintentional misinformation to deliberate disinformation.

The way information is framed, especially through the use of emotional language, was seen as a factor that influences how readers interpret content. Some noted that there are already AI systems that can assess the emotional tone of articles, yet caution was expressed about the risk of overgeneralizing different types of false information.
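To make this point concrete, here is a minimal sketch of our own (the specific systems mentioned at the event were not named): it scores the emotional charge of headlines with VADER, a rule-based sentiment analyzer shipped with the NLTK library.

    # Illustrative only: this scores surface sentiment, not truthfulness.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    analyzer = SentimentIntensityAnalyzer()
    headlines = [
        "Officials publish routine budget figures for 2024",
        "SHOCKING: they are HIDING the truth from you!",
    ]
    for text in headlines:
        # 'compound' runs from -1 (strongly negative) to +1 (strongly
        # positive); emotionally charged wording pushes it away from 0.
        scores = analyzer.polarity_scores(text)
        print(f"{scores['compound']:+.2f}  {text}")

Even this simple score illustrates the caution raised at the roundtable: tone is measurable, but a charged tone alone does not tell us whether a claim is false.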

Given the complexity of determining truth, fact-checkers continue to rely on human analysis—an approach also central to our Disinfopedia. There was debate over whether established media outlets should be notified when they publish incorrect information, with some advocating for greater collaboration between journalists and fact-checkers to ensure the accuracy of news. Transparency from journalists when they make mistakes was viewed as key to maintaining public trust.

3. Malicious actors and foreign interference

The discussion then shifted towards the role of malicious actors in spreading disinformation. Identifying these actors and tracing the origins of such threats are often incredibly difficult. The DisinfoLab team shared insights from their extensive research on this topic, pointing to their published findings available in this report.

Participants were curious about our own experiences in tracking malicious actors. Drawing on our case study, we highlighted how Russian disinformation tactics have proven particularly adaptive. For instance, their strategy on platforms like TikTok focuses on attracting younger, liberal audiences with engaging content, gradually introducing propaganda only after trust has been built. This approach of embedding disinformation in narratives that resonate with specific demographics poses a growing threat, particularly on platforms such as TikTok, whose disinformation controls are weaker than those of Meta or X (formerly Twitter).

4. AI and critical thinking

Another important point raised was the need to avoid relying blindly on AI when countering fake news and instead encourage curiosity and understanding of how AI systems work.

Some participants called for initiatives to “de-mystify AI” and large language models, arguing that people often engage passively with AI without comprehending its underlying processes. They suggested that fostering curiosity about machine learning from an early age could help in making these tools more accessible to the general public.

This is part of the rationale behind introducing a comic book as an educational tool in our project, an idea that sparked a discussion about whether educating people on AI’s role in fighting disinformation could lead to broader adoption of these tools beyond just professionals.

The insights gained from this session will help shape our next steps. As the AI4Debunk project progresses, we look forward to continuing these collaborative efforts with stakeholders and refining our tools.

We remain committed to fostering further dialogue and innovation in the battle against disinformation, working together to build a more informed and resilient society.
