On November 1st, a multi-stakeholder meeting was held at the House of Europe in Riga. The session gathered 12 representatives from various sectors, including government ministries, the business community, research institutions, and non-governmental organizations.
Professor Inna Šteinbuka from the University of Latvia, the project coordinator, opened the discussion by presenting the AI4Debunk project. She highlighted the importance of regular coordination and cooperation among all stakeholders engaged in addressing artificial intelligence (AI) and disinformation. She emphasized that broad dissemination of the project’s outcomes would be critical, with stakeholder networks playing an essential role in ensuring these results reach the necessary audiences.
Disinformation, AI and regulation
A central theme of the discussion was the recognition of disinformation as a “new normal.” Participants observed that, particularly in the Baltic States, the prevalence, influence, and impact of disinformation are well understood. Concerns were raised regarding increasing foreign interference in European politics, notably from Russia and China, which poses significant challenges to regional stability.
Regarding AI’s role, attendees generally agreed that AI alone cannot fully address the complexities of disinformation. A more comprehensive and systemic approach is needed, with AI tools tailored to the requirements of specific target groups. For instance, media outlets and social media platforms might deploy AI to support fact-checking and disinformation tracking. Participants emphasized the importance of identifying the distinct needs of these groups before designing AI tools.
The conversation also touched on regulatory mechanisms. While participants commended the adoption of the EU AI Act, they raised questions about its implementation and practical outcomes. There was broad support for creating a transparent directory of media entities, detailing ownership, beneficiaries, and funding sources, which would assist in tracing disinformation.
AI tools to fight disinformation: challenges and opportunities
Trust in technology emerged as a key issue. Survey data indicate declining public trust in advanced technological tools. This skepticism toward AI’s capability to detect and curb disinformation calls into question the effectiveness and acceptance of the tools to be delivered at the end of the AI4Debunk project.
Challenges in implementing AI-based tools to combat disinformation were also discussed. A significant concern was the ongoing maintenance, upgrading, and training these tools will require in the post-project period. Participants emphasized the importance of the “human-in-the-loop” approach, which addresses the need for ethical decision-making, media literacy, an adaptive education system, and lifelong learning initiatives.
The role of NGOs in countering disinformation was examined from multiple perspectives. Latvia has a robust network of NGOs focused on ICT issues, which could be instrumental in disseminating the AI4Debunk project results. In collaboration with the project team, NGOs could develop tailored information packages for different audiences. Moreover, NGOs could offer valuable insights into the specific needs of various target groups regarding AI solutions.
Overall, this multi-stakeholder meeting proved to be highly constructive. It served as a foundation for ongoing collaboration and potential future partnerships to effectively disseminate the outcomes of the AI4Debunk project.