Club ‘Journalist’ at the Union of Bulgarian Journalists, Sofia



EUalive, a pan-European media outlet and a member of the AI4Debunk consortium, organised a public discussion aimed at exploring how AI-powered tools can support efforts to counter fake news and disinformation. The event brought together journalists and citizens to discuss real-life needs, expectations, and concerns related to fact-checking, artificial intelligence, and media trust.
Participants
A total of 12 participants attended the focus group. The majority were professional journalists, predominantly over the age of 60, with generally more conservative views. Participants reported consuming information mainly through online news outlets, television, and print media, while identifying Facebook and online news platforms as the primary sources where they most often encounter disinformation.
Addressing Misinformation About the Event
The discussion opened with the debunking of false claims regarding the event itself. The Union of Bulgarian Journalists (UBJ) categorically denied allegations that it had invited “Belarusian propagandists” for EU-funded training on combating fake news. The UBJ Chair stated that the accusations originated from a misleading publication by the Belarusian state outlet Minsk News and were later echoed by Belarusian opposition media without verification.
In reality, the event was a one-hour public discussion titled “Artificial Intelligence and Fake News: What Is the Working Solution?”, organised by EUalive and hosted at the UBJ premises. The discussion proceeded as planned despite the media controversy, highlighting how quickly disinformation can spread even within professional media circles.
Presentation of the AI4Debunk Project
The EUalive team, led by Georgi Gotev, presented the AI4Debunk project, outlining its objectives, consortium partners, and progress to date. A representative from F6S provided additional insights into the technological aspects of the project and the role of partner organisations. It was emphasised that the project has a clearly defined and limited scope: supporting users in identifying and assessing disinformation, rather than solving broader systemic issues such as the generation of synthetic images or the overall regulation of AI.
Key Discussion Themes
Participants reflected extensively on the role of artificial intelligence in the information ecosystem. Several examples were discussed to illustrate how incomplete or selectively presented information can generate disinformation, including cases related to historical narratives and international news coverage.
A recurring theme was the dual nature of AI. While some participants viewed AI as a potentially powerful tool for fact-checking and verification, others expressed concerns about its misuse, particularly in generating synthetic text, images, and articles that may mislead audiences. Issues related to GDPR, source protection, and journalists’ ability to defend their work against AI-generated content were also raised.
The discussion highlighted that AI systems are only as reliable as the data they are trained on. Since online information is often inaccurate, biased, or intentionally misleading, participants agreed that AI cannot be expected to consistently produce objective truth. This limitation reinforces the importance of media literacy, human judgment, professional standards, and editorial responsibility.
Media Literacy, Trust, and Professional Responsibility
Several participants stressed that media literacy remains the most critical defence against disinformation, both now and in the future. Journalists and citizens alike must be able to challenge their own beliefs, step outside algorithmic “information bubbles,” and seek diverse sources. Concerns were raised about audiences expecting AI to deliver absolute truth, which may increase vulnerability to disinformation generated or amplified by AI systems.
A poll conducted during the discussion showed that most participants already use AI tools in their professional work. The moderator emphasised the importance of societal mechanisms for verifying information, as well as the need for education on how AI works, including prompt formulation and critical evaluation of AI outputs. The role of education systems, particularly the introduction of AI and media literacy in schools, was also discussed.
Copyright and Creative Industries
The issue of copyright emerged as a major concern. Participants noted that compensation for copyright infringement in Bulgaria is currently minimal, raising questions about how authors, journalists, artists, and musicians can protect their work in the age of generative AI. Some participants expressed strong views that AI poses a direct threat to creative industries, with one journalist arguing that AI should be banned from artistic production altogether.
Participants also completed a survey, the results of which are summarised below.
Exposure to Disinformation
Participants indicated that they most frequently encounter disinformation on Facebook and in online news media.
Verification Practices
When asked how they verify information they suspect may be false or misleading, respondents reported relying primarily on:
- Search engines
- Official sources
- Discussions with colleagues and friends
Preferred Evidence Formats
- Short written explanations that clearly outline why content is misleading
- Direct links to reliable online sources supporting the correction or clarification
Reporting Disinformation
Participants were asked how they would prefer to signal or report disinformation or fake news. The most favoured reporting mechanisms included:
- Browser extensions, allowing users to flag content directly within their browsing environment
- Mobile applications, offering accessibility and ease of use
Preferred System Features
When asked which type of system would be most helpful for recognising disinformation, respondents identified the following options:
- Mobile application
- Function integrated directly into platforms
- Public, searchable database of verified cases
- AI-assisted warnings or labels
Submission of Evidence and Feedback
Respondents expressed a preference for submitting evidence in the form of a simple link to an online source containing the relevant information. Additionally, there was strong agreement that users should receive feedback after submitting a report, with most respondents indicating that confirmation or follow-up would increase trust in and continued use of the tool.
Motivation for Using a Debunking Tool
When asked about their primary motivation for using a disinformation detection or reporting tool, respondents cited:
- Combating disinformation
- Improving the overall quality of information
- Contributing to public debate
Key Concerns and Risks
Finally, respondents identified several concerns related to tools that require users to submit evidence:
- Risk of incorrect validation of information
- Time required to use the tool
- Potential misuse by others
- Bias in automated systems
- Privacy and data protection concerns