About AI4Debunk

Methodology and outcomes

Our Technologies

Given the multimodal nature of today's news and the importance of context in understanding meaning, AI4Debunk leverages cutting-edge monomodal and multimodal machine learning technologies, as well as knowledge graphs, to extract relevant information from data and detect disinformation patterns and trends. We also propose a human-in-the-loop (HITL) mechanism to enhance data quality and foster greater trust in the process.

Language Models and Large Language Models

Vision Models and Large Vision Models

Speech Processing

Multimodal Machine Learning

Human-in-the-loop Mechanisms

Knowledge Graphs

Our Methodology

AI4Debunk’s methodology taps into human-AI collaboration to effectively fight disinformation – through content analysis, fact-checking, and network analysis – and catalyse trust between users and online information sources. This holistic approach includes:

Multimodal knowledge graph: integrating diverse data types, such as text, images, videos and audio, to provide a comprehensive view of a topic and reveal inconsistencies across sources.

Monomodal knowledge graph: serving as a backup solution that transforms multimodal data into text (using object detection, image description, action detection in video or audio, etc.).

Monomodal fake news detection modules: distinguishing between authentic and manipulated content (artificially modified or synthetically created) for text, image, video and audio data.

Multimodal fake news detection modules: integrating results from monomodal fake news detectors and contextual information from the knowledge graph to classify news as real or fake based on data patterns in text, images, videos and audio.
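As an illustration only (the project's actual models are not published here), the late-fusion idea behind the multimodal detection modules can be sketched as follows; the `MonomodalScore` type, the weights, and the `fuse_scores` function are all assumptions, not AI4Debunk's implementation:

```python
from dataclasses import dataclass

@dataclass
class MonomodalScore:
    """Probability that a single modality is manipulated (0 = authentic, 1 = fake)."""
    modality: str   # "text", "image", "video" or "audio"
    score: float
    weight: float   # relative trust in this detector

def fuse_scores(scores, kg_consistency):
    """Late-fusion sketch: a weighted average of monomodal scores,
    adjusted by how consistent the item is with the knowledge graph
    (kg_consistency in [0, 1]; 1 = fully corroborated by known facts)."""
    total_weight = sum(s.weight for s in scores)
    weighted = sum(s.score * s.weight for s in scores) / total_weight
    # Contradictions with the knowledge graph push the score towards "fake".
    return 0.7 * weighted + 0.3 * (1.0 - kg_consistency)

scores = [
    MonomodalScore("text", 0.80, weight=1.0),
    MonomodalScore("image", 0.40, weight=0.8),
]
print(round(fuse_scores(scores, kg_consistency=0.2), 3))  # → 0.676
```

The 0.7/0.3 mixing factors are arbitrary placeholders; the point is only that the final classification combines per-modality detector outputs with contextual evidence from the graph.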

AI4Debunk is dedicated to enhancing AI-based models so that they deliver explanations and confidence scores alongside their fake news detections. These will highlight:

  • which aspects of a news piece are most relevant to detecting fake news;

  • which modalities (text, image, audio or video) have a stronger impact on the final decision.
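One way to estimate which modality drives the final decision is a leave-one-out ablation, sketched below under stated assumptions: the mean-based `fuse` function and the `modality_impact` helper are toy constructions for illustration, not the project's published method:

```python
def fuse(scores):
    """Toy fusion: mean of the per-modality manipulation probabilities."""
    return sum(scores.values()) / len(scores)

def modality_impact(scores):
    """Leave-one-out ablation: the change in the fused score when each
    modality is removed. A larger positive delta means that modality
    pushed the overall decision further towards 'fake'."""
    base = fuse(scores)
    impact = {}
    for modality in scores:
        rest = {m: s for m, s in scores.items() if m != modality}
        impact[modality] = base - fuse(rest)
    return impact

scores = {"text": 0.9, "image": 0.3, "audio": 0.3}
impacts = modality_impact(scores)
# Text dominates: removing it changes the fused score the most.
strongest = max(impacts, key=lambda m: abs(impacts[m]))
print(strongest)  # → text
```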

AI4Debunk will conduct two case studies to validate its holistic approach to fighting disinformation:

  1. Russian propaganda related to the war in Ukraine
  2. Manipulated content on climate change

These case studies will encompass the identification of target groups, disinformation sources, and propagation methods, and will lead to the formulation of technical requirements and functional needs for AI4Debunk’s interfaces.

The primary goal is to understand disinformation processes, frame and model them, and subsequently develop tools for analysis and counteraction.

Both case studies will be mapped in knowledge graphs, which will offer contextual insights for further fake news detection. The graphs will be updated regularly through curated citizen feedback and the integration of new curated data.
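To illustrate how a knowledge graph can supply contextual insight for detection, here is a minimal sketch using subject-relation-object triples; the entities, relations and the `check_claim` helper are invented for the example:

```python
# Minimal knowledge-graph sketch: facts stored as (subject, relation, object)
# triples. A claim is "corroborated" if it matches a stored triple, and
# "contradicted" if the graph holds a conflicting object for the same
# subject/relation pair. All identifiers below are illustrative.
kg = {
    ("photo_123", "taken_in", "city_B"),
    ("photo_123", "taken_on", "2015-06-01"),
}

def check_claim(kg, subject, relation, obj):
    if (subject, relation, obj) in kg:
        return "corroborated"
    if any(s == subject and r == relation for s, r, _ in kg):
        return "contradicted"
    return "unknown"

# A post claims photo_123 was taken in city_A; the graph says otherwise.
print(check_claim(kg, "photo_123", "taken_in", "city_A"))  # → contradicted
```

In practice such graphs would be multimodal and far larger, but the same lookup logic is what lets the detectors surface inconsistencies across sources.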

As part of its methodology, AI4Debunk will perform an overall sociological assessment of the tools developed and of their impact on the two case studies:

  • Reports and in-depth analysis of resilience mechanisms;
  • Engagement with stakeholders through surveys, local group meetings, and focus group discussions;
  • Gender equality and engagement processes to operationalise policies, programs, and interventions in disinformation mechanisms.

AI4Debunk is committed to promoting media literacy and will create didactic resources to support classroom training in fake news detection, equipping younger generations with the skills to critically assess and filter information without being deceived. These resources include:

  • Comic books
    Developed based on the project research into fake news, the comic books serve as an engaging tool to raise awareness and provide guidelines on how to detect disinformation online. They will be accompanied by questionnaires to assess learning outcomes.
  • Games
    Designed and tested in collaboration with teachers, the games offer students an interactive experience, illustrating the consequences of fake news and delivering practical insights.

Both materials will initially be distributed to secondary schools in the Netherlands and Belgium, with plans for broader availability in secondary schools across Europe.


Our Interfaces

Building on the methodology outlined above, AI4Debunk will develop a debunking API and a suite of interfaces to assist users in assessing the accuracy of online content.

Debunking API

An open-source API that integrates AI and ML algorithms to calculate a disinformation probability score (the disinfoscore) and flag suspicious content.

Upon receiving content as input, the API will promptly return its disinfoscore. For manipulated content, it will also identify the regions of the original signal that may have been tampered with. Additionally, it will provide factual and contextual information that corroborates the content's score.
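As a sketch of what such a response might look like to a caller, consider the JSON below; the real API schema is not specified here, so every field name is an assumption:

```python
import json

# Illustrative response shape only: "disinfoscore", "verdict",
# "tampered_regions" and "supporting_context" are hypothetical field names.
response = json.loads("""{
    "disinfoscore": 0.87,
    "verdict": "likely manipulated",
    "tampered_regions": [
        {"modality": "image", "bbox": [120, 40, 310, 220]}
    ],
    "supporting_context": [
        "Original photo published in 2015 shows a different location."
    ]
}""")

# A caller would check the score and, for manipulated media, inspect
# the flagged regions and the supporting evidence.
assert 0.0 <= response["disinfoscore"] <= 1.0
print(response["verdict"])  # → likely manipulated
```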

Web Plug-in

Designed for web browsers and social media platforms. It will provide real-time notifications to users, offering insights into the reliability and credibility of the content they are currently viewing, regardless of its format.

Collaborative Platform

A collaborative platform built in a Wikipedia-style format. Users will be able to report suspicious content directly; reported items will then be checked by human experts (senior analysts working for the project's media partners) and potentially removed from circulation.

Mobile App

Developed for smartphones and tablets, the app will verify the accuracy of online content, including news articles, social media posts, product reviews and images. Users will be able to input information (URLs, text or images), and the app will assess its risk of being false.

Augmented Reality (AR) Interface

Integrated into the mobile app and compatible with AR glasses/headsets, smartphones, tablets and wearable technology, this interface will deliver a more immersive and engaging experience, detecting whether data inputs from both the virtual and the physical world (e.g. written text, spoken language, images/videos and real-time information) are true or fake.

Let’s Recap

This is how all the pieces come together…