About AI4Debunk

Project overview


Recognising the persistent and evolving nature of disinformation, AI4Debunk focuses on the symbiotic relationship between humans and advanced AI tools. Our approach bridges the sociological aspects of disinformation with concrete AI-based solutions to deter it.

Through AI4Debunk, users will gain access to resources, knowledge, and skills, empowering them to detect disinformation in the ever-changing digital landscape. Our priority is to develop user-friendly and inclusive tools to reach individuals of all ages, genders, interests, and online environments.

AI4Debunk responds to the European Commission's call for a coordinated response to disinformation, and is fully in line with the principles of the European Democracy Action Plan (EDAP), namely:

Promoting free and fair elections

Strengthening media freedom

Countering disinformation

Our Goals

The overarching objective of AI4Debunk is to develop human-centred, multimodal, and collaborative AI tools to fight disinformation and protect democratic values. More specifically, the project is structured around nine objectives:

AI4Debunk will deliver a comprehensive description and analysis of the entire disinformation process, counteracting the prevalent tendency to view it in isolated parts rather than as a whole.

AI4Debunk will consolidate all data in one place, irrespective of its modality (e.g. audio, video, or image), enabling simultaneous analysis, facilitating comparisons with existing datasets, and ultimately streamlining disinformation debunking.

AI4Debunk will establish a connection between AI experts, disinformation analysts and citizens, promoting collaborative efforts in the fight against disinformation.

AI4Debunk will perform thorough quality checks on the different modalities mapped in the knowledge graphs, employing advanced AI and ML techniques to develop modules for this purpose.

AI4Debunk will provide a standard toolbox for the battle against disinformation, enabling any software developer to craft their own solutions.

AI4Debunk will develop a range of innovative AI-powered interfaces tailored to different uses and users, recognising that there is no single optimal solution for debunking disinformation.

AI4Debunk’s interfaces will prioritise the human factor and, therefore, will be user-centric and ergonomic, allowing citizens to check content at their convenience.

AI4Debunk will introduce the disinfoscore, a fakeness score assigned to each piece of content and accompanied by detailed explanations. It will be available in all interfaces.

AI4Debunk will support young people by creating educational materials aimed at teaching them how to critically assess information online and spot misleading or fake content.

Our Pillars


We have a multidisciplinary team of experts in sociology, media and AI, ensuring a robust and comprehensive approach to our research.

We are committed to leading the charge against disinformation in Europe, driving progress through the deployment of new, sophisticated detection tools.

We are open to collaboration with like-minded organisations and welcome the involvement of ecosystem stakeholders to collectively shape a trustworthy digital environment.

Our Work Plan

The AI4Debunk project is an innovation action funded by the Horizon Europe Programme under Grant Agreement No. 101135757. It is set to run for 48 months and comprises 18 work packages (WP).

WP1, WP2, WP3

Project management and coordination

Monitor the progress of project implementation against the schedule, resources, and EC guidelines and requirements. IPR and data management are included.


Information manipulation and threats of disinformation, fake news and propaganda

Provide a precise diagnosis of how fake news, disinformation, information manipulation and foreign interference interact in EU policy-making.


Identification of target groups, fake news and technological requirements

Identify adequate and effective technical means to prevent, deter and respond to disinformation that could be supported by AI-powered tools.

WP6, WP7

Design, creation and adaptation of knowledge graphs

Construct knowledge graphs that illustrate the structure of deceptive data and fake content. The resulting knowledge graphs will be continuously updated based on feedback from users and from the AI/ML modules.
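As an illustration of what such a graph might look like, here is a minimal sketch in Python. The node types, relation names, and example content are assumptions made for illustration, not the project's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal multimodal knowledge graph: nodes are content
# items of any modality; edges are typed relations between them.

@dataclass
class Node:
    node_id: str
    modality: str   # e.g. "text", "image", "video", "audio"
    label: str

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (source, relation, target)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, source_id, relation, target_id):
        self.edges.append((source_id, relation, target_id))

    def neighbours(self, node_id, relation=None):
        # Follow outgoing edges, optionally filtered by relation type.
        return [t for s, r, t in self.edges
                if s == node_id and (relation is None or r == relation)]

kg = KnowledgeGraph()
kg.add_node(Node("c1", "text", "Viral claim about event X"))
kg.add_node(Node("v1", "video", "Clip repeating the claim"))
kg.add_node(Node("d1", "text", "Fact-check debunking the claim"))
kg.add_edge("v1", "repeats_claim", "c1")
kg.add_edge("c1", "debunked_by", "d1")

print(kg.neighbours("c1", "debunked_by"))  # ['d1']
```

Linking a video to the text claim it repeats, and that claim to its fact-check, is what lets one debunk propagate across modalities.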

WP8, WP9

Development and implementation of AI/ML methods for disinformation detection

Build monomodal and multimodal Machine Learning models for fake news detection. A disinformation probability score (disinfoscore) will be established.
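The project does not specify how the disinfoscore will be computed. As a hedged sketch, one simple fusion rule would be a weighted average of per-modality model probabilities; the modality names, weights, and numbers below are invented for illustration only.

```python
# Illustrative only: fuse per-modality model outputs (each a
# probability in [0, 1]) into a single disinformation score.

def disinfoscore(modality_scores, weights=None):
    """modality_scores maps a modality name to the probability, from
    that modality's model, that the content is disinformation."""
    if weights is None:
        weights = {m: 1.0 for m in modality_scores}   # equal weighting
    total = sum(weights[m] for m in modality_scores)
    return sum(p * weights[m] for m, p in modality_scores.items()) / total

# A post whose text strongly matches known fakes, with weaker
# image and audio signals:
score = disinfoscore({"text": 0.9, "image": 0.6, "audio": 0.3})
print(round(score, 2))  # 0.6
```

A real multimodal model would learn how to combine the signals rather than average them, but the interface is the same: several modality-level judgements in, one explainable score out.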

WP10, WP11

Definition, development and integration of different interfaces available for online citizens

Develop the debunking API and integrate it into 4 tools: web plug-in, collaborative platform, smartphone app, and AR/VR interface. This process involves user requirements analysis and testing.
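The overview does not define the API's contract, but the four client tools would presumably exchange payloads along these lines. Every field name, endpoint, and value below is purely hypothetical, sketched only to show how one API can back several interfaces.

```python
import json

# Hypothetical payload shapes for the debunking API; the real field
# names and endpoints are not specified in this overview.

def build_check_request(content_url, modality):
    """JSON body a client (web plug-in, platform, smartphone app, or
    AR/VR interface) might send when asking for content to be checked."""
    return json.dumps({"content_url": content_url, "modality": modality})

def parse_check_response(raw):
    """Pull the disinfoscore and its explanation out of a response."""
    data = json.loads(raw)
    return data["disinfoscore"], data["explanation"]

payload = build_check_request("https://example.org/post/123", "text")
raw = '{"disinfoscore": 0.82, "explanation": "Matches a debunked claim."}'
score, why = parse_check_response(raw)
print(score, why)  # 0.82 Matches a debunked claim.
```

Keeping the request/response contract this small is what makes it practical for third-party developers to build their own front ends on top of the same API.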

WP12, WP13, WP14

Sociological assessment including gender analysis and multi-stakeholders’ recommendations

Ensure the tools are inclusive and citizen-friendly through sociological assessment, resilience mechanisms evaluation, gender equality analysis and multi-stakeholder feedback.

WP15, WP16, WP17

Communication, Dissemination & Exploitation

Design and implement a robust communication, dissemination and exploitation strategy to raise awareness and give visibility to the project activities.


Ethics requirements

Set out the ethical requirements that the project must comply with throughout its lifetime and provide guidelines to facilitate adherence.

Our Impact



Science

New breakthrough scientific advancements in AI and ML disinformation detection methods. Benchmarking of two types of knowledge graphs: monomodal and multimodal.



Economy

Development of new products with economic value, thanks to the open-source API. Interfaces available for media partners.


Civil society

Better understanding of the risks of disinformation and how to counter/detect it, catalysing trust in online content. Debunking two hot topics: disinformation regarding the war in Ukraine and climate change.