About AI4Debunk
Methodology and outcomes
Our Technologies
Because news today is multimodal and context is essential to interpreting it, AI4Debunk leverages cutting-edge monomodal and multimodal machine learning technologies, together with knowledge graphs, to extract relevant information from data and detect disinformation patterns and trends. We also propose a human-in-the-loop (HITL) mechanism to enhance data quality and foster greater trust in the process.
Language Models and Large Language Models
Vision Models and Large Vision Models
Speech Processing
Multimodal Machine Learning
Human-in-the-loop Mechanisms
Knowledge Graphs
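As an illustration of the human-in-the-loop mechanisms listed above, the minimal sketch below shows one common pattern: predictions the model is unsure about are routed to human reviewers, whose verdicts become curated data. The thresholds, names and routing rule are assumptions made for this example, not the project's implementation.

```python
REVIEW_LOW, REVIEW_HIGH = 0.35, 0.65  # uncertainty band routed to human reviewers (illustrative)

def route(item_id: str, model_score: float, review_queue: list, decisions: dict) -> None:
    """Auto-decide confident cases; queue ambiguous ones for human review."""
    if REVIEW_LOW < model_score < REVIEW_HIGH:
        review_queue.append(item_id)                      # a reviewer will label this item
    else:
        decisions[item_id] = model_score >= REVIEW_HIGH   # True means flagged as suspicious

queue, decisions = [], {}
route("post-001", 0.92, queue, decisions)  # confident prediction, decided automatically
route("post-002", 0.50, queue, decisions)  # ambiguous, sent to a human reviewer
print("automatic decisions:", decisions)
print("awaiting review:", queue)
```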
Our Methodology
Multimodal knowledge graph: integrating diverse data types, such as text, images, videos and audio, to provide a comprehensive view of a topic and reveal inconsistencies across sources.
Monomodal knowledge graph: serving as a backup solution that transforms multimodal data into text (using object detection, image description, action detection in video or audio, etc.).
Monomodal fake news detection modules: distinguishing between authentic and manipulated content (artificially modified or synthetically created) for text, image, video and audio data.
Multimodal fake news detection modules: integrating results from monomodal fake news detectors and contextual information from the knowledge graph to classify news as real or fake based on data patterns in text, images, videos and audio.
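To make the fusion step concrete, here is a minimal sketch of late fusion under simple assumptions: each monomodal detector returns a manipulation probability, the knowledge graph contributes a contradiction signal, and a weighted sum produces the final score. The weights, names and the weighted-sum rule itself are illustrative choices, not the project's actual model.

```python
from dataclasses import dataclass

@dataclass
class MonomodalScores:
    """Manipulation probabilities (0..1) from the monomodal detectors."""
    text: float
    image: float
    video: float
    audio: float

def fuse(scores: MonomodalScores, kg_contradiction: float,
         weights=(0.35, 0.2, 0.15, 0.1, 0.2)) -> float:
    """Weighted late fusion; kg_contradiction is 1.0 when the claim strongly
    contradicts the knowledge graph. Higher output = more likely fake."""
    w_text, w_image, w_video, w_audio, w_kg = weights
    return (w_text * scores.text + w_image * scores.image
            + w_video * scores.video + w_audio * scores.audio
            + w_kg * kg_contradiction)

example = MonomodalScores(text=0.8, image=0.2, video=0.0, audio=0.1)
print(f"fused disinformation score: {fuse(example, kg_contradiction=0.9):.2f}")
```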
AI4Debunk is dedicated to enhancing AI-based models so that they deliver explanations and confidence estimates alongside their fake news detections. This will highlight:
- which aspects of the news piece are most relevant in the detection of fake news;
- which modalities (text, image, audio or video) have a stronger impact on the final decision.
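A minimal sketch of what such an explanation could look like, assuming a weighted fusion of per-modality scores as in the earlier example: each modality's share of the fused score is reported so users can see which modality drove the decision. Names and numbers are illustrative only.

```python
def modality_contributions(scores: dict, weights: dict) -> dict:
    """Return each modality's share of a weighted fusion score."""
    parts = {m: weights[m] * scores[m] for m in scores}
    total = sum(parts.values()) or 1.0   # avoid division by zero
    return {m: round(v / total, 2) for m, v in parts.items()}

scores = {"text": 0.8, "image": 0.2, "audio": 0.1}
weights = {"text": 0.5, "image": 0.3, "audio": 0.2}
print(modality_contributions(scores, weights))
# the text modality carries most of the weighted score in this example
```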
AI4Debunk will conduct two case studies to validate its holistic approach to fighting disinformation:
- Russian propaganda related to the war in Ukraine
- Manipulated content on climate change
These case studies will encompass the identification of target groups, disinformation sources, and propagation methods, and will lead to the formulation of technical requirements and functional needs for AI4Debunk’s interfaces.
The primary goal is to understand disinformation processes, frame and model them, and subsequently develop tools for analysis and counteraction.
Both case studies will be mapped in knowledge graphs, which will offer contextual insights for further fake news detection. The graphs will be updated regularly with curated citizens’ feedback and newly integrated curated data.
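As a rough illustration, and not the project's actual graph schema, the case-study knowledge graph can be thought of as a set of subject-relation-object triples that grows as curated feedback arrives:

```python
# Generic placeholder identifiers (claim:C1, source:S1, reviewer:R1); the real
# graph schema, entities and relations are defined by the project, not here.
triples = {
    ("claim:C1", "topic", "case-study-topic"),
    ("claim:C1", "published_by", "source:S1"),
    ("source:S1", "previously_flagged", "true"),
}

def add_curated_feedback(graph: set, claim: str, verdict: str, reviewer: str) -> None:
    """Curated citizens' feedback enters the graph as new relations."""
    graph.add((claim, "reviewed_as", verdict))
    graph.add((claim, "reviewed_by", reviewer))

add_curated_feedback(triples, "claim:C1", "misleading", "reviewer:R1")
print(len(triples), "triples in the graph")
```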
As part of its methodology, AI4Debunk will perform an overall sociological assessment of the tools developed and their impact on the two case studies:
- Reports and in-depth analysis of resilience mechanisms;
- Engagement with stakeholders through surveys, local group meetings, and focus group discussions;
- Gender equality and engagement processes that operationalise policies, programs, and interventions addressing disinformation mechanisms.
AI4Debunk is committed to promoting media literacy and will create didactic resources to support classroom training in fake news detection, equipping the younger generations with the skills to critically assess and filter information without being deceived. These resources are:
- Comic books
Developed based on the project’s research into fake news, the comic books serve as an engaging tool to raise awareness and provide guidelines on how to detect disinformation online. They will be accompanied by questionnaires to assess learning outcomes.
- Games
Designed and tested in collaboration with teachers, the games offer students an interactive experience, illustrating the consequences of fake news and delivering practical insights.
Both materials will initially be distributed to secondary schools in the Netherlands and Belgium, with plans for broader availability in secondary schools across Europe.
Our Interfaces
API
Open source. Integrates AI and ML algorithms to calculate a disinformation probability score (disinfoscore) and flag suspicious content.
Upon receiving content as input, the API will promptly return its disinfoscore. For manipulated content, it will also identify the regions where the original signal may have been tampered with. Additionally, it will provide factual and contextual information that corroborates the content’s score.
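For illustration, a client call might look like the sketch below. The endpoint URL, request fields and response keys are placeholders chosen for this example; the actual API specification may differ.

```python
import requests

API_URL = "https://api.example.org/ai4debunk/v1/score"   # placeholder endpoint

payload = {"content_type": "text",
           "content": "Example article text to be checked."}

resp = requests.post(API_URL, json=payload, timeout=30)
resp.raise_for_status()
result = resp.json()

print("disinfoscore:", result.get("disinfoscore"))           # disinformation probability score
print("tampered regions:", result.get("tampered_regions"))   # where the original signal may be altered
print("supporting context:", result.get("context"))          # factual/contextual corroboration
```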
Web plug-in
Disinfopedia
App
Developed for smartphones and tablets, the app will verify the accuracy of online content, including news articles, social media posts, product reviews and images. Users will be able to submit information (URLs, text or images), and the app will indicate its risk of being false.
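One simple way the app could translate a disinfoscore into the risk level shown to users is sketched below; the thresholds are illustrative assumptions, not project values.

```python
def risk_level(disinfoscore: float) -> str:
    """Map a 0..1 disinformation probability to a coarse risk label."""
    if disinfoscore < 0.33:
        return "low"
    if disinfoscore < 0.66:
        return "medium"
    return "high"

for score in (0.12, 0.48, 0.91):
    print(f"{score:.2f} -> {risk_level(score)}")
```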