Scientific Publications

Find all the publications produced by AI4Debunk partners, presenting the latest scientific findings of the project.

Authors:
Axel Brando, Lucas Berry, David Mege

Abstract:
Generative diffusion models, notable for their large parameter counts (exceeding 100 million) and their operation in high-dimensional image spaces, pose significant challenges for traditional uncertainty estimation methods due to computational demands. In this work, we introduce an innovative framework, Diffusion Ensembles for Capturing Uncertainty (DECU), designed to estimate the epistemic uncertainty of diffusion models. The DECU framework introduces a novel method that efficiently trains ensembles of conditional diffusion models by incorporating a static set of pre-trained parameters, drastically reducing the computational burden and the number of parameters that require training. Additionally, DECU employs Pairwise-Distance Estimators (PaiDEs) to accurately measure epistemic uncertainty by evaluating the mutual information between model outputs and weights in high-dimensional spaces. The effectiveness of this framework is demonstrated through experiments on the ImageNet dataset, highlighting its capability to capture epistemic uncertainty, specifically for under-sampled image classes.

Type of publication:
Conference paper

Publisher:
The 40th Conference on Uncertainty in Artificial Intelligence

Access here

Authors:
Coccomini D.A., Caldelli R., Gennaro C., Fiameni G., Amato G., Falchi F.

Abstract:
In recent times, the increasing spread of synthetic media, known as deepfakes, has been made possible by rapid progress in artificial intelligence technologies, especially deep learning algorithms. Growing concerns about the availability and believability of deepfakes have spurred researchers to concentrate on developing methods to detect them. In this field, researchers at ISTI-CNR's AIMH Lab, in collaboration with researchers from other organizations, have conducted research, investigations, and projects to help combat this trend, exploring new solutions and threats. This article summarizes the most recent efforts made in this area by our researchers, together with other institutions and experts.

Type of publication:
Conference paper

Publisher:
CEUR Workshop Proceedings

Access here

Authors:
Cristian Stanciu, Bogdan Ionescu, Luca Cuccovillo, Symeon Papadopoulos, Giorgos Kordopatis-Zilos, Adrian Popescu, Roberto Caldelli

Abstract:
Synthetic media generation and manipulation have seen rapid advancements in recent years, making it increasingly easy to create multimedia content that is indistinguishable to the human observer. Moreover, generated content can be used maliciously by individuals and organizations in order to spread disinformation, posing a significant threat to society and democracy. Hence, there is an urgent need for AI tools geared towards facilitating a timely and effective media verification process. The MAD'24 workshop seeks to bring together people with diverse backgrounds who are dedicated to combating disinformation in multimedia through the means of AI, by fostering an environment for exploring innovative ideas and sharing experiences. The research areas of interest encompass the identification of manipulated or generated content, along with the investigation of the dissemination of disinformation and its societal repercussions. Recognizing the significance of multimedia, the workshop emphasizes the joint analysis of various modalities within content, as verification can be improved by aggregating multiple forms of content.

Type of publication:
Conference paper

Publisher:
Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation

Access here

Authors:
Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo

Abstract:
Watching a video sequence that features a foreground person is no longer what it used to be. Deepfakes have revolutionized the way we view such content, and nowadays we increasingly wonder whether what we are seeing is real or merely a fabrication. In this context of widespread disinformation, there is a pressing need for reliable solutions that help users, both ordinary and expert, assess this kind of video sequence. In this paper, a novel approach that leverages temporal surface frame anomalies to reveal deepfake videos is introduced. The method searches the surfaces of the captured scene, and their evolution along the temporal axis, for discrepancies induced by deepfake manipulation. These features are fed into a pipeline based on deep neural networks that performs a binary assessment of the video itself. Experimental results demonstrate that such a methodology can achieve significant detection accuracy.

Type of publication:
Conference paper

Publisher:
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Access here

Authors:
Pascaline Gaborit

Abstract:
International organizations have classified disinformation as one of the main threats to democracy and institutions for more than a decade. Digital technologies are reinventing and profoundly transforming modern lifestyles and the environments of citizens and businesses. AI is bringing a new disruption in the way we access knowledge and create, spread and understand information. It can also blur the lines between real and manipulated information with the emergence of 'Fake News', automatic networks' cross-referencing, and 'Deep Fakes'. AI systems enhance the potential for creating realistic fake content and targeted disinformation campaigns. Disinformation goes beyond simple rumors to deliberately deceive and distort evidence-based information through fabricated data. European institutions have also recently focused on the identification of disinformation linked to FIMI: Foreign Information Manipulation and Interference. The article identifies trends and concerns related to disinformation and AI. It explores the perception of disinformation, its impacts, and responses including the EU AI Act and online platforms' policies. It provides a first analytical approach to the topic based on the current debates among researchers, the first findings of our 2024 surveys, interviews, and the analysis of hundreds of online fake news items. It attempts to understand how citizens and stakeholders perceive disinformation and identifies possible impacts. It also analyzes the current challenges and constraints, opportunities and limitations in tackling manipulation and interference. The article considers the current processes and impacts of disinformation (2), the presentation of the main findings of our online survey on the perceptions of disinformation (3), the current EU regulatory responses (4), and the discussion points (5). We argue in this article that there is a sweeping change in the way we access information, but that the responses to disinformation are still at an early stage.
The article also demonstrates that there is increased awareness in European countries about the impacts of disinformation, but also a gap between the ability to identify "fake news" and disinformation, and a limited understanding of the processes, threats, and actors involved in spreading disinformation.

Type of publication:
Journal paper

Publisher:
Journal of Political Science and International Relations

Access here