Scientific Publications
Authors:
Axel Brando, Lucas Berry, David Meger
Abstract:
Generative diffusion models, notable for their large parameter count (exceeding 100 million) and operation within high-dimensional image spaces, pose significant challenges for traditional uncertainty estimation methods due to computational demands. In this work, we introduce an innovative framework, Diffusion Ensembles for Capturing Uncertainty (DECU), designed to estimate epistemic uncertainty in diffusion models. The DECU framework introduces a novel method that efficiently trains ensembles of conditional diffusion models by incorporating a static set of pre-trained parameters, drastically reducing the computational burden and the number of parameters that require training. Additionally, DECU employs Pairwise-Distance Estimators (PaiDEs) to accurately measure epistemic uncertainty by evaluating the mutual information between model outputs and weights in high-dimensional spaces. The effectiveness of this framework is demonstrated through experiments on the ImageNet dataset, highlighting its capability to capture epistemic uncertainty, specifically in under-sampled image classes.
Type of publication:
Conference paper
Publisher:
The 40th Conference on Uncertainty in Artificial Intelligence
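The abstract above relies on Pairwise-Distance Estimators (PaiDEs) to approximate the mutual information between model outputs and weights. The following is a minimal sketch of that idea, assuming (purely for illustration) that each ensemble member summarises its prediction as a diagonal Gaussian and that the mixture entropy is approximated with a pairwise-KL bound in the style of Kolchinsky and Tracey; it is not the paper's implementation.

```python
import numpy as np

def kl_diag_gaussians(mu_i, var_i, mu_j, var_j):
    """KL(N(mu_i, var_i) || N(mu_j, var_j)) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_j / var_i) + (var_i + (mu_i - mu_j) ** 2) / var_j - 1.0
    )

def paide_mutual_information(mus, variances, weights=None):
    """Pairwise-distance estimate of I(y; theta) for an ensemble.

    mus, variances: arrays of shape (M, D), one diagonal Gaussian per member.
    With the mixture entropy approximated through pairwise KL divergences,
    the mutual information reduces to
    -sum_i pi_i * log( sum_j pi_j * exp(-KL(p_i || p_j)) ).
    """
    M = len(mus)
    pi = np.full(M, 1.0 / M) if weights is None else np.asarray(weights)
    # Pairwise KL divergences between all ensemble components.
    D = np.array([[kl_diag_gaussians(mus[i], variances[i], mus[j], variances[j])
                   for j in range(M)] for i in range(M)])
    # Epistemic uncertainty estimate from the pairwise-distance bound.
    return float(-np.sum(pi * np.log(np.sum(pi[None, :] * np.exp(-D), axis=1))))

# Toy usage: five ensemble members predicting a 4-dimensional output.
rng = np.random.default_rng(0)
mus = rng.normal(size=(5, 4))
variances = np.ones((5, 4)) * 0.1
print(f"estimated epistemic uncertainty: {paide_mutual_information(mus, variances):.3f}")
```

When all members agree (identical Gaussians), the pairwise distances vanish and the estimate is zero; disagreement between members drives it up, which is the epistemic signal the framework exploits.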
Authors:
Coccomini D.A., Caldelli R., Gennaro C., Fiameni G., Amato G., Falchi F.
Abstract:
In recent times, the increasing spread of synthetic media known as deepfakes has been made possible by the rapid progress in artificial intelligence technologies, especially deep learning algorithms. Growing worries about the increasing availability and believability of deepfakes have spurred researchers to concentrate on developing methods to detect them. In this field, researchers at ISTI CNR’s AIMH Lab, in collaboration with researchers from other organizations, have conducted research, investigations, and projects to contribute to combating this trend, exploring new solutions and threats. This article summarizes the most recent efforts made in this area by our researchers, both independently and in collaboration with other institutions and experts.
Type of publication:
Conference paper
Publisher:
CEUR Workshop Proceedings
Authors:
Stanciu, Cristian and Ionescu, Bogdan and Cuccovillo, Luca and Papadopoulos, Symeon and Kordopatis-Zilos, Giorgos and Popescu, Adrian and Caldelli, Roberto
Abstract:
Synthetic media generation and manipulation have seen rapid advancements in recent years, making it increasingly easy to create multimedia content that is indistinguishable to the human observer. Moreover, generated content can be used maliciously by individuals and organizations in order to spread disinformation, posing a significant threat to society and democracy. Hence, there is an urgent need for AI tools geared towards facilitating a timely and effective media verification process. The MAD’24 workshop seeks to bring together people with diverse backgrounds who are dedicated to combating disinformation in multimedia through the means of AI, by fostering an environment for exploring innovative ideas and sharing experiences. The research areas of interest encompass the identification of manipulated or generated content, along with the investigation of the dissemination of disinformation and its societal repercussions. Recognizing the significance of multimedia, the workshop emphasizes the joint analysis of various modalities within content, as verification can be improved by aggregating multiple forms of content.
Type of publication:
Conference paper
Publisher:
Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation
Authors:
Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Abstract:
Watching a video sequence in which a foreground person appears is no longer what it used to be. Deepfakes have revolutionized the way we look at such content, and nowadays we more and more often wonder whether what we are seeing is real or just a mystification. In this context of generalized disinformation, there is a strongly emerging need for reliable solutions that help common users, and not only them, to make an assessment of this kind of video sequence. In this paper, a novel approach which leverages temporal surface frame anomalies in order to reveal deepfake videos is introduced. The method searches for possible discrepancies, induced by deepfake manipulation, in the surfaces belonging to the captured scene and in their evolution along the temporal axis. These features are used as input to a pipeline based on deep neural networks that performs a binary assessment on the video itself. Experimental results show that such a methodology can achieve significant performance in terms of detection accuracy.
Type of publication:
Conference paper
Publisher:
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Authors:
Pascaline Gaborit
Abstract:
International organizations have classified disinformation as one of the main threats to democracy and institutions for more than a decade. Digital technologies are reinventing and profoundly transforming modern lifestyles and the environments of citizens and businesses. AI is bringing a new disruption in the way we access knowledge and create, spread and understand information. It can also blur the lines between real information and manipulated information with the emergence of ‘Fake News’, automatic networks’ cross-referencing, and ‘Deep Fakes’. AI systems enhance the potential for creating realistic fake content and targeted disinformation campaigns. Disinformation goes beyond simple rumors to deliberately deceive and distort evidence-based information through fabricated data. European institutions have also recently focused on the identification of disinformation linked to FIMI: Foreign Information Manipulation and Interference. The article identifies trends and concerns related to disinformation and AI. It explores the perception of disinformation, its impacts, and responses including the EU AI Act and online Platforms’ policies. It provides a first analytical approach to the topic based on the current debates by researchers, the first findings of our 2024 surveys, interviews and the analysis of hundreds of online fake news items. It attempts to understand how citizens and stakeholders perceive disinformation and identifies possible impacts. It also analyzes the current challenges and constraints, opportunities and limitations to tackle manipulation and interference. The article considers the current processes and impacts of disinformation (2), the presentation of the main findings of our online survey on the perceptions of disinformation (3), the current EU regulatory responses (4) and the Discussion Points (5). We argue in this article that there is a gigantic change in the way that we access information, but that the responses to disinformation are still at an early stage. The article also demonstrates that there is an increased awareness in European countries about the impacts of disinformation, but also a gap between the ability to identify “fake news” and disinformation, and a limited understanding of the processes, threats, and actors involved in spreading disinformation.
Type of publication:
Journal paper
Publisher:
Journal of Political Science and International Relations
Authors:
Tomas Garriga, Gerard Sanz, Eduard Serrahima de Cambra, Axel Brando
Abstract:
In this paper, we present a novel application of structural causal models (SCMs) and the abduction-action-prediction procedure to a time series setting in the context of a real-world problem in the pharmaceutical industry. We aim to estimate counterfactuals for the sales volume of a drug that has been impacted by the market entry of a competitor generic drug. We employ encoder-decoder based architectures, applying a conditional variational autoencoder and also introducing the use of conditional sparse autoencoders, which had not previously been used in the counterfactual literature. The proposed methodology requires the availability of historical event and event-less time series and has the advantage of not relying on control covariates that may be unavailable, while clearly outperforming the basic counterfactual estimate of a forecast. We evaluate our approach using our company’s real-world sales dataset, as well as synthetic and semi-synthetic datasets that mimic the problem context, demonstrating its effectiveness. We have successfully applied this model in our company, providing useful information for business planning, investment allocation and objective setting.
Type of publication:
Workshop article
Publisher:
NeurIPS’24 Workshop on Causal Representation Learning
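The abstract above describes counterfactual estimation with conditional encoder-decoder models following the abduction-action-prediction procedure. The sketch below illustrates that workflow on toy data with a hypothetical ConditionalAE: encode the observed (post-entry) series with the event indicator set to 1, then decode the same latent code with the indicator switched to 0 to obtain a "no competitor entry" counterfactual trajectory. The model size, training loop and data are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalAE(nn.Module):
    """Toy conditional autoencoder: the event indicator c is appended to the
    input of both the encoder and the decoder."""
    def __init__(self, series_len: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(series_len + 1, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 1, 64), nn.ReLU(), nn.Linear(64, series_len)
        )

    def forward(self, x, c):
        z = self.encoder(torch.cat([x, c], dim=-1))
        return self.decoder(torch.cat([z, c], dim=-1))

    def counterfactual(self, x, c_factual, c_counterfactual):
        z = self.encoder(torch.cat([x, c_factual], dim=-1))            # abduction
        return self.decoder(torch.cat([z, c_counterfactual], dim=-1))  # action + prediction

# Toy data: 256 sales series of length 24, flagged by whether a competitor entered.
torch.manual_seed(0)
series = torch.randn(256, 24).cumsum(dim=-1)
event = torch.randint(0, 2, (256, 1)).float()
series = series - 2.0 * event * torch.linspace(0, 1, 24)  # entry depresses sales

model = ConditionalAE(series_len=24)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(series, event), series)
    loss.backward()
    opt.step()

# Counterfactual: what the affected series would look like without the entry.
affected = series[event[:, 0] == 1]
with torch.no_grad():
    cf = model.counterfactual(affected,
                              torch.ones(len(affected), 1),
                              torch.zeros(len(affected), 1))
print(cf.shape)  # one counterfactual trajectory per affected series
```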
Authors:
Anastasios Manos, Despina Elisabeth Filippidou, Nikolaos Pavlidis, Georgios Karanasios, Georgios Vachtanidis, Arianna D’Ulizia
Abstract:
The rise of digital platforms has facilitated the rapid spread of disinformation, which poses significant social, political, and economic challenges. Knowledge graphs (KGs) are emerging as effective tools for enhancing the accuracy, interpretability, and scalability of fake news detection systems, addressing limitations in traditional machine learning-based approaches that rely primarily on linguistic analysis. This work contains a literature review that synthesizes findings from recent studies on the application of KGs in disinformation detection. We identify how KGs improve detection by encoding real relationships, analyzing context, and enhancing model interpretability, while also discussing current limitations in scalability, data completeness, and contextual adaptability. The reviewed studies underscore the need for future research focusing on scalable, real-time, and cross-linguistic KG models to bolster disinformation detection capabilities globally. Moreover, we present preliminary results of two use cases, showcasing a methodology for constructing KGs that can serve as useful tools to fight against disinformation spread.
Type of publication:
Conference paper
Publisher:
2024 International Conference on Engineering and Emerging Technologies (ICEET)
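The abstract above points to knowledge graphs as a backbone for disinformation detection. Purely as an illustration of the kind of construction and retrieval step such a methodology involves (not the paper's own pipeline), the sketch below builds a tiny claim-entity graph with networkx and retrieves previously fact-checked claims that share entities with a new claim; the claim texts, entities and verdicts are hypothetical.

```python
import networkx as nx

# Tiny illustrative knowledge graph: claim nodes linked to the entities they mention.
kg = nx.Graph()
checked_claims = {
    "c1": ("Vaccine X causes illness Y", "false"),
    "c2": ("City Z banned vaccine X", "false"),
    "c3": ("Organisation W funds research on illness Y", "true"),
}
mentions = {"c1": ["vaccine X", "illness Y"],
            "c2": ["city Z", "vaccine X"],
            "c3": ["organisation W", "illness Y"]}

for claim_id, (text, verdict) in checked_claims.items():
    kg.add_node(claim_id, kind="claim", text=text, verdict=verdict)
for claim_id, entities in mentions.items():
    for entity in entities:
        kg.add_node(entity, kind="entity")
        kg.add_edge(claim_id, entity, relation="mentions")

def related_checked_claims(new_claim_entities):
    """Return fact-checked claims sharing at least one entity with a new claim."""
    hits = set()
    for entity in new_claim_entities:
        if entity in kg:
            hits.update(n for n in kg.neighbors(entity)
                        if kg.nodes[n].get("kind") == "claim")
    return [(c, kg.nodes[c]["text"], kg.nodes[c]["verdict"]) for c in sorted(hits)]

print(related_checked_claims(["vaccine X"]))  # retrieves c1 and c2 with their verdicts
```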
Authors:
Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Abstract:
This paper explores the European Union’s multifaceted response to the pervasive issue of disinformation, a challenge that has intensified since the annexation of Crimea in 2014. Disinformation poses significant threats to democratic processes and public welfare. The European Union’s approach combines regulatory measures, strategic partnerships, and media literacy initiatives to address this phenomenon while safeguarding core democratic principles, such as freedom of expression. Key measures include the Code of Practice on Disinformation and the Digital Services Act, which aim to hold digital platforms accountable and ensure transparency. Furthermore, initiatives such as the East StratCom Task Force and the Rapid Alert System highlight the European Union’s efforts to counter disinformation as a tool of hybrid warfare. This paper also emphasizes the critical role of citizens, whom the European Union seeks to empower through media literacy programs, enabling them to recognize and resist manipulative content. By examining the interactions between government actions, private sector involvement, and citizen engagement, this study provides a comprehensive analysis of the European Union’s strategy against disinformation and assesses the challenges and future directions necessary to sustain democratic resilience in an evolving digital landscape.
Type of publication:
Scientific article
Publisher:
Information Polity
Authors:
Alessia D’Andrea, Giorgia Fusacchia, Arianna D’Ulizia
Abstract:
Purpose
Using a multidisciplinary approach, this study aims to trace the path of disinformation campaigns from their detection through linguistic cues of credibility, to their furtherance through dissemination mechanisms, and lastly to an assessment of their impact on the socio-political context.
Design/methodology/approach
This study provides an in-depth overview of four fundamental aspects of disinformation: the linguistic features that distinguish content designed to deceive and manipulate public opinion, the media mechanisms that facilitate its dissemination by exploiting the cognitive processes of its audience, the threats posed by the increasing use of generative artificial intelligence to spread disinformation and the broader consequences these disinformation dynamics have on public opinion and, consequently, on political decision-making processes.
Findings
As a result, the paper provides an interdisciplinary and holistic examination of the phenomenon, referring to its pluralized elements to highlight the importance of platform responsibility, media literacy campaigns among citizens and interactive cooperation between private and public sectors as measures to enhance resilience against the threat of disinformation.
Originality/value
The study highlights the need to increase platform accountability, promote media literacy among individuals and develop cooperation between the public and private sectors. Strengthening resilience to disinformation and ensuring the EU’s adaptability in the face of changing digital threats are the goals of this integrated strategy. Ultimately, the paper advocates a fair and open strategy that protects freedom of expression and strengthens democratic institutions at a time when digital disinformation is on the rise.
Type of publication:
Scientific article
Publisher:
Journal of Information, Communication and Ethics in Society (2025)
Authors:
Eddie Conti, Alejandro Astruc, Álvaro Parafita, Axel Brando
Abstract:
Understanding how information propagates through Transformer models is a key challenge for interpretability. In this work, we study the effects of minimal token perturbations on the embedding space. In our experiments, we analyze how frequently different tokens yield minimal shifts, highlighting that rare tokens usually lead to larger shifts. Moreover, we study how perturbations propagate across layers, demonstrating that input information is increasingly intermixed in deeper layers. Our findings validate the common assumption that the first layers of a model can be used as proxies for model explanations. Overall, this work introduces the combination of token perturbations and shifts in the embedding space as a powerful tool for model interpretability.
Type of publication:
Workshop article
Publisher:
IJCAI 2025 Workshop on Explainable Artificial Intelligence
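As a concrete illustration of the kind of measurement the abstract above describes, the sketch below perturbs a single input token and tracks the resulting hidden-state shift at every layer of a pretrained Transformer. The choice of bert-base-uncased, the perturbed position and the L2 shift are assumptions for illustration; the paper's exact perturbation protocol and shift definition may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

enc = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="pt")
original_ids = enc["input_ids"]

# Minimal perturbation: swap a single token id (position 4, "fox" -> "cat").
perturbed_ids = original_ids.clone()
perturbed_ids[0, 4] = tokenizer.convert_tokens_to_ids("cat")

with torch.no_grad():
    h_orig = model(input_ids=original_ids,
                   attention_mask=enc["attention_mask"]).hidden_states
    h_pert = model(input_ids=perturbed_ids,
                   attention_mask=enc["attention_mask"]).hidden_states

# Mean L2 shift of token embeddings per layer (layer 0 is the embedding output).
for layer, (a, b) in enumerate(zip(h_orig, h_pert)):
    shift = (a - b).norm(dim=-1).mean().item()
    print(f"layer {layer:2d}: mean embedding shift {shift:.4f}")
```

Plotting these per-layer shifts is one simple way to observe how a localized perturbation spreads and intermixes as depth increases.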
Authors:
Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Abstract:
Disinformation is increasingly produced by resorting to altered or fully synthetically generated realistic contents, generally known as deepfakes. This raises the need for reliable instruments able to reveal such phony contents and avoid the polarization of people’s thinking. Diverse techniques have been presented, whose common denominator is to search for inconsistencies induced by the fake generation process and evidenced by looking at specific characteristics. The combination of various distinctive features appears likely to provide more effective results. In this view, and following our previous study on this matter, we propose to investigate a new feature which takes into account the built-in relations existing within the entire acquisition scene at the time of image (video) capture, as represented by the image pixel values. The underlying idea is that the deepfake generation process can break such intrinsic dependencies among illumination (intensity and direction), object surfaces (position and reflectance) and acquisition noise, thus evidencing a possible manipulation. According to this, a feature descriptor, named Local Surface Descriptor, has been constructed and used to train CNN-based models for deepfake detection. Experimental results obtained in different operative contexts demonstrate significant detection accuracy, up to 90%.
Type of publication:
Conference paper
Publisher:
2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), Abu Dhabi, United Arab Emirates
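The abstract above trains CNN-based models on Local Surface Descriptor maps to produce a real/fake decision. The sketch below shows the general shape of such a binary classifier in PyTorch; the descriptor maps are assumed to be precomputed and stacked as multi-channel images, and the channel count, network depth and input resolution are hypothetical rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SurfaceDescriptorClassifier(nn.Module):
    """Small CNN mapping stacked surface-descriptor channels to a real/fake logit."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: fake vs. real

    def forward(self, descriptor_maps):
        x = self.features(descriptor_maps).flatten(1)
        return self.classifier(x)

# Toy batch of precomputed descriptor maps (8 frames, 3 channels, 128x128).
maps = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

model = SurfaceDescriptorClassifier(in_channels=3)
loss = nn.functional.binary_cross_entropy_with_logits(model(maps), labels)
loss.backward()
print(f"toy training loss: {loss.item():.3f}")
```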
Authors:
Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Abstract:
The capacity of AI to generate realistic, fully synthetic images is improving day by day, and this is particularly true for pictures representing human faces, which appear indistinguishable from real people. This poses the crucial need to develop instruments able to discern between real and non-existing people by detecting possible inconsistencies embedded within the images during the generation process. The main difference between a pristine picture and a deepfake-generated one is that, in the second case, no actual camera acquisition has taken place; therefore, the various interrelationships among the elements belonging to the scene (lights, reflectance, the respective positions of objects in 3D space) are not captured from the real world at that precise time instant but merely reproduced artificially. Based on this consideration, in this work we introduce local camera surface frames as a possible means to represent these specific environmental characteristics in order to highlight differences. The experimental analysis carried out shows that this feature can grant a very high level of accuracy and a significant degree of generalization.
Type of publication:
Conference paper
Publisher:
2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), Abu Dhabi, United Arab Emirates
Authors:
Andrea Ciamarra, Roberto Caldelli, Alberto Del Bimbo
Abstract:
Extraordinary unreal images can be created with powerful AI techniques. Various tools available to everyone are able to produce high-quality content, especially entire, fully synthetic images. Among the existing architectures, diffusion-based models can easily produce any kind of image, including human facial images, given a prompt such as a text. Such false contents are often used to spread disinformation, and this raises concerns about people’s security. At present, it is becoming hard to develop reliable instruments to distinguish between real and generated (even non-existing) people. Moreover, the large number of diffusion-based implementations poses the problem, for such detectors, of generalising to novel generative techniques. To address these issues, we propose to investigate the capacity of a distinctive feature, based on the image acquisition environment, to distinguish diffusion-based face images from pristine ones. In fact, generated images should not contain the characteristics that are proper to the acquisition phase performed through a real camera. Such inconsistencies can be highlighted by means of the recently introduced local surface frames. This feature takes into account objects and surfaces involved in the scene, which all impact the camera acquisition process, along with further intrinsic information tied to the device, as well as the lighting and reflections affecting the entire scenario. The paper explores the ability of this feature to generalise to different datasets and new generative methods unknown during training. Experimental results highlight that such a feature still provides significant levels of detection accuracy in these cases as well.
Type of publication:
Conference paper
Publisher:
2024 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), Abu Dhabi, United Arab Emirates