For more than a decade, international organizations have classified disinformation as one of the main threats to democracy. Digital technologies continue to reinvent and profoundly reshape modern lifestyles and the environments in which citizens and businesses operate. AI is bringing a new disruption to the way we access knowledge and create, spread and understand information, blurring the line between genuine and manipulated information.
Digital revolution’s impact on society
The digital revolution is reaching its golden age, building on previous transformations that occurred within less than a decade. By 2024, the number of mobile phone owners worldwide was forecast to reach 7.21 billion. Around 67% of the world's population currently has access to the Internet, up from only 1% in 1995. Change has swept in quickly, like a tsunami, leaving people no opportunity to step back and reflect, and leaving stakeholders little time to adapt skills and education or to convert entire economic sectors. These trends generated remarkable optimism among European decision-makers (digitizing the EU, the digital single market, fast-moving entrepreneurs, etc.). Yet this optimism was echoed by latent pessimism among those who could not catch up: traditional investors, elderly people, employees made redundant in automated sectors, and people with low digital literacy.
The advantages of the digital upheaval for consumers and business investors are colossal: direct access to worldwide information, knowledge and data has become possible. Easy-to-digest knowledge is accessible almost anywhere through social media, internet search engines and specialized applications, as well as online education. Connectivity is boundless: collaborative platforms, including social media, have created direct links among people, and between potential entrepreneurs and consumers. Logistics is substantially facilitated by the ability to order supplies, travel faster or more cheaply, and use GPS, geolocation and instant connections. Last but not least, digital technologies empowered by AI offer huge potential for research solutions in health, climate and infrastructure, including for less advanced countries to adopt.
This brings us to the downsides and stumbling blocks, if not outright threats, of this digital and AI revolution. Indeed, the systems developed by AI are “increasing opportunities to create realistic AI-generated fake content, but also (…) facilitating the dissemination of disinformation to a (micro) targeted audience and at scale by malicious stakeholders” (Bontridder and Poullet 2021). Concerns have been raised about copyright, biased algorithms, business models that use massive amounts of data to deceive individuals, and the replacement of jobs by technology in numerous sectors. AI technologies also make it easier to generate video, text and images based on false information, making it difficult for individuals and the media to know which information to trust (Newman 2024).
EU’s response: regulations, cybersecurity and fact-checking
In contrast to the regulatory vacuum in other democratic countries, the European Union (EU), followed by its member states, has been actively developing policies and initiatives to tackle disinformation, particularly to safeguard democratic processes, protect citizens and promote media literacy. These policies reflect the EU's commitment to addressing the multifaceted challenge of disinformation and to protecting democratic values in the digital age. They are strengthened by the adoption of regulation on disinformation in most EU member states and by closer cooperation among member states on cybersecurity and on countering disinformation.
Finally, negotiations are ongoing between the EU institutions and the main internet platforms to find the best way to tackle disinformation. The question of regulation versus self-regulation is not yet settled and will require further developments in the coming years.
Yet this reflects only part of the efforts to protect citizens against disinformation. The EU and its member states have also invested in cybersecurity agencies to debunk massive disinformation campaigns, and in fact-checkers for the media (such as EDMO, the European Digital Media Observatory). Despite this arsenal of tools, disinformation remains a real threat.
The lines are blurred between the ‘debunking’ of disinformation carried out by private actors and by national security agencies. Whereas citizens are encouraged to watch videos on how to recognize misinformation and to make use of fact-checkers, manipulative disinformation campaigns are still mostly unveiled by national security agencies and the media. France’s Viginum agency, for instance, was set up in 2021 to detect digital interference by foreign entities. In February 2024 the agency uncovered a network of 193 websites spreading disinformation through social media sites and messaging apps. Even for security agencies, characterizing a campaign’s origin is not always easy, particularly when the disinformation or misinformation must be traced back to foreign governments, malevolent actors or mere individuals acting as disruptors.
In addition, hybrid warfare, of which disinformation is only one tool, combines cyberattacks with massive disinformation, creating risks of malevolent influence over the media, governments and public infrastructure, but also over civil society and academia. And this is where the real threat lies.
Future threats and challenges
The threat and danger of disinformation lie in its use of polarization, emotions and stereotypes. Whether a piece of ‘fake news’ goes viral depends on its interaction with divisive issues that appeal to curiosity or to specific emotions in a society. Personalized targeting, based on personal or psychological characteristics, can be combined with Natural Language Generation tools to create content for individual users, potentially leading to active manipulation, as the sketch below illustrates.
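To make this mechanism concrete, here is a minimal, purely illustrative Python sketch of how personalized targeting could be combined with automated text generation. Every name, trait label and template in it is hypothetical, and the simple template lookup stands in for what would in practice be a generative language model:

```python
# Illustrative sketch only: hypothetical profiles and templates showing how
# one narrative can be re-framed per user. A real pipeline would replace
# generate_message() with a generative language model (NLG).

from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    inferred_trait: str   # psychographic label inferred from online behaviour
    divisive_topic: str   # the divisive issue this user engages with most

# Trait-specific framings: the same underlying claim is wrapped in the
# emotional register most likely to resonate with each profile.
FRAMINGS = {
    "anxious": "Experts warn that {topic} is spiralling out of control.",
    "distrustful": "What officials won't tell you about {topic}.",
    "community-minded": "Neighbours like you are organising around {topic}.",
}

def generate_message(profile: UserProfile) -> str:
    """Produce a per-user variant of the same narrative (NLG stand-in)."""
    template = FRAMINGS.get(profile.inferred_trait,
                            "Important update on {topic}.")
    return template.format(topic=profile.divisive_topic)

if __name__ == "__main__":
    audience = [
        UserProfile("user_a", "anxious", "border policy"),
        UserProfile("user_b", "distrustful", "election security"),
    ]
    for user in audience:
        # One narrative, many tailored surfaces: this per-user variation is
        # what makes micro-targeted disinformation hard to observe from outside.
        print(f"{user.name}: {generate_message(user)}")
```

Even this toy version shows why such campaigns are difficult to detect: no two users need ever see the same message, so no single piece of content can be flagged as the campaign.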
Furthermore, the aggressive automated dissemination of disinformation just before the start of a political campaign can negatively impact election results (Wade 2018). “Relying on the collection and manipulation of users’ data in order to anticipate and influence voters’ political opinions and election results, user profiling and micro-targeting may pose a threat to democracy, public debate, and voters’ choices” (Kertysova 2018; Mont’Alverne et al. 2024). This point is extremely important, as 2024 is an election year for half of the world’s population, and interference is currently suspected rather than visible, since evidence is difficult to find.
The digital revolution, including AI, is shaping a future filled with uncertainties, requiring new competencies like e-management and IT skills. While technology opens new opportunities and can lead to positive transformations in various sectors such as health, the economy or civil society empowerment, it also contributes to the erosion of authority and values, which are replaced by endless access to information and connectivity, including fake and manipulated content.
Limits may also be necessary to prevent algorithms or bots from dictating our lifestyles and creating space for the manipulation of elections. A ‘great firewall’ should protect Europe against cyber interference and manipulated content. Maintaining stability and trust in institutions, democratic systems, the media and economic structures will become a challenge in this rapidly changing landscape. Cybersecurity and the fight against disinformation should be among the highest priorities.
References
- Noémi Bontridder and Yves Poullet, 2021, ‘The Role of Artificial Intelligence in Disinformation’, Data & Policy, 3: e32. doi:10.1017/dap.2021.20
- Katarina Kertysova, 2018, ‘Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered’, Security and Human Rights, 29: 55–81
- Camila Mont’Alverne, Sumitra Badrinathan, Amy Ross Arguedas, Benjamin Toff, Richard Fletcher and Rasmus Kleis Nielsen, 2024, ‘The Trust Gap: How and Why News on Digital Platforms Is Viewed More Sceptically Versus News in General’, Reuters Institute, https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-09/MontAlverne_et_al_The_Trust_Gap.pdf
- Nic Newman, 2024, ‘Digital News Project: Journalism, Media and Technology Trends and Predictions 2024’, Reuters Institute, University of Oxford, https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-01/Newman%20-%20Trends%20and%20Predictions%202024%20FINAL.pdf
- Michael Wade, 2018, ‘Psychographics: The Behavioural Analysis That Helped Cambridge Analytica Know Voters’ Minds’, The Conversation, March 21, 2018, https://theconversation.com/psychographics-the-behavioural-analysis-that-helped-cambridge-analytica-know-voters-minds-93675
ABOUT PASCALINE GABORIT
Pascaline Gaborit is a researcher, consultant, expert and trainer. She holds a PhD in political science on the topic ‘Trust and Conflicts’. She is the founder of the think tank/NGO Pilot4DEV and works as a consultant on the evaluation/quality assessment of different international programs and projects.
She has published books and articles on international cooperation, resilience, culture, development, gender equality and climate adaptation. She was director of the Pilot Cities international network for more than 12 years and has worked as an adviser for other think tanks.
She currently works on projects related to research and action on resilience, sustainability, climate adaptation, dialogue, fake news, artificial intelligence, human rights and democracy.