Media in the age of AI: main takeaways from LAMPA Conversation Festival

On July 6, Prof. Inna Šteinbuka from the University of Latvia, our project coordinator, participated in a thought-provoking panel discussion at LAMPA Conversation Festival, titled “The threat of artificial intelligence to secure information will not be stopped even by reliable media. Do you (dis)agree?”.

The LAMPA Conversation Festival takes place in Cēsis and is renowned for fostering open dialogue and the exchange of ideas in a relaxed atmosphere, debating issues pressing for Latvia, Europe, and the world.

Organized by LSM.lv, the unified news portal of Latvia’s public television, radio and online services, this panel discussion focused on the challenges and opportunities posed by AI in the context of information security and media trustworthiness.

The conversation was moderated by LSM.lv news editor Edgars Zicāns and featured distinguished panelists from various fields:

  • Inna Šteinbuka, head of the AI4Debunk project and professor at the University of Latvia
  • Jānis Sārts, director of the NATO Strategic Communication Centre of Excellence
  • Andrejs Judins, member of the Saeima and chairman of the Saeima Legal Commission
  • Kārlis Miksons, social media editor of the Latvian Television news service and member of the European Broadcasting Union’s social media expert group
  • Kristiāna Grāmatiņa, manager of “Is it easy to be?”, a podcast about information technology


The panelists engaged in an insightful discussion, addressing key topics such as:


1. AI’S ROLE IN DISINFORMATION

AI technologies are increasingly sophisticated in creating and spreading disinformation, from imitating influential figures to generating realistic fake videos. This capability challenges our ability to trust what we see and hear.

“I would like to remind you of a saying by the well-known writer Mark Twain: it is easier to deceive a person than to convince him that he has been deceived. He lived a long time ago, but I think that if he lived now, his words would be even more relevant, because now we ourselves don’t know whether we can believe our eyes and ears or not.”

2. CRITICAL THINKING AND TRUST

The speakers emphasized the importance of critical thinking. People who think critically and understand context are harder to mislead. However, even educated individuals can fall for conspiracy theories, which highlights the pervasive challenge of misinformation and disinformation.

3. HUMAN PERCEPTION AND AI

Jānis Sārts from NATO’s Strategic Communication Centre of Excellence noted that human brains, which are easily deceived by magic tricks, are similarly susceptible to AI-generated content. He stressed the need for using AI to counteract misinformation and disinformation.

“I love the quote that says the problem with our times is that we have Paleolithic brains, medieval structures, and godlike technology, and it just doesn’t go well together. (...) Then the question is, how do we deal with it? I think there’s only one answer: the same artificial intelligence is what you need to bring into your toolkit to tackle this problem. Because, sorry, our Paleolithic brain is not suited for that.”

4. LEGAL AND REGULATORY MEASURES

Andrejs Judins, chairman of the Saeima Legal Commission, discussed evolving legal frameworks to tackle AI’s role in political disinformation, including potential criminal liability for deepfakes during elections.

5. EDUCATIONAL IMPERATIVE

The panel underscored the necessity of integrating AI education into the school curriculum to prepare future generations to navigate and utilize these technologies effectively.

“(...) As with anything else technological, those who learn first will gain a lot and there will be many who will be left behind, without work, career opportunities and so on. (...) This also applies to the school - we must integrate artificial intelligence into our education system. (...) If we want our young people to be able to cope with this in the future, we have to train them to use these abilities almost automatically.”

6. COLLECTIVE RESPONSIBILITY

While individuals must learn to critically analyze information, there is also a collective responsibility. This includes regulation of major social media platforms to curb the spread of misinformation and disinformation.

“We need to be able to regulate these giants, because they are the primary platforms where misinformation spreads very quickly and reaches people. (...) We can each be responsible for ourselves. We can learn to critically analyze information as much as we can, but it’s kind of a collective thing.”

7. MEDIA AND AI

The implications of AI for the future of media were also debated. Traditional media may need to adapt by integrating AI tools to maintain credibility and effectiveness in delivering accurate information.

8. PUBLIC ADOPTION OF AI

A survey by the research centre Norstat and LSM.lv revealed that 44% of Latvian residents have used AI tools, with younger respondents more likely to have engaged with these technologies. This points to a growing familiarity with, and reliance on, AI across different demographics.

9. AI4DEBUNK PROJECT

Inna Šteinbuka from the University of Latvia informed the audience about AI4Debunk’s goals. The project focuses on developing human-friendly tools to debunk disinformation, emphasizing the necessity of creating accessible and effective methods for the public to identify and combat false information.

Photo credits: Karīna Egliena and Kristaps Briedis

In a nutshell, the discussion brought to light the double-edged nature of AI in media and information security, advocating for a balanced approach that leverages AI’s benefits while mitigating its risks through education, regulation, and critical thinking.

🎦 Watch the recorded panel discussion (in Latvian) here.

🔤 A written summary is available below the video. To read it in English, enable the automatic translation feature in your browser.

More from AI4Debunk

Pilot4DEV hosted a multi-stakeholder meeting, bringing together leading experts, researchers, and practitioners to explore how our AI tools can be designed to effectively tackle disinformation.
Georgi Gotev, founder and senior editor at EURACTIV Bulgaria, shares his personal experiences battling fake news as a journalist over the years, exploring the specific disinformation landscape in Bulgaria.