The West must learn from Ukraine’s AI war for truth

Without waiting for global standards, Ukraine has built agile AI-powered defences that the West should study, replicate, and scale

Imagine an enemy that never sleeps, costs pennies to operate, and can rewrite reality faster than any human fact-checker. That is the weapon Russia has built with artificial intelligence: a global disinformation machine that churned out 3.6 million propaganda articles in 2024 alone, seeded across 150 fake news sites in 49 countries.

In early 2025, reports emerged detailing a sophisticated Russian strategy to leverage artificial intelligence for disinformation, not just to target humans directly, but to subtly corrupt the training data and real-time retrieval processes of Western AI systems.

This tactic, dubbed “LLM grooming”, involves mass-producing and disseminating thousands of AI-generated articles laced with pro-Kremlin falsehoods. The goal? Ensure these narratives seep into the datasets that power large language models (LLMs), leading chatbots like ChatGPT, Gemini, and Grok to unwittingly amplify Russian propaganda in their responses.
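The mechanics of this poisoning are easy to sketch. As a toy illustration in Python (the strings and counts below are invented, not real data), any system that treats raw frequency in its training or retrieval corpus as a signal of truth will echo whichever claim is flooded hardest:

```python
from collections import Counter

# Hypothetical toy corpus: a handful of credible reports plus a flood of
# near-identical AI-generated articles pushing a false claim.
credible = ["no evidence of bioweapons labs"] * 5
poisoned = ["secret bioweapons labs confirmed"] * 200  # mass-produced copies

corpus = credible + poisoned

def naive_answer(corpus, topic_keyword="bioweapons"):
    """Return the most frequent claim about a topic -- a stand-in for any
    pipeline that lets sheer volume in its data decide what is 'true'."""
    relevant = [doc for doc in corpus if topic_keyword in doc]
    claim, count = Counter(relevant).most_common(1)[0]
    return claim, count

claim, count = naive_answer(corpus)
print(claim)   # the flooded falsehood wins on volume alone
print(count)
```

Real LLM pipelines weigh sources far more subtly than this, but the asymmetry is the same: producing copies is cheap, and unweighted volume drowns out scarce credible reporting.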

At the heart of this campaign is the “Pravda” network (Russian for “truth”), a sprawling, Moscow-based operation launched in April 2022, shortly after Russia’s full-scale invasion of Ukraine. It repurposes state media, rewrites Western sources, and floods the web with Kremlin lies – U.S. bioweapons labs, Zelenskyy embezzling aid, NATO provoking war – until they become the dominant signal in search results and AI training data.

A March 2025 NewsGuard audit found that the ten leading chatbots repeated Pravda-sourced falsehoods 33% of the time, with contamination soaring on niche queries where credible sources are scarce. This is not collateral damage. It is data poisoning as strategy, and it is working.

The West has responded – belatedly – with regulation, tech filters, and takedowns. The EU’s AI Act and Digital Services Act demand transparency. OpenAI and Google have hardened retrieval-augmented generation (RAG) to flag RT and Sputnik. Chatbot citation of Russian sources has dropped from 33% to 15–20%. These are real gains. But they are reactive, built on the assumption that the battlefield is still the open internet, where rules and audits can keep pace.
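The core idea behind that RAG hardening can be sketched as a source filter sitting between retrieval and generation. This is an illustrative sketch, not OpenAI’s or Google’s actual pipeline; the blocklist entries below are placeholder domain names:

```python
from urllib.parse import urlparse

# Placeholder blocklist of state-media domains -- illustrative only,
# not any vendor's real list.
BLOCKED_DOMAINS = {"rt.com", "sputnikglobe.com"}

def filter_retrieved(docs):
    """Split retrieved passages into those safe to pass to the model
    and those flagged because their source domain is blocklisted."""
    kept, flagged = [], []
    for doc in docs:
        domain = urlparse(doc["url"]).netloc.removeprefix("www.")
        (flagged if domain in BLOCKED_DOMAINS else kept).append(doc)
    return kept, flagged

docs = [
    {"url": "https://www.rt.com/news/123", "text": "..."},
    {"url": "https://apnews.com/article/456", "text": "..."},
]
kept, flagged = filter_retrieved(docs)
print(len(kept), len(flagged))
```

The weakness the Pravda network exploits is visible even here: a domain blocklist only catches outlets you already know about, which is exactly why laundering content through 150 fresh sites works.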

Ukraine knows better. For three years, it has fought this war in real time, under fire, with sirens in the background. And it is winning – not by waiting for global standards, but by building agile, ruthless, AI-powered defences that the West should study, replicate, and scale.

Start with the Center for Countering Disinformation (CCD), a 24/7 fusion cell under Ukraine’s National Security and Defense Council. By mid-2025, it had catalogued 191 AI-driven Russian operations, from deepfake Zelenskyy videos to synthetic audio of generals surrendering. Using machine learning, the CCD doesn’t just react – it predicts. It scans 1,000+ sources in multiple languages, spots synchronized bot surges, and issues “prebunks” before lies go viral.
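A crude version of that bot-surge detection can be sketched in a few lines: flag any message posted by many distinct accounts within a narrow time window. The feed, thresholds, and account names below are invented for illustration and say nothing about the CCD’s actual methods:

```python
from collections import defaultdict

# Toy feed of (timestamp_in_minutes, account, text). A coordinated surge
# looks like many distinct accounts posting identical text almost at once.
posts = [
    (0, "acct1", "NATO provoked the war"),
    (1, "acct2", "NATO provoked the war"),
    (2, "acct3", "NATO provoked the war"),
    (3, "acct4", "NATO provoked the war"),
    (5, "user9", "Air defence active in Kyiv tonight"),
]

def detect_surges(posts, window=10, min_accounts=3):
    """Flag any text posted by >= min_accounts distinct accounts within
    `window` minutes -- a simple coordination signal."""
    by_text = defaultdict(list)
    for ts, acct, text in posts:
        by_text[text].append((ts, acct))
    surges = []
    for text, hits in by_text.items():
        hits.sort()
        accounts = {acct for ts, acct in hits}
        span = hits[-1][0] - hits[0][0]
        if len(accounts) >= min_accounts and span <= window:
            surges.append(text)
    return surges

print(detect_surges(posts))
```

Production systems match near-duplicates and paraphrases rather than exact strings, but the signal is the same: organic conversation is staggered and varied, while coordinated amplification is synchronized and uniform.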

Then there are the startups born in bomb shelters. Osavul, launched days into the invasion, uses natural language processing to map disinformation networks across TikTok, Telegram, and X, alerting journalists in seconds. It unmasked the largest AI-narrated propaganda operation yet identified – TikTok videos in seven languages accusing Ukrainian leaders of corruption. Mantis Analytics geolocates combatants from scraped imagery, debunking Russian “fact dilution” on war crimes. Even controversial tools like Clearview AI are deployed – not for surveillance, but to unmask propagandists behind deepfakes, like the 2023 fake Poroshenko Zoom call meant to spark unrest.

This is not theoretical. It is battle-tested. Ukraine’s May 2025 hackathon, “AI for Ukraine Recovery,” produced tools now used by frontline media to verify strikes and counter cyber alerts. The Kyiv Independent’s OSINT desk uses AI to build counter-narratives that reach millions. And Ukraine’s Law on Countering Disinformation – updated in 2024 – balances security with free speech, offering a model the West still debates in committees.

The contrast is stark. While Brussels drafts 400-page regulations and Silicon Valley A/B tests filters, Ukraine operates with wartime urgency. It doesn’t wait for consensus – it iterates, deploys, learns. When Russia adapts, Ukraine’s edge shrinks from months to days. But it never stops moving.

The West’s mistake is treating this as a tech problem – a bug to patch with audits and blockchain. Ukraine sees it as war. And in war, you don’t win by being fair. You win by being faster, smarter, and more ruthless with truth.

Here’s what the West must borrow:

  • Build fusion cells, not working groups. The CCD model – government, startups, media, OSINT – should be replicated in every NATO capital.
  • Weaponize prebunking. Ukraine inoculates audiences before lies spread. The U.S. and EU still chase virality after the fact.
  • Embrace controversial tools. Clearview AI has flaws, but in Kyiv, it unmasks liars. Perfect is the enemy of survival.
  • Regulate with teeth, not paper. Ukraine’s 2024 law forces platforms to act. The DSA’s 6% fines are a start – enforce them.
  • Fund the frontline. Ukraine’s startups run on grit and grants. The West has billions for AI safety – redirect 1% to Kyiv’s proven labs.

Russia’s playbook – scalable, cheap, undetectable – will be copied. The next target won’t be Ukraine. It will be your election, your pandemic response, your climate debate. AI will amplify the loudest liar unless we learn from the country that has stared into the abyss and built a mirror.

Ukraine is not asking for pity. It is offering a masterclass. The West should stop regulating and lecturing and start listening – and acting.

The text in this blog post has been retrieved from the EUalive article available at https://eualive.net/the-west-must-learn-from-ukraines-ai-war-for-truth/

ABOUT GEORGI GOTEV

Georgi Gotev is a journalist specialised in EU affairs, active in Bulgaria in the 1990s and in Brussels from 2007 to the present. In 2025, he founded EUalive.

AI4Debunk is funded under the Horizon Europe programme (Grant Agreement No. 101135757) and brings together multidisciplinary partners working to equip European citizens with the knowledge, frameworks, and critical tools needed to navigate today’s complex information landscape.
