2025 has seen AI-generated deepfakes target elections across Ireland, Poland, Romania, the Czech Republic, and beyond, from fabricated candidate withdrawals to fake investment schemes. Deepfake-driven fraud caused more than $200 million in losses in Q1 2025 alone. Yet democracy is adapting: Ireland's Catherine Connolly won despite a viral deepfake claiming she had quit, Poland deployed a €2 billion "cyber umbrella", and the FCC issued a $6 million fine over the Biden robocall. The threat is real, but so is the resistance.
AI-generated content now comprises 52% of all online content as of May 2025, up from minority status in late 2024. This explosion has transformed election interference from crude propaganda to sophisticated synthetic media that can fabricate candidate statements, fake news broadcasts, and viral investment scams. This report examines the most significant election deepfakes of 2024-2025, the "liar's dividend" phenomenon that lets politicians dismiss real evidence as fake, and the emerging regulatory and technological countermeasures. The picture that emerges is neither apocalyptic nor reassuring: it is a new battleground where democracy's survival depends on rapid adaptation.
Ireland: The Deepfake That Failed
On October 22, 2025—less than four days before Ireland's presidential election—a fabricated RTÉ News bulletin appeared on Facebook. The AI-generated video showed deepfake versions of newsreader Sharon Ní Bheoláin and political correspondent Paul Cunningham announcing that candidate Catherine Connolly had withdrawn from the race. [1]
The fake Connolly declared: "It is with great regret that I announce the withdrawal of my candidacy and the ending of my campaign." The fabricated Cunningham added that Friday's election was "now cancelled" and rival Heather Humphreys would "become the winner automatically." [2]
The video circulated for 12 hours, accumulating nearly 30,000 views and 200 shares before Meta removed it after being contacted by journalists. Meta said the account, "RTÉ News AI", violated its community standards against impersonation. [13]
Connolly responded forcefully: "The video is a fabrication. It is a disgraceful attempt to mislead voters and undermine our democracy." She filed a complaint with Ireland's Electoral Commission. [1]
The outcome: Connolly won the election on October 24, 2025, with 38% support—nearly double her nearest rival. The deepfake failed to change the result, but exposed gaps in EU-wide regulation. [10]
Poland: The €2 Billion Defense
Poland's May 2025 presidential election became a test case for proactive defense against AI disinformation. Shortly after the first round, AI-generated images appeared in 4 of 23 viral videos alleging voter fraud—a coordinated attempt to delegitimize results. [9]
The tightly contested race between pro-EU Warsaw Mayor Rafał Trzaskowski (31.36%) and nationalist historian Karol Nawrocki (29.54%) drew intense foreign attention. Days before voting, Russian hackers attacked websites of Poland's ruling Civic Platform party. Belarus's state radio conducted influence operations promoting pro-Russian candidates. [3]
Poland's response was unprecedented: the NASK national security agency deployed a "cyber umbrella" initiative as part of a €2 billion cybersecurity investment. The platform enabled citizens to report disinformation in real-time and access verified election information. [3]
Researchers identified misleading narratives claiming "Polish soldiers already fighting in Ukraine" and depicting the EU as a "broken institution." The goal: create information chaos to demotivate participation. [3]
Romania: The Annulled Election
Romania offers the starkest example of deepfake-enabled election manipulation. The country's November 2024 presidential election was annulled after declassified intelligence revealed that a "state actor" had coordinated a TikTok campaign for first-round winner Călin Georgescu, a little-known outsider with extreme views. [11]
A UK-based analysis found more than 32,000 inauthentic videos—many AI-generated or duplicated—promoting pro-Russian candidates in what it deemed a coordinated influence campaign targeting Romania's expatriate voters. [12]
In the rescheduled May 2025 elections, scammers exploited political interest by distributing deepfakes of candidates George Simion and Nicușor Dan promoting a fake "Neptun Deep" investment scheme—promising RON 9,000 monthly income with just a RON 1,400 "activation" payment. [4]
Romania currently has no dedicated law against such cyber scams. A draft "Deep Fake Law" has yet to come to a vote. [4]
The Biden Robocall: Setting Precedent
The first major deepfake targeting U.S. national politics came in January 2024, when thousands of New Hampshire voters received AI-generated robocalls in President Biden's voice urging them to skip the Democratic primary. [5]
The fabricated message implied that voting in the primary would prevent participation in the general election—a voter suppression tactic using falsified caller ID information. [5]
The FCC moved aggressively. Political consultant Steve Kramer was hit with a $6 million fine and faces 26 criminal counts of voter intimidation. Telecom company Lingo, which transmitted the calls, agreed to a $1 million settlement. [6]
FCC Chairwoman Jessica Rosenworcel's message was clear: "If you flood our phones with this junk, we will find you and you will pay." The FCC subsequently ruled that AI-generated voices in robocalls are illegal. [6]
The Liar's Dividend
Perhaps more insidious than deepfakes themselves is the "liar's dividend"—the phenomenon where increased awareness of synthetic media enables politicians to falsely claim authentic content is AI-generated. [7]
The Brennan Center documents multiple instances: a Spanish foreign minister dismissed police violence images as "fake photos"; an American mayor called verified audio recordings of himself making derogatory comments "phony, engineered tapes"; an Indian politician claimed embarrassing audio was AI-generated when researchers confirmed authenticity. [7]
As public awareness of deepfakes grows, the paradox deepens: false claims of artificiality become more persuasive, not less. A 2023 YouGov survey found 85% of respondents are "very concerned" or "somewhat concerned" about misleading deepfake content—creating fertile ground for denialism. [7]
Beyond Elections: The Broader Threat
Election deepfakes are part of a larger crisis. Deepfake-driven fraud caused more than $200 million in financial losses in Q1 2025 alone. [9]
By May 2025, AI-generated content had overtaken human-made content online, reaching 52% of all content—up from minority status in November 2024. In 2024, more than 80% of countries with elections experienced observable instances of AI usage relevant to their electoral processes. [9]
The most common use wasn't sophisticated deception but "polluting of the information ecosystem"—AI memes openly shared by politicians and supporters, creating confusion rather than specific false beliefs. As one researcher noted: creators preferred memes over deepfakes to avoid defamation lawsuits. [8]
What's Working
Despite the onslaught, democracy is adapting. The Brennan Center recommends a multi-pronged approach: [7]
Content Provenance Technology: Tamper-proof signatures in metadata can authenticate when and where content originated—a digital chain of custody.
Public Education: Teaching voters about the liar's dividend concept and strengthening ability to discern truth from falsehood.
Establishing Norms: Creating social disapproval of false artificiality claims through political parties and public figures.
Rapid Response: The EU's Digital Services Act obligates platforms to protect election integrity. On November 5, 2025, the European Commission announced a new voluntary code for marking AI-generated content in machine-readable formats. [14]
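To make the provenance idea above concrete, here is a minimal sketch of binding a media file and its capture metadata to a cryptographic signature so that later edits become detectable. This is an illustration only, not how production systems work: real provenance standards such as C2PA use asymmetric keys and X.509 certificate chains embedded in the file, whereas this sketch uses a shared HMAC key (`SIGNING_KEY` is a hypothetical placeholder) purely to show the verify-the-chain-of-custody principle.

```python
# Sketch of the content-provenance idea: bind a media payload and its
# capture metadata to a signature so any later edit is detectable.
# Real systems (e.g. C2PA) embed signed manifests with certificate
# chains; a shared HMAC key here is a simplification for illustration.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use asymmetric keys


def sign_content(payload: bytes, metadata: dict) -> dict:
    """Produce a provenance manifest binding payload hash + metadata."""
    digest = hashlib.sha256(payload).hexdigest()
    claim = json.dumps({"sha256": digest, "meta": metadata}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_content(payload: bytes, manifest: dict) -> bool:
    """Check that the payload still matches its signed manifest."""
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was tampered with
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(payload).hexdigest()


video = b"original broadcast bytes"
manifest = sign_content(video, {"source": "RTE", "captured": "2025-10-22"})
print(verify_content(video, manifest))               # True: untouched
print(verify_content(b"deepfaked bytes", manifest))  # False: altered
```

The point of the digital chain of custody is the last two lines: an unmodified file verifies against its manifest, while any altered payload, such as a deepfaked substitute, fails the check.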
Election deepfakes are neither a hypothetical threat nor an unstoppable force. In Ireland, a candidate won despite a viral deepfake claiming she quit. In the U.S., the first major political deepfake resulted in a $6 million fine and criminal charges. In Poland, proactive defense blunted foreign interference. The evidence suggests that rapid response, legal consequences, and public awareness can contain the damage—but only if democracies invest in the infrastructure to fight back. The alternative isn't just misinformation; it's the collapse of shared reality itself.