Irish Regulators Sound Alarm Over Sophisticated AI Deepfake Targeting Presidential Candidate Catherine Connolly
Dublin, Ireland – October 27, 2025 – Irish authorities and digital media experts issued urgent warnings on Sunday following the circulation of a highly convincing AI-generated deepfake video falsely depicting Galway West TD Catherine Connolly announcing her withdrawal from the ongoing presidential election. The fabricated clip, which mimicked an official RTÉ News broadcast, emerged during the final televised debate among candidates, amplifying concerns about the escalating risks posed by artificial intelligence to democratic processes.
The video, lasting approximately 45 seconds, featured a simulated RTÉ News studio setup complete with the broadcaster's logo, on-screen graphics, and a news anchor introducing the segment. In the footage, a digitally altered version of Connolly appeared to deliver a statement citing "personal reasons" for stepping down from the race, urging supporters to back an alternative candidate. The deepfake incorporated seamless lip-syncing, realistic facial expressions, and even subtle background elements from genuine RTÉ broadcasts, making it indistinguishable from authentic content to the untrained eye.
Aidan O’Brien, a researcher at the European Digital Media Observatory (EDMO) based at Dublin City University, analyzed the video and labeled it as “an exceptionally good piece of manipulated media.” In an interview with RTÉ Radio 1's Morning Ireland program, O’Brien explained that the deepfake blended archived footage of Connolly with AI-generated elements created using advanced tools like generative adversarial networks (GANs). “It was of far superior quality than any other deepfake we saw during this election,” he stated. O’Brien highlighted a particularly telling detail: the accurate pronunciation of Irish-language names and place references, such as "Gaillimh Thiar" for Galway West, which suggested the creators possessed localized knowledge or access to region-specific audio datasets. This level of nuance, he argued, elevated the video beyond generic AI outputs and indicated potential involvement of sophisticated actors familiar with Irish politics.
Barry Scannell, a partner at the law firm William Fry specializing in AI law and policy, and a member of Ireland’s Artificial Intelligence Advisory Council, echoed these sentiments. Scannell admitted that the video initially fooled him. “I actually went on the RTÉ website to verify it,” he told The Irish Times in a phone interview. Upon closer inspection using forensic tools, he identified micro-artifacts, such as inconsistent lighting reflections in the eyes and slight audio desynchronization during complex phonemes. Nevertheless, Scannell described the incident as “a wake-up call” regarding the rapid evolution of AI technologies. “Tools like these are now accessible to anyone with a mid-range computer and open-source software,” he warned, referencing open-source video models such as Stable Video Diffusion and voice-cloning services such as ElevenLabs. He emphasized that while the deepfake targeted a relatively low-profile candidate in the presidential race (where Connolly was polling in the mid-single digits), it demonstrated how such manipulations could sway public opinion in tighter contests or disrupt voter turnout.
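The article does not name the forensic tools Scannell used, but one of the cues he cites, audio desynchronization, lends itself to a simple automated check. The sketch below is a hypothetical illustration rather than his actual workflow: it estimates the lag between two aligned traces (say, an audio-energy envelope and a mouth-opening signal extracted from video frames) by scanning for the shift that maximizes their correlation.

```python
import numpy as np

def estimate_delay(reference, delayed, max_delay=50):
    """Estimate how many samples `delayed` lags behind `reference`
    by scanning for the shift that maximizes their dot product."""
    best_d, best_score = 0, float("-inf")
    for d in range(-max_delay, max_delay + 1):
        if d >= 0:
            a, b = reference[: len(reference) - d], delayed[d:]
        else:
            a, b = reference[-d:], delayed[: len(delayed) + d]
        n = min(len(a), len(b))
        score = float(np.dot(a[:n], b[:n]))
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic demo: a stand-in "lip movement" trace, and an "audio" trace
# that lags it by 7 samples.
rng = np.random.default_rng(0)
reference = rng.standard_normal(2000)
delayed = np.roll(reference, 7)  # simulate the audio track running late
print(estimate_delay(reference, delayed))  # → 7
```

On a genuine recording the estimated delay should sit near zero; a consistent non-zero offset, or one that drifts over the clip, is one signal that the audio and video tracks were produced separately.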
The timing of the video's release coincided with the RTÉ Prime Time presidential debate on Saturday evening, where Connolly participated alongside frontrunner Peter Casey and several independent candidates. Social media analytics from EDMO showed that the clip first appeared on a fringe Telegram channel before being shared widely on Facebook, Instagram, and X (formerly Twitter). Within hours, it had garnered over 50,000 views on Meta-owned platforms alone, with hashtags like #ConnollyWithdraws trending briefly in Irish online circles. Misinformation watchdogs noted that accompanying text falsely claimed the announcement was due to an emerging scandal involving campaign financing, further fueling speculation.
Ireland’s Electoral Commission responded with unprecedented speed. Chairperson Art O’Leary confirmed in a statement that the body had been monitoring digital threats throughout the campaign and immediately flagged the video to platform providers. “Deepfakes represent a direct assault on electoral integrity,” O’Leary said during a press briefing at the Commission's Dublin headquarters. “We escalated this to Meta, Google, and X under the EU's Digital Services Act (DSA) obligations for very large online platforms.” Meta, which operates Facebook and Instagram, complied promptly, removing the video within three hours for breaching its policy against impersonation and deceptive manipulated media. A Meta spokesperson stated: “We use a combination of AI detection models and human reviewers to identify and action such content. In this case, it violated our rules on synthetic media designed to mislead.”
However, the video persisted on X, owned by Elon Musk. X's safety team applied a “manipulated media” label to posts containing the clip, accompanied by a Community Note explaining its falsified nature. An X spokesperson defended the approach, saying: “We prioritize free expression while providing context through labels, allowing users to decide for themselves.” Critics, including O’Brien, argued that labels alone are insufficient. “The rapid response was surprisingly effective in containment on some platforms,” O’Brien noted, “but once such videos spread virally, you’ll never really be able to pull them back out. The damage to trust is done.”
This incident is not isolated but part of a broader trend. Ireland's presidential election, set for November 8, 2025, has seen increased AI interference compared to previous cycles. EDMO's election monitoring dashboard reported over 20 deepfake incidents since campaigning began in September, ranging from audio clips altering candidates' speeches to image manipulations. Globally, similar events have plagued elections: in the 2024 US presidential race, deepfakes of candidates spread on TikTok, while India’s 2024 Lok Sabha polls featured AI-generated videos in regional languages. The Irish case stands out for its broadcast mimicry, exploiting trust in public-service broadcasters like RTÉ.
Experts urged immediate regulatory action. Scannell called for amendments to Ireland’s Online Safety Code, currently under review by Coimisiún na Meán, to mandate watermarking for AI-generated content and real-time detection APIs for platforms. “The EU AI Act, which enters full force next year, classifies deepfakes as high-risk, but we need national enforcement teeth,” he said. O’Brien advocated for public education campaigns, proposing media literacy modules in schools to teach forensic verification techniques, such as checking file metadata or using the InVID verification plugin.
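Metadata checks of the kind O’Brien recommends can be partly automated. The sketch below is a minimal, hypothetical illustration (real verification workflows would rely on established tools such as ExifTool, ffprobe, or the InVID plugin rather than hand-rolled parsing, and the helper names here are invented): it reads the creation timestamp that MP4 files store in the `mvhd` box inside the `moov` container, a field that AI-generated clips often leave missing, zeroed to the 1904 epoch, or set to an implausible date.

```python
import struct
from datetime import datetime, timedelta, timezone

# QuickTime/MP4 timestamps count seconds from 1904-01-01 UTC.
MP4_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)

def iter_boxes(data, offset=0, end=None):
    """Yield (type, payload_start, box_end) for each box in data[offset:end]."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:  # malformed or extended-size box; stop rather than loop
            break
        yield box_type, offset + 8, offset + size
        offset += size

def mp4_creation_time(data):
    """Return the creation timestamp from the moov/mvhd box, or None."""
    for box, start, stop in iter_boxes(data):
        if box == b"moov":
            for inner, istart, istop in iter_boxes(data, start, stop):
                if inner == b"mvhd":
                    # skip version (1 byte) + flags (3 bytes); version-0 boxes
                    # store creation_time as a 32-bit big-endian integer
                    secs = struct.unpack_from(">I", data, istart + 4)[0]
                    return MP4_EPOCH + timedelta(seconds=secs)
    return None

# Synthetic demo: a bare moov/mvhd pair carrying a known timestamp.
when = datetime(2025, 10, 25, 12, tzinfo=timezone.utc)
secs = int((when - MP4_EPOCH).total_seconds())
mvhd_box = struct.pack(">I4s", 16, b"mvhd") + b"\x00\x00\x00\x00" + struct.pack(">I", secs)
sample = struct.pack(">I4s", 8 + len(mvhd_box), b"moov") + mvhd_box
print(mp4_creation_time(sample))  # → 2025-10-25 12:00:00+00:00
```

A missing or epoch-default timestamp is not proof of fakery on its own (re-encoding strips metadata too), but it is exactly the kind of quick, teachable check such media literacy modules could cover.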
Catherine Connolly's campaign dismissed the video outright. In a statement posted to her official X account, she said: “This is a malicious attempt to undermine democracy. I remain fully committed to the race and urge voters to rely on verified sources.” Her team reported the incident to An Garda Síochána's cybercrime unit, which is investigating potential violations under the Harassment, Harmful Communications and Related Offences Act 2020.
As Ireland heads into the final weeks of campaigning, this deepfake saga underscores the fragile intersection of technology and politics. With voter turnout historically sensitive to late-breaking news, regulators fear copycat attacks could erode confidence in the electoral system. “Democracy thrives on truth,” O’Leary concluded. “AI must not be allowed to fracture it.” Platforms, policymakers, and citizens now face the challenge of adapting to an era where seeing is no longer believing.
