Sophisticated Deepfakes Are Here To Stay
Catherine Connolly's deepfake is just the first, and they will be impossible to police.
Just days before voters go to the polls, a disturbing campaign of disinformation has targeted the Irish presidential election. This wasn’t just the usual political mudslinging or online ranting; it was a sophisticated, AI-generated assault designed to impersonate one of the nation’s most trusted institutions: RTÉ News. Or at least it used to be one of the nation’s most trusted institutions, but that’s for another day.
The deepfake videos, which circulated on Facebook and YouTube, were chillingly convincing. They featured deepfake versions of respected presenters like Sharon Ní Bheoláin and Claire Byrne, using the familiar cadence and setting of a real news bulletin to spread blatant falsehoods. One claimed the left-wing candidate, Catherine Connolly, had withdrawn. Another AI-generated video had a fake Claire Byrne announcing that 56% of the presidential ballots were spoiled. Not just a lie, but a profoundly cynical one; it doesn’t invent a winner, it invents a collective, nihilistic refusal to participate. The most grotesque is the deepfake of RTÉ’s Vivienne Traynor introducing a report in which a fabricated feminist says Connolly “belongs in a nursing home.” It’s not just political sabotage; it’s a kind of psychic violence, a gleeful desecration of the entire human theatre: ambition, dignity, age itself, reduced to a glitch in a server farm. The intent was clear: to create chaos, erode trust in the electoral process, and deceive voters by weaponising the credibility of RTÉ.
Meta and YouTube have performed their exorcisms. The videos are gone. But this is no victory. You can’t kill what was never alive. These unalive digital things don’t have a belief, a cause, or a reason. They are pure, self-replicating nonsense, a cognitive parasite. The videos may have been scrubbed publicly, but I was sent the video seven times privately before I’d had my first coffee and smoke, or even opened a laptop. This incident is not an anomaly; it is a blueprint for the future of electoral interference. The accessibility of artificial intelligence has weaponised the tools of deception at mass scale, making it possible for bad actors to produce convincing forgeries cheaply and quickly. The question is no longer if this will happen again, but when, and whether we can defend against it. They aren’t trying to convince you of a different truth; they are trying to convince you that the truth is no longer an option.
As a Western liberal society, we are fundamentally unprepared. Our current strategy is reactive, not proactive. Platforms scramble to remove content after it has already been seen and shared. This “whack-a-mole” approach is a losing battle. The damage is done the moment a deepfake is forwarded into a family WhatsApp group or shared on a community forum. The lie is already halfway around the world, while content moderators are dragging their feet because the video is generating a lot of clicks, and clicks mean cash. They have a financial incentive not to remove the video or at least take their time removing it.
So, what can be done? Probably fuck all.
You’ll be hearing a lot over the next few days from the usual talking heads who operate in the space that intersects with technology and politics. They will all tell you variations of the same solutions.
Social media platforms must invest far more in pre-emptive AI detection. (Which failed spectacularly last night; it was only when election monitors discovered the video and flagged it with Meta that the videos were removed. Google only removed them this morning.)
Social media companies have a civic responsibility to society. (Social media companies that behave with civic responsibility? Good luck with that; they’re a law unto themselves, and all your shouting into the void won’t change that. In fact, the second you mention social media companies negatively, you’ll find your metrics going through the floor because you’ve criticised them.)
Transparency about the origin of content is also crucial. (Ya, it’s crucial, so crucial everyone ignores it. Social media’s business model is essentially based on copying someone else’s content and passing it off as your own, and hoping it goes viral or at least gets some laughs on your incel WhatsApp group.)
We need a concerted public education effort on how to spot AI-generated content and deepfakes. (Mainstream media is trusted less and less because of perceived biases towards candidates or issues. How can you run a successful media campaign about trust in AI, or in the media, if half the population doesn’t trust you anyway?)
My personal favourite: our regulators and lawmakers must catch up with technology! This will be a long one…
The problem with the law and regulators, when they try to lay their hands on the source of the deepfake, is that the law and regulators believe in borders. It’s a quaint, touching notion, the idea that a sovereign power ends at some imaginary line scratched into the dirt or a coastline. But the deepfake is a citizen of nowhere and everywhere at once. It flickers into being in a server farm that might be in Russia, Iran, or a repurposed shipping container in West Cork, authored by a ghost in one jurisdiction, aimed at an audience in another, a chain of culpability that stretches across the map and then, snickering, blinks and is gone.
Irish authorities will find themselves trying to serve an Interpol warrant on a bot farm, to extradite a pseudonym, to prosecute an act of malice that is, legally speaking, a perfect nowhere. It’s like trying to pin a crime on a rumour, or arrest a reflection. With a decent VPN, deepfakes can be created in one country, hosted in another, and consumed globally, conditions that make territorial law enforcement almost meaningless. A malicious political deepfake uploaded onto a foreign server will circulate across social media long before regulators identify its geolocation, and by that stage, for the price of a coffee and a sandwich, you’re probably on the nearest Ryanair flight out of the country with the device that produced the deepfake video. This dispersal of responsibility leads to what scholars term the “jurisdictional black hole”: no single state can assert full authority over the creation, hosting, and distribution processes involved in digital manipulation.
And so the world, in its infinite wisdom, has decided to build its digital Babel not with one set of plans, but with a thousand. In Brussels, a new cathedral of regulation rises: the Digital Services Act. It’s a straw house that pretends it is all solemn arches and heavy buttresses, designed to enclose the digital square and force the platforms to kneel in penance at its altar. It speaks in the language of accountability, of transparency, a hopeful, earnest murmur against the obvious oncoming storm. The platforms never act. They won’t, even as modern liberal democracies crumble via the weaponisation of social media. Under U.S. law, they enjoy broad free-speech protections under the First Amendment. It is the sacred right to shout into the void, even if your shout is someone else’s face, screaming. This divergence allows bad actors to exploit regulatory gaps: hosting deepfakes in lenient jurisdictions or routing data through offshore intermediaries.
The cruellest joke of all is that even those who profess to be allies, who share data and handshakes and military secrets, cannot agree on what a monster looks like. What is, in one capital, a harmful piece of electoral sabotage, is, in another, a protected piece of rough political theatre. It’s the oldest game: when the guards at one gate are vigilant, you simply go to the gate where the guard is asleep, drunk, or doesn’t believe in the concept of gates.
So lawmakers and regulators won’t be catching up with technology anytime soon. And even if they did, they’d be like the dog that eventually caught the car it was chasing.
And before you pearl-clutchers start calling for my beheading: the picture above of the Sinn Féin leadership is obviously a deepfake image.
False Gods
There is a certain kind of person who believes in equality, provided they get to tell you what equality is. They arrive at protests and events in chauffeured cars and leave them owning the barricades. They tell us that they are progressives, the allies of the oppressed, the friends of the dispossessed, then hand the eviction orders to the bank, their signatures gleaming like a sneer in biro. The left in our time has turned into a kind of aesthetic, a taste: the colour red worn like an expensive perfume; revolution as a lifestyle choice for people who don’t know what modern hunger is like. They speak in the old European tongues of Marx and Engels, the liturgies of solidarity and justice, but their mouths are full of ash and coins. Watch them as they move through the ruined, the dispossessed, the fragile: they pose for photographs with their lessers, their faces arranged in masks of profound concern, while in their pockets their fingers are crossed, clutching the cold keys to the eviction notice and their hefty salaries. This is the true face of the politics of feeling: not a revolution, but a pathetic pantomime where the ghosts of dead idealists are summoned only to be sold for the scraps of political connivance. A theology of betrayal so refined it passes for piety.