
Political deepfakes — navigating the new era of disinformation and online harassment

Author: Adam Murphy



The rapid evolution of deepfake technology — capable of creating content that’s almost indistinguishable from ‘real’ footage — is posing significant challenges for citizens and democracies around the world.

Since deepfakes can be deployed maliciously to subtly alter speech, they can be used to fabricate celebrity scandal, falsely implicate someone in a criminal context or misleadingly alter political messaging.

This topic received renewed attention following last week’s Paris AI summit. To help publicise the summit, French President Emmanuel Macron shared a montage of viral deepfakes that featured him inserted into music videos, TV shows and influencer-created content, stating that deepfakes “can be a form of harassment”.

Here, our expert Adam Murphy explores what can be done to legally combat those using deepfake technology and discusses a groundbreaking case in which we acted for Cheryl Bennett in securing the UK’s first settlement for a dispute concerning a political deepfake.

 

What is a deepfake?

Deepfakes are fake videos, images and audio recordings that have been digitally altered to create realistic but ultimately fictitious content. 

They’re created using technology that has the ability to use pre-existing content to generate fake media. Usually, this ‘new’ material will show false instances of people doing or saying things that they’ve never done or said and/or being in situations that have never actually happened. 

The vast majority of detected deepfakes have featured individuals in the entertainment, fashion or sports sectors. Increasingly, major industry players such as Disney are actively developing their own variants capable of de-aging or ‘resurrecting’ actors.

 

Shifting legislative landscape

We’re all familiar with outlandish or comic AI-generated videos online that begin relatively normally before descending into ridiculous situations. The viewer quickly recognises that movie-quality technology is in play and that the content is really for entertainment purposes. However, the technology has advanced so rapidly that it’s now possible to create altered content that’s almost indistinguishable from reality.

Recent developments in deepfake technology are making deepfakes increasingly ubiquitous, bringing the role of social media companies in disseminating such videos into ever-sharper focus.

As deepfake technology becomes more sophisticated, the legislative landscape is shifting. For example, last year’s legislative reforms contained in the Online Safety Act criminalised the sharing of ‘deepfake’ intimate images for the first time. Similarly, the Government has recently announced plans to introduce further changes targeting AI-generated child abuse images.

Governments and legal institutions around the world are being challenged to consider updating law to encompass the creation, distribution and use of deepfakes. 

 

Deepfakes in a political context

Deepfakes are also being used to target politicians, with videos of political leaders saying and doing things that contradict their known views. One such video was created of the Ukrainian president in the early stages of the conflict with Russia, attempting to depict him as surrendering to Putin.

Other reported examples in the political context include cases in Slovakia, Bangladesh and Argentina. In Slovakia’s case, faked audio recordings were released online days before election voting that appeared to show the leader of the Progressive Slovakia party discussing how to rig the election.

In the UK, miscellaneous deepfakes have purported to show Keir Starmer swearing at staff, Sadiq Khan cancelling Armistice Day due to a pro-Palestinian march and BBC News supposedly examining Rishi Sunak’s finances.

While these episodes were flagged as involving deepfakes at an early stage, will this always be the case? And is every viewer capable of recognising them as being the work of AI?

 

Cheryl Bennett — teacher caught up in a deepfake web

We recently acted for Cheryl Bennett in securing the UK’s first settlement for a dispute concerning a political deepfake.

In May 2024, Ms Bennett — a 27-year-old teacher from Wednesbury, West Midlands — was the victim of a manipulated video that falsely portrayed her as making a racial slur.

Ms Bennett had agreed to help out a colleague, Mr Mughal, who was standing as a local councillor for the Labour Party in local elections. The pair were going door to door, handing out leaflets, when Ms Bennett was recorded on a house CCTV camera.

Ms Bennett was filmed approaching a homeowner, who told her that he had already voted. She asked whether he had voted for the Labour Party candidate and he told her that he had voted for an independent candidate, a man called Akhmed Yakoob. However, at some stage, the ‘original’ footage was manipulated. Audio of Mr Mughal was removed and replaced with audio in which Ms Bennett is heard verbally abusing the homeowner by supposedly using a racial slur as she walks away from the house. The video used her voice but not her actual spoken words. Subtitles were also added to the video to mislead viewers further.

This doctored video was subsequently shared on social media by Mr Yakoob — the local politician, lawyer and TikTok influencer — via his TikTok account. The video quickly went viral and inevitably led to a torrent of abuse and threats being directed toward Ms Bennett. The school where Ms Bennett worked received 800 complaints, which considerably impacted her both personally and professionally. The identity of the individual who created the deepfake was never established.

We were able to secure a legal settlement in which Mr Yakoob agreed to pay substantial damages and costs to Ms Bennett and provide undertakings for sharing the doctored video. The doctored video falsely portrayed Ms Bennett as someone who is racist, causing significant damage to her personal and professional reputation. 

 

Deepfakes — key concerns and risks

Deepfakes can be used to create powerful and dangerously misleading (or simply false) narratives for political purposes. In a nutshell, they risk having a negative impact on the democratic process and the electoral prospects of politicians and parties.

In a political context, deepfakes may undermine voters’ ability to identify, understand or access the truth or relevant facts. This could prove very harmful where bad actors want to undermine trust in certain politicians or political parties. The public discourse can be distorted and false information disseminated to a wide audience. For the increasing number of people who get their news solely through social media rather than traditional forms of media such as newspapers or established news agencies and sources, the potential for damage is inevitably increased. 

Photographs, sound recordings and films that were previously trusted to be used as foundational ‘evidence’ of facts now need to be verified and checked to ensure that they’re authentic and haven’t been manipulated or digitally created.

 

Election risks

Accordingly, the (potentially overlapping) election risks include:

 

Viral spread

False information can quickly reach a wide audience online. Social media platforms facilitate the viral spread of content and encourage high user engagement. In today’s digital age, the courts look at the data around online dissemination — such as impressions and engagement — in considering the overall impact of relevant media.

Of course, in some cases (like the Cheryl Bennett case) the identification of the original creator of a deepfake remains challenging. The anonymity provided by the internet and social media (such as anonymous and ‘bot’ accounts, as well as the ability for users to screenshot and screen-record content) can make it more difficult for digital forensics to trace the source of faked content.  


Talk to us

New technology has the potential to impact a whole host of situations and scenarios, ranging from copyright infringement of musicians’ works through to reputational issues for celebrities (and, of course, political misinformation and scandal). Cheryl Bennett’s case is unlikely to be the last legal settlement in the UK touching on these themes.

If you need assistance with false information, defamation or reputation management, our award-winning litigation lawyers are here to advise you. 

We have extensive experience in supporting businesspeople, athletes, professionals and all manner of high-profile individuals and organisations.

Talk to us by giving us a call on 0333 004 4488, sending us an email at hello@brabners.com or completing our contact form below.

Adam Murphy

Adam is a Solicitor in our litigation team.
