TWO so-called deepfake videos have emerged online appearing to show Boris Johnson and Jeremy Corbyn endorsing each other to be prime minister.
But what are deepfake clips and are they legal?
What are deepfake videos?
Deepfake videos are made using artificial intelligence technology which can manipulate someone's face in a video to make it look like they are saying something they never said.
It's one level up from dubbing or lip-syncing, and can appear very convincing.
In the fake video of Boris Johnson, he is seen saying: "My friends, I wish to rise above [divisions over Brexit] and endorse my worthy opponent, the Right Honourable Jeremy Corbyn."
In the Labour leader's clip, he appears to say: "I'm urging all Labour members and supporters to… back Boris Johnson to continue as our prime minister."
The clips were produced by London-based think tank and research lab Future Advocacy.
Other examples include a clip of Facebook CEO Mark Zuckerberg, based on footage from 2017, which was doctored to make him appear to boast about having stolen users' data.
Zuckerberg's voice, replaced by an actor's, says: "Imagine this for one second.
"One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures."
The videos are meant to show how technology can be used to manipulate data.
What have social media companies said about the videos?
When the Zuckerberg deepfake emerged in 2019, Instagram resisted calls to take the clip down.
Speaking at the time, head of Instagram, Adam Mosseri, 36, said removing the doctored clips would have been "inappropriate".
"I don't feel good about it," he told CBS.
He added that there's no hurry to delete the videos, in part because "the damage is done".
Mosseri said Instagram hasn't taken down the videos because the company hasn't yet come up with an official policy on AI-altered video.
"We don't have a policy against deepfakes currently," he told CBS.
"We are trying to evaluate if we wanted to do that and if so, how you would define deepfakes.
"If a million people see a video like that in the first 24 hours or the first 48 hours, the damage is done. So that conversation, though very important, currently, is moot."
Are the videos legal?
Currently the videos are not illegal, although the victim of a pornographic face-swap video or photo may be able to bring a claim for defamation or copyright infringement.
But as it stands, deepfake videos of celebrities making controversial statements they never said remain legal… for now.
There's rising concern among experts that convincing deepfakes could be used to spread misinformation on social media.
Canada's cybersecurity agency, the Communications Security Establishment, recently warned that deepfakes pose a threat to modern democracy.
"Improvements in artificial intelligence (AI) are likely to enable interference activity to become increasingly powerful, precise and cost-effective," it wrote in a report on cyber threats.
"Evolving technology underpinned by AI, such as deepfakes, will almost certainly allow threat actors to become more agile and effective."
Deepfakes – what are they, and how do they work?
Here's what you need to know…
- Deepfakes use artificial intelligence and machine learning to produce face-swapped videos with barely any effort
- They can be used to create realistic videos that make celebrities appear as though they're saying something they didn't
- Deepfakes have also been used by sickos to make fake porn videos that feature the faces of celebrities or ex-lovers
- To create the videos, users first track down an XXX clip featuring a porn star that looks like an actress
- They then feed an app with hundreds – and sometimes thousands – of photos of the victim's face
- A machine learning algorithm swaps out the faces frame-by-frame until it spits out a realistic, but fake, video
- To help other users create these videos, pervs upload "facesets", which are huge computer folders filled with a celebrity's face that can be easily fed through the "deepfakes" app
- Simon Miles, of intellectual property specialists Edwin Coe, told The Sun that the fake sex tapes could be considered an "unlawful intrusion" into the privacy of a celeb
- He also added that celebrities could request that the content be taken down, but warned: "The difficulty is that the damage has already been done."
Last month, Facebook refused to remove a deepfake video of US politician Nancy Pelosi even after it was viewed millions of times.
Footage of the House speaker was doctored to slow down her speech and make her appear drunk.
One of many versions on Facebook was viewed over 1.4 million times and shared 30,000 times. Facebook declined to remove it.
Branding the clip a "cheap fake", the House Intelligence Committee chair reportedly said Congress would investigate deepfakes ahead of the 2020 election.