What is a deepfake? Do you still remember Barack Obama’s words: “President Trump is a total and complete dipshit!”? Quite provocative; hardly the kind of statement one expects from the former US president. But did he really say that? Of course he didn’t. The video is a so-called deepfake, created by Jordan Peele to show how dangerous such a fake can be. But let’s delve a little deeper into the matter.
What exactly is a deepfake, and what purpose does it serve?
Deepfake is a neologism made up of “deep learning” and “fake”. It describes a method of manipulating images, videos or audio formats (with the help of artificial intelligence) in such a way that the human eye or ear can hardly detect the forgery. But what exactly is the purpose of a deepfake, and how is it generated in the first place?
To create a deepfake, so-called neural networks are used. These networks operate in a way loosely inspired by the human brain and, given a large enough dataset, can learn to predict what other data of the same type might look like. If you feed these networks enough images, videos and audio content, they keep improving and produce ever higher-quality manipulations.
One highly effective type of neural network is the GAN, first described in a 2014 scientific paper by Ian Goodfellow and colleagues. Over the years, researchers continued to expand these networks and combine them with each other. As a result, the forgeries became ever more convincing. But first, let’s define what a GAN is.
A GAN – short for Generative Adversarial Network – consists of two competing algorithms. One algorithm forges an image (the forger) while the other tries to detect the forgery (the investigator). Every time the investigator spots a forgery, the forger learns from the failure and improves. This adversarial training process is an application of deep learning.
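The forger/investigator loop can be sketched in a few lines of code. The toy below is a minimal illustration, not a real deepfake system: the “real data” is just numbers drawn from a normal distribution around 4, the forger is a two-parameter linear model, and the investigator is a simple logistic regression. All names, learning rates and step counts are illustrative choices, but the alternating training steps mirror how an actual GAN is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real data": samples around 4 stand in for genuine images.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Forger (generator): a linear map of random noise, g(z) = a*z + b.
a, b = 1.0, 0.0
# Investigator (discriminator): logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch = 0.02, 64
for step in range(4000):
    # --- investigator step: learn to score real data high, fakes low ---
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    # gradients of the loss -log D(x) - log(1 - D(g)) w.r.t. w and c
    grad_w = np.mean(-(1 - d_real) * x + d_fake * g)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- forger step: adjust a and b to fool the current investigator ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    # gradient of the non-saturating loss -log D(g)
    grad_g = -(1 - d_fake) * w
    a -= lr * np.mean(grad_g * z)
    b -= lr * np.mean(grad_g)

# After training, the forger's output drifts toward the real distribution.
fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(f"mean of fakes: {fakes.mean():.2f} (real mean: 4.0)")
```

A real deepfake GAN replaces the two linear models with deep convolutional networks operating on images, but the adversarial loop – investigator step, then forger step, repeated thousands of times – is the same.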
What types of deepfakes are there?
The first and probably most widespread type is the exchange of faces in pictures or videos, so-called face-swapping. Here, the heads of famous people are usually taken and placed in a different context.
A similar method is voice swapping. As the name suggests, voices or other audio content are manipulated to sound like a specific person. Combined with manipulation of facial expressions, the spoken words can even be made to match the lip and facial movements.
Finally, there is body puppetry, in which body movements are analysed and can even be imitated in real time.
Why are deepfakes so dangerous?
Since its beginnings in 2014, the technology has been steadily expanded and improved. By 2017, it had reached the point where the first convincing videos could be produced. Internet users exploited this to manipulate pornographic content, which was first shared on the platform Reddit. These videos depicted celebrities in compromising poses. According to a study by Sensity (then known as Deeptrace), 96% of all deepfake videos in 2019 were pornographic, and they exclusively targeted women.
“The development of full artificial intelligence could spell the end of the human race.”
– Stephen Hawking
With time and the continuous refinement of the deep learning process, more and more YouTube channels dedicated to such deceptions appeared. Fakes of politicians, actors and other public figures began to see the light of day. From 2018 to 2020, the number of fake videos doubled every six months, reaching more than 85,000 in December 2020.
Hao Li, a deepfake expert, has warned that we will soon no longer be able to identify deepfakes as fakes. The problem, however, is not the technology itself but the lack of reliable means of recognising these fakes. “Deepfakes will be perfect in two to three years,” Li said.
This statement is borne out by a programming competition initiated by Facebook AI in 2019. The group developed a dataset of 124,000 videos, created with 8 face-modification algorithms, along with associated research papers. Yet even the best competitors achieved a detection rate of only just over 65%.
“This outcome reinforces the importance of learning to generalize to unforeseen examples when addressing the challenges of deepfake detection,” a Facebook AI spokesperson explained.
Example of the misuse of deepfakes
The extent of the damage that deepfakes can cause can be illustrated, among other examples, by a case in Gabon in 2018. President Ali Bongo, who had not been in the public eye for a very long time and was thought by some to be dead, published a video of a speech. Political opponents dubbed the video a deepfake, triggering an attempted coup by the military.
TW: Violence against children/youth
Another frightening case is told by X González, a strong advocate for tougher gun laws in the US. González is a survivor of the Parkland school massacre and gained international recognition for her emotional speech at a memorial service following the event. Opponents of further gun legislation defamed González with a manipulated video depicting her tearing up the American Constitution. In the original video, she tears up a shooting target.
A video produced about the US Democrat Nancy Pelosi demonstrates what voice swapping can do. Trump supporters, and thus opponents of the Speaker of the House of Representatives, edited a video to make her appear drunk and somewhat confused. The fake was viewed millions of times, despite the fact that Nancy Pelosi does not drink alcohol.
TW: Sexualised violence
The next scandal concerned Rana Ayyub. The Indian journalist had commented on the nationalist BJP party, accusing it of defending child abusers. In response, and in an attempt to undermine her credibility, her critics produced a fake pornographic video of her.
What apps are available to create deepfakes?
DeepFaceLab: Probably the best-known open-source application is DeepFaceLab. According to the app’s developers, 95% of all deepfake videos are generated with DeepFaceLab. The app makes it possible to swap faces or entire heads, change a person’s age or adjust a person’s lip movements. DeepFaceLab is available for Windows and Linux.
Zao: Unlike DeepFaceLab, Zao is an app for smartphones. Originating from China, the extremely popular application creates deepfake videos in seconds and is geared towards entertainment purposes. So far, however, the app is only available in China (or for those with a Chinese phone number) on Android and iOS. Recently, the app has been criticised for its questionable privacy policy. Users relinquish all rights to their own images and videos when they use the app.
FaceApp: The application gained widespread popularity in 2019. It offers numerous functions such as rejuvenation or ageing, adding beards, make-up, tattoos and hairstyles, or even changing one’s gender. However, just like Zao, FaceApp has attracted much criticism for its privacy policies: here too, users cede the rights to their own images and videos. The app is available for Android and iOS.
Avatarify: Finally, we have Avatarify. With this application, users can create live deepfakes in video chats. The technology recreates facial movements such as eye blinks and mouth movements in real time and thus achieves extremely realistic imitations. The setup, however, is demanding: you need a powerful graphics card and additional tools to complete the installation. Avatarify is available for Windows, Mac and Linux, or in a slimmed-down form for iOS.
How do I recognise a deepfake?
Unmasking a deepfake is not always an easy task. First, check the context of the video or image and ask whether it makes sense. The FBI has also published a list highlighting characteristics of deepfakes. This list includes, but is not limited to:
- Visual indicators such as distortions, deformations or inconsistencies
- Distinct eye spacing or placement of the eyes
- Noticeable head and body movements
- Synchronisation problems between facial and lip movements and the associated sound
- Distinct visual distortions, usually in the pupils and earlobes
- Indistinct or blurred backgrounds
- Visual artifacts in the image or video
But are deepfakes only negative?
Deepfakes are not exclusively negative. One positive application can be seen in the film world. For example, Luke Skywalker was recreated with deepfake technology in the series The Mandalorian. Disney is also planning more deepfake films using its high-resolution “megapixel” deepfake technology. In the future, it will be possible to make films with actors who have already died.
Progress is also being made in the area of e-training. The software company Synthesia has developed an AI that generates videos from text. The videos contain artificially created people who can reproduce the desired content. In Synthesia’s case, this technology is used to create e-learning courses, presentations, personalised videos or chatbots.
Another example of the innovative use of deepfake technology is demonstrated by a research team from Moscow. They have managed to breathe life into the Mona Lisa. You can marvel at the moving oil painting on YouTube.
Safeguarding against deepfakes: the importance of cyber security awareness training for staff
In conclusion, as deepfake technology continues to spread, it is imperative for companies to take proactive measures to protect themselves against AI-driven cyber threats. Comprehensive cyber security awareness training equips staff with the knowledge and skills needed to identify and mitigate the risks associated with deepfakes. By investing in such training, companies can strengthen their defences and safeguard their operations against the evolving threats posed by malicious AI manipulation. It is not just about protecting the organisation’s data and reputation; it is also about empowering employees to become vigilant guardians of digital integrity in an era when trust and authenticity are increasingly under siege.