Deepfakes: the next generation of counterfeits

Making up facts and bending the truth have been around for as long as the internet itself. Whether it's manipulated images, dubious news stories, or made-up information, the world wide web is full of it, making it increasingly difficult to distinguish between reality and fiction. Now counterfeits have reached a new level: deepfakes.

Deepfakes first appeared on Reddit in December 2017. One user had managed to put the faces of several celebrities into porn movies so convincingly that the results looked shockingly real. For a short while, creating fake celebrity porn became the thing to do, but videos like these were soon banned on Reddit as well as on other platforms such as Twitter and Discord. That didn't stop them from spreading elsewhere, though. So what makes deepfakes so special?

Deepfakes – what are they?

Usually, fakes like these require a lot of work and expertise. They aren't necessarily always created for shady reasons; it's not uncommon for Hollywood movies to exchange faces, for example. Experts in editing and CGI are normally hired for these tasks. Deepfakes, however, are created by the computer itself, without any adjustments being made by hand.

The word "deepfakes" is a portmanteau of "deep learning" and "fake." Deep learning is a form of machine learning, and deepfakes rely on deep-learning algorithms to exchange faces or objects. For this to work, the algorithms are fed a very large amount of image or video data. The more material you have of a person, the better the result should be.

Tip

Videos are particularly useful material. They give you quick access to thousands of individual frames of the person looking in different directions. Videos also show faces in more natural positions than regular photos, which often only show a smiling face from the front.
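
As a purely illustrative sketch, individual frames could be pulled out of a video with the OpenCV library in Python, for example. The file name, output folder, and sampling rate below are placeholder assumptions, not part of any particular deepfakes tool.

```python
# Illustrative only: pull individual frames out of a video with OpenCV.
# The file name "interview.mp4", the output folder, and the sampling rate
# are placeholder assumptions.
import os
import cv2

os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture("interview.mp4")

frame_index = 0
while True:
    success, frame = capture.read()  # success turns False when the video ends
    if not success:
        break
    if frame_index % 5 == 0:  # keep every 5th frame to avoid near-duplicates
        cv2.imwrite(f"frames/frame_{frame_index:06d}.jpg", frame)
    frame_index += 1

capture.release()
```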

Around 300 images of the chosen person's face (ideally from all possible perspectives) should be enough to get a decent result. The deepfakes code contains a neural network, a so-called autoencoder: the network is trained to compress data and then decompress it again. During decompression, the autoencoder tries to achieve a result that is as close as possible to the original. To do this, the network learns to distinguish between important and unimportant data during the compression process.
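
To make the idea more concrete, here is a heavily simplified autoencoder sketch using Keras (one of the libraries the original deepfakes code builds on, as mentioned later in this article). The image size, layer widths, and training call are arbitrary assumptions and far removed from a real face-swap model.

```python
# A heavily simplified autoencoder sketch (illustrative assumption, not the
# original deepfakes code). It compresses a face image down to a small
# "bottleneck" vector and learns to reconstruct the original as closely as
# possible. Image size and layer widths are arbitrary choices.
from tensorflow.keras import layers, models

IMG_SHAPE = (64, 64, 3)  # assumed 64x64 RGB face crops
LATENT_DIM = 128         # size of the compressed representation

# Encoder: compresses the image into the bottleneck vector
encoder = models.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(LATENT_DIM, activation="relu"),
])

# Decoder: tries to rebuild the original image from the bottleneck vector
decoder = models.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(64 * 64 * 3, activation="sigmoid"),
    layers.Reshape(IMG_SHAPE),
])

autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# face_images: a NumPy array of shape (num_images, 64, 64, 3), scaled to [0, 1]
# autoencoder.fit(face_images, face_images, epochs=50, batch_size=32)
```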

If the algorithm is fed images of dogs, the artificial neural network learns to focus only on the dog and to ignore everything in the background (the noise). The autoencoder can then create its own dog from the data. This is also how face swaps work with deepfakes: the neural network learns what a person's face looks like and can then generate it on its own, even if, for example, the face and mouth are moving at the same time.

To swap faces effectively, two faces need to be recognized: the one that appears in the original material and the one you want to exchange it for. So, one shared encoder and two separate decoders are used. The encoder analyzes all the material, while each decoder generates a different output: face A or face B.
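
Building on the hypothetical sketch above, the shared-encoder/two-decoder setup could look roughly like this; the helper build_decoder() and the variable names are illustrative assumptions, not the original deepfakes code.

```python
# Continuing the sketch above: one shared encoder, two separate decoders
# (hypothetical names decoder_a and decoder_b). build_decoder() simply
# creates a decoder like the one defined earlier.
from tensorflow.keras import layers, models

def build_decoder():
    return models.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(512, activation="relu"),
        layers.Dense(64 * 64 * 3, activation="sigmoid"),
        layers.Reshape(IMG_SHAPE),
    ])

decoder_a = build_decoder()  # learns to reconstruct face A
decoder_b = build_decoder()  # learns to reconstruct face B

# Both training models reuse (and therefore share) the same encoder weights
autoencoder_a = models.Sequential([encoder, decoder_a])
autoencoder_b = models.Sequential([encoder, decoder_b])
autoencoder_a.compile(optimizer="adam", loss="mse")
autoencoder_b.compile(optimizer="adam", loss="mse")

# Trained alternately on the two face sets (faces_a and faces_b assumed prepared):
# for _ in range(num_epochs):
#     autoencoder_a.fit(faces_a, faces_a, epochs=1, batch_size=32)
#     autoencoder_b.fit(faces_b, faces_b, epochs=1, batch_size=32)
```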

In the final step, the algorithm takes the encoding of face A but decodes it with the decoder for face B, so that face B ends up in the video even though it doesn't belong there at all. This is what distinguishes deepfakes from other fakes, where a face is cut out of an image, retouched or adjusted, and then inserted into another image. With deepfakes, no image material is copied into another image: completely new material is generated. This is the only way to match the facial expressions of the original face.
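
In terms of the sketch above, the swap itself would then amount to something like this (again purely illustrative):

```python
# The swap itself (continuing the sketch above): encode a frame showing
# face A, then decode it with decoder B. The output is a newly generated
# image of face B wearing face A's expression and pose.
# frame_a is assumed to be a preprocessed array of shape (1, 64, 64, 3).
encoding = encoder.predict(frame_a)
swapped_frame = decoder_b.predict(encoding)
```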

This also explains why some errors occur with deepfakes: the neural networks reach their limits when it comes to unusual facial movements. If there isn't enough material from the relevant perspective, the frame will appear blurry. The algorithm tries to generate an image from the little source material it has, but the result lacks detail.

The history of deepfakes: from Reddit into the world

Deepfakes originated on Reddit. The website is known for hosting all manner of curious topics in its forums, which are known as subreddits. A Redditor with the name "deepfakes" created a subreddit in December 2017 and published pornographic videos featuring celebrities. To do this, the anonymous user had built the aforementioned algorithm, which is based on existing technologies such as the open-source libraries Keras and Google's TensorFlow.

Within a very short time, the subreddit had over 15,000 followers. Reddit quickly shut the forum down and, like other companies (including the pornographic video platform Pornhub), banned the distribution of fake porn. But that wasn't enough to stop deepfakes, since the code used is open source and available to everyone. On GitHub, you can find several repositories where developers work on the algorithms. This is how a deepfakes app called FakeApp was created.

The program enables even those with little computer knowledge to perform face swaps. To create deepfakes with the app, you only need a powerful graphics card from Nvidia, since the program uses the graphics processor (GPU) for its calculations. Deepfakes can also be created with a computer's CPU instead, but this usually takes much longer.
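
As a small aside, this is roughly how you could check in TensorFlow (the library underlying the original deepfakes code) whether a usable GPU is present; the snippet is independent of FakeApp and only meant as a general illustration.

```python
# Check whether TensorFlow can see a usable GPU; if not, training falls
# back to the (much slower) CPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"{len(gpus)} GPU(s) available:", gpus)
else:
    print("No GPU found - training will run on the CPU and take much longer.")
```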

In the meantime, the online community has found further uses (other than pornography) for face swaps based on machine learning. As is typical of the internet, the technology is used to a large extent to create funny nonsense. It is often used to put actors in films in which they have never appeared: in a short clip from the film adaptation of "The Lord of the Rings," for example, users replaced every actor with Nicolas Cage, and Sharon Stone was replaced by Steve Buscemi in her notorious scene from "Basic Instinct."

Consequences for society

Jokes like those mentioned above are harmless, but the new possibilities of video manipulation pose several challenges to society. First, there is the question of legality: the women appearing in these porn videos never gave their consent. Apart from being highly questionable from a moral point of view, such videos can also wreck a person's reputation.

Fact

Deepfakes are currently created mainly using celebrity faces. One reason for this is that it's relatively easy to find a large number of images of celebrities on the internet. However, non-celebrities are also posting more and more photos of themselves online, which puts them at risk of becoming victims of deepfakes too.

Apart from causing distress to individuals, deepfakes could also have wider social consequences. In recent years, the problems caused by so-called fake news have already become apparent, and it is becoming increasingly difficult to distinguish between real facts and false claims. Until now, video evidence was considered a reliable indication of whether a statement was true, but since deepfakes came on the scene, this is no longer the case. Deceptively real manipulations can now be created with relatively little effort, and not only for entertainment.

Counterfeiting is and always has been an important propaganda tool, and deepfakes could easily be used to influence politics. While a video of Trump's face superimposed on German chancellor Angela Merkel's is obviously fake, other politicians could be placed in situations they were never actually in. Since machine learning can even recreate a person's voice in a relatively credible way, it's unsettling to think what deepfakes will be able to do in the future. Fakes like these could influence election campaigns and international relations.

For society, this means that media, and internet media in particular, can no longer simply be trusted. Many people are already skeptical of the news, while others believe every social media post they see, even when it lacks any factual basis. In the future, we may no longer be able to believe even what we have seen with our own eyes.

But not all deepfake developments are destructive or foolish: deep learning could revolutionize the creation of visual effects. At the moment, placing actors' faces onto other people's bodies takes a great deal of work. For the Star Wars film "Rogue One," the young Princess Leia was created with visual effects, since the actress Carrie Fisher was already 60 years old when the movie was released. An internet user later achieved a similar result with the help of deepfakes; according to him, it took half an hour on a normal PC. Deepfakes have the potential to make visual effects in entertainment media faster and cheaper.

There has even been speculation that the simplicity of these new fakes could give viewers more choice. If you watch a movie in the future, for example, you might be able to select which star plays the main character with a quick click before the movie begins. Something similar could happen in the advertising industry: soon, celebrities may no longer need to fly to photoshoots to wear the latest designer clothes or pose for the newest perfume; they could simply sell a license that allows others to use their face.

Summary

Machine learning offers many opportunities for our society's future. Google, for example, is already working with artificial neural networks and deep learning to categorize images and develop self-driving cars. Deepfakes, however, also highlight one of the possible downsides of the technology, since these developments can be used destructively as well. It is up to society to find solutions to problems like these and to take advantage of the useful opportunities offered by machine learning and deepfakes.
