You have probably seen or heard about deepfake videos, as their popularity and quality keep increasing. Deepfakes are a technology based on Artificial Intelligence (AI) that creates fake images or videos that look completely real. The technique superimposes one person's face onto another's to simulate a situation that never happened, which can be useful and fun, but also dangerous. Here we explain what deepfakes are, how they are made and what you should know so you are not deceived.
The term 'deepfake' combines 'deep learning', a branch of AI, and 'fake', as the cybersecurity company Avast explains. It is a technique for manipulating images, video and audio with Artificial Intelligence software so that the altered material looks authentic.
So-called deep video portraits, or face swaps, are the most common form of deepfake, but there are other types. One is the deep face, which consists of creating entirely fictitious photos and using them to generate animations or videos. Another is the deep voice, which fakes a person's voice in order to impersonate them in an audio recording. The two techniques are often used together to produce highly realistic and convincing, but completely artificial, audiovisual material.
This technology took its first steps in the late 1990s. The big breakthrough came in 2014, when computer scientist Ian Goodfellow, author of the book Deep Learning, developed generative adversarial networks (GANs), the technology behind the creation of deepfakes.
Its popularity exploded starting in 2017, when a Reddit user inserted the faces of actresses such as Emma Watson and Natalie Portman into adult movie scenes. Fake erotic material featuring Gal Gadot, Taylor Swift, Scarlett Johansson and other international celebrities was also circulated.
How do you create a deepfake?
Deepfakes can be generated directly by specialized software, with virtually no human intervention required. The technology relies on machine learning and deep learning algorithms to extract part of one video (such as a person's face) and insert and adapt it into another, faking the person's gestures with the greatest possible realism.
Although it sounds technically complicated, making a video of this type can be quite simple with the two main technologies developed for it.
Deepfake programs
These are software tools specialized in artificial intelligence and audiovisual editing. First, an AI encoder algorithm runs through thousands of shots of the two faces to be swapped, analyzes their similarities, reduces them to shared features and compresses each image.
Next, a decoding algorithm recovers the face information from the compressed images. A separate decoder is needed for each face, and to swap the faces, each image is fed into the opposite decoder. That decoder reconstructs one person's face and expressions and places them on the other. The result is a video that looks real.
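The shared-encoder, dual-decoder idea described above can be sketched in a few lines. This is a hypothetical toy using plain linear maps on feature vectors; a real deepfake tool trains deep convolutional networks on thousands of aligned face crops, but the swap mechanism, encoding face A and decoding with B's decoder, is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real systems work on full face images, not 64-D vectors.
FACE_DIM, LATENT_DIM = 64, 8

class FaceSwapAutoencoder:
    """One shared encoder, one decoder per identity (A and B).

    Hypothetical sketch: weights are random here; in practice each piece
    is a trained neural network.
    """
    def __init__(self):
        self.encoder = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1
        self.decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1
        self.decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1

    def encode(self, face):
        # Compress the face to the shared features both identities use
        return self.encoder @ face

    def swap_a_to_b(self, face_a):
        # Feed A's compressed features into B's decoder: the output is
        # B's face wearing A's expression.
        return self.decoder_b @ self.encode(face_a)

model = FaceSwapAutoencoder()
face_a = rng.normal(size=FACE_DIM)
fake_frame = model.swap_a_to_b(face_a)
print(fake_frame.shape)  # a full face vector: (64,)
```

Because the encoder is shared, the latent features describe pose and expression rather than identity, which is what makes feeding them into the "wrong" decoder produce a convincing swap.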
Generative Adversarial Networks (GANs)
Another way to make deepfakes is with so-called Generative Adversarial Networks (GANs). These work through artificial intelligence neural networks that process hundreds or thousands of images of a face or object, learn the patterns they find in them, and then reproduce those patterns to create new images of the subject.
GANs pit two AI algorithms against each other. The first, known as the 'generator', creates the fake image; the second, called the 'discriminator', compares it against a sequence of real images. The process is repeated many times so that both algorithms improve, until they can produce a completely realistic video file.
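The generator-versus-discriminator loop can be shown with a deliberately tiny, hypothetical example. Here "real" data is reduced to numbers drawn around a mean of 5, the generator is a single learnable offset, and the discriminator is logistic regression on a scalar; a real deepfake GAN uses deep networks on images, but the alternating training steps are the same shape:

```python
import numpy as np

rng = np.random.default_rng(42)

REAL_MEAN = 5.0   # the "real data" distribution the generator must imitate
w, b = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
g = 0.0           # generator parameter: fake sample = g + noise

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr_d, lr_g = 0.1, 0.05
for step in range(3000):
    real = REAL_MEAN + rng.normal()
    fake = g + rng.normal()

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * ((1 - d_real) * real - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)

    # Generator step: nudge g so the discriminator scores fakes as real
    d_fake = sigmoid(w * fake + b)
    g += lr_g * (1 - d_fake) * w

print(f"generator offset after training: {g:.2f} (real mean: {REAL_MEAN})")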
In both cases, creating a deepfake requires an immense number of photos and/or videos of the faces to be swapped. That is why most deepfakes feature celebrities, politicians and famous businesspeople, but they can be made of anyone, as long as enough images can be obtained, for example, from their social networks.
Until a couple of years ago, these techniques were available only to a few, as they require very powerful equipment. Making a good deepfake was also very expensive, so only big producers could bear the cost. However, there are now apps and online tools that can make 'fake' videos of acceptable quality. One of the most popular and controversial is FaceApp, but there are others, such as FacePlay and Reface, which you can download for Android or iOS.
Positive uses of deepfakes
It is important to consider the intention behind a deepfake, since its high level of plausibility and credibility has both beneficial and malicious potential. "Synthetically generated faces are not only photorealistic, but are almost indistinguishable from the real thing and are considered more trustworthy," says a study published in the journal Proceedings of the National Academy of Sciences in February 2022.
By itself, this technology is harmless, and used properly it can have a positive impact in different fields. It is widely used in film, television and marketing to 'bring back to life' celebrities who have passed away and include them in current productions.
For example, the 2016 movie Rogue One: A Star Wars Story 'revived' the actor Peter Cushing. It also replaced Carrie Fisher's face with that of her younger self as Princess Leia.
In March 2022, we could see 'El Chavo del 8' again, talking with the actor Eugenio Derbez in a commercial. The face and voice of the character, created by the late Roberto Gómez Bolaños, were reconstructed by deepfake so the two could interact as if they were face to face.
In education, it can be used to enliven galleries and museums, or bring historical figures to life to enrich the learning experience. For example, the Dalí Museum in Florida, United States, digitally 'brought to life' the legendary artist Salvador Dalí to greet visitors and take selfies with them.
In medicine and health, it helps restore the voices of people who have lost their speech to illness. It is also used to create virtual scans based on data from real patients in order to detect possible tumors or cancer, and even to test simulated drugs on recreated organs affected by hypothetical diseases.
Of course, it also has negative uses
Like any great technological advance (or superpower), many have found unethical ways to take advantage of deepfakes. Since anyone can create a fake file of another person, the technology can be used to damage someone's image or discredit them, and as a means of cyberbullying or school bullying.
According to an analysis in Crime Science Journal, deepfakes made with criminal intent are the AI-based crime with the greatest potential for harm or profit, and the most difficult to defeat.
Among the most common malicious uses are the creation and spread of fake news, impersonating identities to commit fraud or scams, and fabricating 'evidence' in legal proceedings. The technique has also been used to fake kidnappings and to breach security systems, circumventing biometric recognition by imitating a person's voice or image.
One of the first and most notorious deep voice frauds came to light in 2019, according to The Wall Street Journal. The CEO of a UK company transferred €220,000 to a supposed supplier after cybercriminals used software to mimic the voice of his German boss and get him to authorize the order.
Big technology companies have implemented measures to detect deepfakes, remove them, or prevent them from being distributed for harmful purposes. In January 2020, Facebook banned deepfakes, except those that are clearly parodies. A few months earlier it had created a $10 million fund to develop tools to detect fake images.
In September of the same year, Microsoft presented its 'Video Authenticator' tool to identify audiovisual forgeries. Sensity, the first visual threat intelligence company, also appeared, combining monitoring with algorithmic detection of deepfakes.
How to recognize a deepfake?
As this technique advances and is perfected, it becomes harder to tell what is authentic from what is not, since the results are increasingly realistic. Often only a digital-imaging specialist, or specialized software, can detect them. But there are elements that give them away:
- Subtle flaws and details. There are imperfections that go uncorrected, such as fuzzy edges, extremely smooth skin that looks artificial, an oddly fixed head position, and jerky or unnatural movements. It helps to play the video at low speed to spot inconsistencies in facial expressions, lighting or background, or sudden changes in the image.
- Face and neck. Deepfakes usually focus on the face, since replacing a full body is much more complicated. Check whether the face matches the person's body in complexion, skin color and distinguishing marks such as moles or tattoos.
- Blinking. In a fake video, the person blinks much less than a real one. Humans are estimated to blink every 2 to 8 seconds, with each blink lasting 1 to 4 tenths of a second. Algorithms still struggle to replicate this rhythm.
- The inside of the mouth. Deep learning algorithms still cannot exactly reproduce the teeth, tongue and oral cavity, leaving a certain blur in that area.
- Sound. Even when combined with deep voice, the audio often does not match the picture. You may notice poor synchronization between the lip movements and what the person is supposedly saying.
- They are short clips. The vast majority of deepfakes last only a few seconds, since the processes to create them are labor-intensive and expensive.
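The blink heuristic above translates naturally into a simple check. This is a hypothetical helper, not a real detection tool: it assumes you already have blink timestamps from some eye-landmark detector, and it merely flags clips whose blink gaps fall far outside the human 2-to-8-second range:

```python
def looks_suspicious(blink_times_s, clip_length_s, max_gap_s=10.0):
    """Flag a clip whose gaps between blinks exceed the human range.

    Hypothetical sketch: blink_times_s would come from an eye-landmark
    detector; max_gap_s is a loose threshold above the usual 2-8 s
    blink interval. A long gap is only a weak signal, not proof.
    """
    events = sorted(blink_times_s) + [clip_length_s]
    last = 0.0
    for t in events:
        if t - last > max_gap_s:   # too long without a blink
            return True
        last = t
    return False

print(looks_suspicious([2.5, 6.0, 11.2], 15.0))  # normal blinking -> False
print(looks_suspicious([], 20.0))                # 20 s, no blinks -> True
```

In practice a heuristic like this would be combined with the other signals in the list, since a single long gap can also happen in genuine footage.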
In addition, use common sense: is the source that shared the video reliable? In what context, medium or social network was it published? Think about it.

Mairem Del Río. Addicted to watching series and movies, doing (a little) exercise and changing my hair color. I am also a journalist with more than 16 years of experience, dedicated 100% to digital media since 2011. I have worked as a reporter, community manager and editor in various media and agencies. My areas of expertise are as diverse as they are contrasting: entertainment, travel, lifestyle, health, business and finance. Now I am focused on the entrepreneurial ecosystem, cryptocurrencies, NFTs, metaverses and the promising cannabis industry in Mexico.