
The world of audiovisual production is constantly changing. Digital Face Transplants have existed within it for years, while Deepfakes arrived only two years ago. Both processes produce the same result: the possibility of replacing one face with another in audiovisual material. So what has suddenly drawn the attention of the film and internet communities to these techniques? If Digital Face Transplants have existed in cinema since the 1990s, is the arrival of Deepfakes, an experimental technique with the same purpose, really something to worry about?

First, context...

Within the concept of editing, absolutely nothing is new. The need to turn what we wish to observe into reality has existed in human beings since we first could hold a brush, and it has only grown with technology. It began with painting, then photography, and it kept adapting to reach whatever medium was most powerful at the time.

With the existence of television and the Internet, it is audiovisual media that have the greatest prominence today. A message presented through video, for example, will reach a larger audience than one presented in writing or even spoken aloud (say, through a medium like radio), simply because video combines the senses we rely on most when consuming information.


For many years, it was argued that an image carried more truth than a thousand words; but, with the existence of tools like Photoshop, among other similar ones, static images simply lost their impact.

Video took its place because, if we could see someone move on a screen and also hear them speak, that was irrefutable proof of truth.

Now, could this really change thanks to audiovisual techniques such as Digital Face Transplants and Deepfakes? Why, and what is the difference between them?

What is a Digital Face Transplant?

The practice of Digital Face Transplantation has been in force in 21st-century cinema and consists of implanting any desired face onto the body of a person to whom it does not belong. It is an extensive and complicated process that usually requires more than one studio. It combines Motion Capture techniques and digital animation, seeking to replicate the textures, expressions and movements of the original face so that they blend with those of the person who appears on screen, creating the semi-realistic illusion that they are a single person.

This practice has been implemented in several well-known cinematographic works, such as The Parent Trap (1998), The Sopranos (1999), Gladiator (2000) and, more recently, Star Wars: Rogue One (2016).

Despite being quite common in the industry, these processes are accessible only to major film studios, since they require a large team of people as well as extremely specific technology and equipment. With that in mind, it had never been a prominent concern for the average person that anyone could replicate their face on video and possibly attribute to them actions in which they never took part and words they never spoke.

However, in 2017, content began to emerge on the social network Reddit showing the faces of recognized public figures implanted onto the bodies of actors in adult content, emulating quite convincingly the results that can be obtained through Digital Face Transplantation. That is, superimposing one person's face onto another person's body.

This type of video was named Deepfakes, after the username of the Reddit user who popularized them on that network.

Although the videos have since been deleted from the page due to their inappropriate content, a week later a new free application called “FakeApp” appeared on the web, recognized as the software responsible for this kind of video. Despite the fact that its creator took down the original page, to date the application remains relatively easy to obtain through third parties and is available to everyone.

How do you create a Deepfake?

The software responsible for Deepfakes works using artificial intelligence. It is a simple algorithm that assigns its users a single task: feeding it with information. In this case, the application must be provided with two key pieces of content, the base video and the video containing the face to be superimposed on the original content, which the program will process and split into frames.
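As a hedged sketch of that frame-splitting step (the article does not describe FakeApp's internals, and the file names here are hypothetical), a general-purpose tool like ffmpeg can dump both videos to numbered image sequences:

```shell
# Extract every frame of the base video and of the source-face video
# into numbered PNG files (paths and names are illustrative only).
mkdir -p frames_base frames_face
ffmpeg -i base_video.mp4 frames_base/frame_%05d.png
ffmpeg -i face_video.mp4 frames_face/frame_%05d.png
```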

It is important to note that, to ensure the quality of the product, an extensive base of photographs of the face in question is necessary: the more angles and expressions the algorithm is given, the more realistic its work will be. Similarly, the quality and length of the videos influence the time the application may take to process them. Usually, if the video is long and some clarity is sought in the content, the wait will be longer; however, it is possible to process videos of any length, sacrificing quality if the job must be done in less time.

After the material has been selected, the software starts a phase called “training”, during which it compares frames extracted from both videos and begins to merge them, based on facial structure and the similarities between the two faces. There is no fixed duration for this process; the creator of the Deepfake can watch it through a preview window and decide when it is appropriate to stop it.

Once the result is satisfactory enough, the software leaves the user with the merged frames created and checked during training, which must be exported as a sequence of images to any editing program and rendered as a video.

Although this is the most recommended option for editors who want greater control over the process and its details, “FakeApp” offers an alternative for newcomers: one of the folders included at download time, called “convert to MP4”, through which the software takes care of rendering the content without further help from the Deepfake's creator.

What differentiates a Deepfake from a Digital Face Transplant?

To differentiate a Digital Face Transplant from a Deepfake, we can appeal to three key elements:

Equipment: A quality Digital Face Transplant implies a combination of techniques and studies prior to production. These include studying and scanning the facial structure of an actor and of their body double, evaluating how both faces react to lighting from different angles. This is usually done in special spaces, where studio lights can provide the highest possible resolution. Subsequently, another scanning process must be performed on a machine known as the Medusa Rig, which focuses on facial expressions and provides animators with digital models to manipulate in post-production.

This equipment is extremely inaccessible to the average creator, since it can only be found in studios in Los Angeles.

When all these processes have been completed, the body double must act on set wearing the wardrobe, along with the equipment used for the Motion Capture process. It is then the animators' job to map the double's expressions onto the digital model of the main actor's scanned face, thus beginning the fusion process that leads to the final product.

Time: Between pre- and post-production, it is estimated that a work of this magnitude can take, at a minimum, six months.

Budget: In addition, none of the studios known to have had experience with this technique has, to date, given the media any information about the cost of all these processes. However, it is generally agreed to be one of the most costly investments that can be made in a production.

The creation of a Deepfake, on the other hand, requires only three elements: software such as “FakeApp” (many variations and similar tools have appeared since), a computer with a good graphics card and hardware capable of processing heavy data, and basic knowledge of computing and editing.

A good Deepfake can be achieved in anywhere from 72 hours to two weeks, depending on the content the creator has available.

Above all, the existence of Deepfakes represents an opportunity for both ends of the spectrum: it allows large producers to reduce their costs, and novice editors to experiment with a technique that used to be out of their reach, without the need for a lavish budget or long hours of work, while also minimizing the amount of equipment required during a production, whether large or small.

Future projections, and how we can refute the “cons” of Deepfakes

Currently, the “FakeApp” application, and all the new software of this kind that appeared soon after its predecessor's rise, is free and available to the general public. Since Deepfakes gained popularity in 2019 (even though the algorithm was released to the web two years earlier), the media have voiced certain concerns, fueled in part by this availability.

Again, we are talking about simplifying an existing technique (to reiterate, the act of impersonating someone within audiovisual material has been present since the 1990s) which, until now, had only been within reach of the great and powerful studios. The media's arguments hold that, if absolutely anyone can produce such content, there will be no way to regulate users according to their intentions, and it is therefore entirely possible that the technique will be used for the wrong purposes (the most disturbing to date being political defamation, for example), causing unnecessary scandals.

It is true that this type of technology is constantly being optimized and, even though current results are experimental (depending on the quality of the material provided, the post-production applied after rendering, and so on), it is likely that, as the software gains worldwide popularity, the algorithm will over time produce a final result ever closer to a convincing human, until the difference between an unedited video and a Deepfake becomes extremely difficult to detect.

This possibility has been used by the media to demonize the tool and associate it with the spread of misinformation (because, if we cannot be sure of what we see and hear, then what medium will we be able to believe?). Some might argue that concerns of that kind are valid. However, as mentioned earlier in this article, the concept of editing reality is hardly new (one of the first examples took place in the 1930s, during the political purges in Russia), and, especially in the modern digital age, humans tend to be relatively critical of the media we consume.

So it is highly likely that, just as we learned to accept the existence of manipulated photographs and to identify their characteristics, we will be able to do the same with any piece of audiovisual material produced in the future. Aversion to this tool, under the premise that it could distort how we perceive reality, speaks more of the doubt we place in our own ability to observe the media critically and impartially than of what we have already shown ourselves able to recognize and judge.

Deepfakes are not a tool of disinformation, but one that will allow us to expand the horizons of the worlds we bring to our screens, without being limited by our location or the resources we have.
