Deepfake: The Nearly Ancient History Behind Deception

From an epoch-making threat to a mundane crime-and-prank tool: deepfakes seem to have devolved over time. But they are still dangerous. And their history is longer than we thought.

Business secrets of the Pharaohs

People had been manipulating information long before the term deepfake was coined, and almost always on political grounds. It all started in deep antiquity with a practice called damnatio memoriae, Latin for “condemnation of memory”.

Pharaohs, Sumerian lugals, Hittite kings, Roman nobility: they all tried to erase the memory of people they disliked, to claim or destroy their accomplishments, and to edit the murals or manuscripts that recorded their deeds, successes, and failures.

For example, this fate befell the Eighteenth Dynasty heretic pharaoh Akhenaten, who led a rebellion against Egyptian polytheism by promoting the cult of Aten. Hieroglyphs bearing his and Aten’s names underwent ancient ‘photoshopping’, so future generations would never know that someone had tried to dismantle the religious monopoly of the wealthy priesthood factions.

Fast-forward to the 20th century. Bolshevik honchos rise to power and collapse overnight as tovarish Stalin grants and then withdraws his ‘highest trust’. That’s when Soviet photoshop came into existence: party members executed in the NKVD’s dungeons were expunged from official photos in which they had posed with Big Joseph and other important figures lucky enough to survive.

I never met Forrest Gump

Computers revolutionized this game. First, Photoshop’s debut in 1990 handed everyone a key to producing cheapfakes and archaic memes. While the Internet was young and fresh, you could sometimes find an absurd headline like Hillary Clinton Adopts Alien Baby illustrated with a crappy, yet somehow hypnotic, collage that played the role of legitimate evidence.

But photo editing has been around since 1846 (look up the Five Capuchin Monks case), so you could hardly surprise anyone with shoddy photo collages featuring aliens, bat boys, and Morlocks. But what if you could edit videos, making the people in the moving footage say or even do whatever you commanded them to?

Perhaps the same idea struck a research trio: Christoph Bregler, Michele Covell, and Malcolm Slaney. They came up with a technique to automatically label the phonemes uttered by a speaker in a video and reorder mouth movements to match new phoneme sequences.
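The core of that idea can be sketched in a few lines of toy Python: index existing mouth frames by the phoneme they show, then reassemble them for a new utterance. Everything here (the frame numbers, the ARPAbet-style labels) is invented for illustration; the real system also had to blend and warp the frames smoothly.

```python
# Toy sketch of the Video Rewrite idea: index existing mouth frames
# by the phoneme they show, then reassemble them for a new soundtrack.
# All frame data and labels are invented for illustration.

from collections import defaultdict

# Pretend each original video frame was auto-labeled with a phoneme.
labeled_frames = [("HH", 0), ("EH", 1), ("L", 2), ("OW", 3),
                  ("W", 4), ("ER", 5), ("L", 6), ("D", 7)]

# Build an index: phoneme -> frames in which the speaker utters it.
index = defaultdict(list)
for phoneme, frame_id in labeled_frames:
    index[phoneme].append(frame_id)

def resync(new_phonemes):
    """Pick a stored mouth frame for each phoneme of a new utterance."""
    return [index[p][0] for p in new_phonemes if p in index]

# "Resync" the footage to a new phoneme sequence.
print(resync(["W", "EH", "L", "D"]))  # -> [4, 1, 2, 7]
```

The trick is that the speaker never said the new sentence: every mouth shape is borrowed from footage of something else they did say.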

As an example of this dark wizardry, one of John F. Kennedy’s speeches was modified. The technology made his mouth synchronize with the odd phrase “I never met Forrest Gump”, probably a reference to a scene from the eponymous movie featuring JFK.

The peculiar tool, which looked like a bizarre toy, was named Video Rewrite. It was proudly presented in 1997, and its abstract called it the “first facial-animation system to automate all the labeling and assembly tasks required to resync existing footage to a new soundtrack”. Little did the authors know that this was the dawn of the Deepfake Era.

Welcome to the Internet Part II

The end of the 90s was also a time for the web to evolve. The world was on the cusp of the Web 2.0 era predicted in the article Fragmented Future. In essence, it meant the digital realm was about to become participatory.

People would make their own content and upload it online; videos of any sort would be a mouse click away, along with pictures, articles, blogs, and new kinds of forums: Reddit and 4chan, where troll culture was slowly brewing.

Simultaneously, a less noticed revolution was unfolding in laboratories and university lecture halls. Deep learning and neural network research was gaining momentum like never before.

Interest in artificial neural networks (ANNs) was re-energized in the 1980s, when backpropagation made training multilayer networks practical and metal–oxide–semiconductor (MOS) VLSI hardware made them affordable. In 2006, the groundbreaking paper A Fast Learning Algorithm for Deep Belief Nets was published, heralding the era of deep learning.

Many ANN architectures were proposed: Residual Networks, VGG, SqueezeNet, recurrent neural networks, and others, each designed for a different class of tasks.

For example, STN-OCR can be used to recognize text, while a convolutional neural network (CNN) is a true champion at recognizing human faces. If your phone has a feature akin to face unlock, you have a compact version of a CNN right in your gizmo.
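What makes a CNN tick is convolution: sliding a small filter over an image and computing dot products, so the network responds to local patterns like edges wherever they appear. A minimal hand-rolled sketch (the tiny “image” and the single edge filter are invented for illustration; a real face-recognition CNN stacks many learned filters):

```python
# Minimal sketch of the convolution at the heart of a CNN: slide a
# small filter (kernel) over an image, computing a dot product at
# each position. The image and filter below are invented toys.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the image patch at (i, j).
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny "image": dark left half, bright right half.
img = [[0, 0, 1, 1]] * 4
# A vertical-edge filter: fires where brightness jumps left-to-right.
edge = [[-1, 1],
        [-1, 1]]
print(convolve2d(img, edge))  # -> [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The output lights up exactly along the dark-to-bright boundary. A CNN learns thousands of such filters, and the ones in a face model end up responding to eyes, noses, and jawlines instead of plain edges.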

The moment of revelation, however, came in 2014. A computer scientist whose last name is, ironically, Goodfellow, together with his research team, designed the Generative Adversarial Network (GAN): a framework in which two neural networks are set to rival each other in a zero-sum game. It was the tipping point.
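The zero-sum dynamic can be caricatured with one-number “networks”: a generator that emits a single value and must imitate the real data, and a discriminator that is just a threshold trying to separate real from fake. Every value below is invented for illustration; real GANs train deep networks with gradient descent on both sides.

```python
# A caricature of the GAN tug-of-war with one-number "networks".
# All values are invented for illustration.

REAL = 5.0   # the "real data" the generator must imitate
gen = 0.0    # generator's current output (starts far off)
t = 0.0      # discriminator's decision threshold

for step in range(200):
    fake = gen
    # Discriminator move: put the threshold midway between the real
    # sample and the fake -- its best shot at telling them apart.
    t = (REAL + fake) / 2
    # Generator move: step toward the threshold to sneak past it,
    # i.e. make its output harder to distinguish from real data.
    gen += 0.1 * (t - fake)

# After the zero-sum game settles, the fake is nearly indistinguishable.
print(round(gen, 2))  # -> 5.0
```

Each side’s improvement makes the other’s job harder, and the equilibrium is a generator whose fakes the discriminator can no longer tell from reality: exactly the property that makes GAN output so convincing.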

Until then, deepfake-like technology had been confined to special effects studios, almost never crossing the boundary that separated the silver screen from the real world. But in the 2010s, when home computers became powerful enough to serve as multimedia workstations, the movie world’s exclusive monopoly over synthetic media was ruined.

Pandora’s box was opened. GANs could inpaint pictures, change the poses of photographed people, age faces, and pull off other tricks. Face swapping was one of them. And since some GAN architectures were publicly available on GitHub, it was only a matter of time until someone used them for impish purposes.

Only three years later, in November 2017, a Redditor going by the alias deepfakes, who never revealed their identity, posted the first deepfake videos. Of course, it had to be pornography.

The world has changed ever since. The DEEPFAKES Accountability Act was introduced, the phenomena of the liar’s dividend and reality apathy were observed, and tumultuous political events unfolded amid deepfake allegations in different corners of the world. And thus Jean Baudrillard’s prophecy of simulacra was fulfilled.

What’s next?

Even though deepfakes are annoying, and at times dangerous, they have failed to become the ultimate psychological weapon that some pessimistic intellectuals expected. Mostly because people aren’t that gullible after all, but also because we have antispoofing on our side.

Visit https://antispoofing.org/Anti-Spoofing_and_Liveness_Detection to learn more about how we can tackle deepfakes and various other biometric security threats.
