
The dawn of deepfakes

Eline Lincklaen Arriëns

Over the last few weeks, we have published articles on fake news and fake data, and on our civic responsibility to use data to combat the spread of misinformation. This week’s piece centres on how artificial intelligence (AI) and data are also, unfortunately, powering the rise of deepfakes. A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else’s likeness1. I find this terrifying. Let’s dive a bit deeper for more context.

We are in an era where technology can create “lifelike” images and sounds from scratch. This goes well beyond “photoshopping” a picture of, say, a fashion model to correct an imperfection. Technology today can already build near-original media without any clear connection to real images or sounds. Notable and ‘popular’ examples are businesses that sell fake people with images generated by AI. For instance, on the website Generated Photos2, users can buy a picture of a unique face from across the age spectrum (children to the elderly), from different ethnicities and with varying facial expressions. One fake image costs $2.99, and 1,000 images would be a dollar a pop ($1,000 total). Just need one image for a character in a video game and want it for free? You can get a photo at . Want an animated image, and to see and hear that generated person speak? Contact the company Rosebud3. The same technology may soon be available to any of us on a conventional laptop, as Adobe demonstrated four years ago4.

As this technology develops, people can make videos that literally make anyone say anything at any point in time5. Let me stress this point: technology has made it incredibly easy to put words in someone else’s mouth. Notable examples include: a video of Barack Obama calling Donald Trump “a total and complete [expletive]” (this video was created with the specific intent of raising awareness of fake news and of what is being shared on the Internet)6; Mark Zuckerberg bragging about having “total control of billions of people’s stolen data”; and Jay-Z rapping Billy Joel’s “We Didn’t Start the Fire” and Hamlet’s “To be or not to be” soliloquy7. As the technology continues to develop and the AI behind it gets smarter with every new piece of data fed to the algorithms, it will become increasingly difficult to gauge what is real and what isn’t. If people are not vigilant about what is on the internet and do not question the sources they are viewing, we could all spiral down into a chasm of misinformation and fake news within moments. To hit this point home, imagine if someone created a video of Joe Biden, Angela Merkel, Jacinda Ardern or Justin Trudeau saying something derogatory about a minority or announcing that COVID-19 was a hoax. How would the U.S., Germany, New Zealand and Canada, let alone the world, react? Would we believe the statements that came afterwards calling it fake? Whom would the countries’ press officials manage to persuade, and who would still prefer to believe the fake video?

As the AI behind these deepfakes improves and they become more convincing and common, what can we do to ensure that what we are seeing is real? How will we know if we can trust what we’re seeing? The sad reality I’ve reached is that, as more data is shared and continues to perfect the AI behind deepfakes, there isn’t much we can do about it.

Passionate about the topic and want to discuss it further? Join our conversation on fake news, fake data and deepfakes on the SCDS forum.

This person does not exist
Image credit: © 2020,