Opinion | Be very afraid of the dangers of ‘deepfake’ technology


The Lok Sabha elections are upon us, and our social media apps will soon be inundated with Photoshopped images of politicians doing something silly or despicable. None of us will be spared; we all have enough gullible friends and uncles who will forward them to us. Some of these images will go viral.

Of course, the brighter ones among us will not be easily deceived. The fakery of most Photoshopped images is not very difficult to detect. A fake picture of a newspaper front page, on close examination, will almost invariably reveal that the font of a changed word or phrase does not quite match the rest of the headline, or that a politician's head does not sit perfectly on his neck. All it takes is some scepticism, rather than blind faith in whatever comes your way that fits your political inclinations.

However, a far more sinister technology is looming: “deepfake” technology, driven by Artificial Intelligence (AI), which produces videos that show real people saying and doing fictitious things. Such videos are very difficult to detect.

This is how it is done. You collect a large number of images or videos of a real person and feed them into the program, which builds a detailed 3-D model of the person’s face, including different expressions, skin texture, creases and wrinkles. You can then make the person smile, frown or say anything, and transplant his or her face into an existing video.
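
For the technically curious, the final compositing step can be illustrated with a few lines of Python and OpenCV. This is only a toy sketch, not a real deepfake: actual systems train neural networks on thousands of images, whereas this merely detects a face in a target frame and blends a source face over it. The file names are hypothetical placeholders.

```python
# Toy illustration of the face-replacement (compositing) step only.
# Requires OpenCV (pip install opencv-python). The image file names
# 'source_face.jpg' and 'target_frame.jpg' are placeholder assumptions.
import cv2
import numpy as np

# Haar cascade face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

source = cv2.imread("source_face.jpg")   # face to transplant
target = cv2.imread("target_frame.jpg")  # frame from the existing video

gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]                # bounding box of the first face
    src = cv2.resize(source, (w, h))     # fit the source face to that box
    mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)  # blend whole patch
    center = (x + w // 2, y + h // 2)
    # Poisson (seamless) blending smooths skin tone and lighting at the seam
    output = cv2.seamlessClone(src, target, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped_frame.jpg", output)
```

A real deepfake replaces the crude resize-and-paste above with a learned generator that matches pose and expression frame by frame, which is what makes the results so hard to spot.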

For a better understanding of how deepfake videos are made, watch this TED talk by AI scientist Supasorn Suwajanakorn (goo.gl/mmcGMY). Suwajanakorn took an existing video of Barack Obama and manipulated his lip movements to have him deliver a message completely different from what he had actually said.

The first popular use of deepfakes was in the porn video industry, where porn actors’ faces were replaced with celebrity faces. All such videos that were detected have been taken down. However, the real danger of deepfake technology lies in the areas of justice, news and politics.

How will judges decide what is real video evidence and what is fake? We already live in an era where, much of the time, we are unsure what the truth is, and deepfake technology in the hands of irresponsible journalists could have deleterious consequences. The very nature of journalism and how we get our news has changed: with the explosion of TV news channels and the resulting intense competition, media outlets are more willing than ever to air sensational material.

Besides, over the last decade, more and more of us have been getting our information from social media platforms, where a vast number of users generate largely unfiltered content. We also tend to align our social media experience with our own perspectives, which the algorithms are built to encourage, turning our feeds into echo chambers. So we pass information along without bothering to check whether it is true, lending it greater credibility in the process. Falsehoods are spreading faster than ever before.

This is what makes deepfakes extremely dangerous. What could happen if this technology fell into the hands of maniacs? Imagine a deepfake video of a prominent Indian politician ordering the mass slaughter of a community, or of the leader of a foreign power ordering a nuclear strike against India. Politicians are especially easy targets for deepfakes, since they are often recorded giving speeches while standing at a podium, so only the lip movements need to be synchronised with the fake audio.

In fact, crude deepfake software is already available for free download on the internet.

In 2016, at an Adobe conference, the American software company unveiled a beta version of Adobe Voco. The company claimed that, fed just 20 minutes of anyone speaking, the software could generate pitch-perfect audio in that person’s voice from any script. There was an immediate uproar over the ethics of this, and Adobe has so far refrained from releasing the software commercially. In any case, the world is not short of talented human mimics.

It cuts the other way too. A politician could actually make a disgustingly communal or inflammatory statement, and then claim that the video of it was a fake.

Of course, scientists are hard at work finding ways to detect deepfake videos. Suwajanakorn himself is part of one such effort, looking for flaws in his own creation.

Scientists at the University at Albany, State University of New York, studied hundreds of deepfake videos built from still images and found that the men and women in them did not blink. This makes sense: you rarely photograph a person with their eyes closed. However, a few weeks after the scientists put a draft of their paper online, they received anonymous emails with links to deepfake videos whose protagonists opened and closed their eyes more naturally. The fakers had upped their game by including face images with closed eyes, or whole video sequences, in their training data.
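
The blink cue can be made concrete with the standard eye-aspect-ratio (EAR) measure: the ratio of vertical to horizontal distances between eye landmarks collapses when an eye closes, so a long clip in which the ratio never dips is suspicious. The sketch below is illustrative, not the Albany team’s actual code; it assumes six eye landmarks per frame from a facial-landmark detector (such as dlib’s 68-point model), and the 0.2 threshold and sample numbers are assumptions for demonstration.

```python
# Illustrative blink check using the eye aspect ratio (EAR).
# Assumes six (x, y) eye landmarks per frame from a landmark detector
# (e.g. dlib's 68-point model). Threshold and sample data are assumptions.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply
    when the eyelid closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.2):
    """Count blinks as transitions from open (EAR above threshold)
    to closed (EAR below it). Zero blinks in a long clip is a red flag."""
    closed = np.asarray(ear_series) < threshold
    return int(np.sum(closed[1:] & ~closed[:-1]) + closed[0])

# Toy EAR series: eyes open (~0.3) with one dip (a blink) at frame 4
print(blink_count([0.31, 0.30, 0.29, 0.12, 0.28, 0.30]))  # -> 1
```

As the Albany episode shows, any such single-cue detector is brittle: once the cue is public, fakers simply train it away.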

The war is on. Ultimately, an AI has to be developed to fight this AI. Till then, we should be very afraid.

Sandipan Deb is a former editor of ‘Financial Express’, and founder-editor of ‘Open’ and ‘Swarajya’ magazines.
