
AI could cast any of us in fake porn

In the case of revenge porn, people often ask a rhetorical question: if the photos were never taken in the first place, how could ex-partners, or hackers who steal nude photos, ever post them?
Unfortunately, that question now has an answer. Forget fake news: we are now in the age of fake porn.
Fake, as in famous people’s faces – or, for that matter, anybody’s face – near-seamlessly stitched onto porn videos. As Motherboard reports, you can now find actress Jessica Alba’s face on porn performer Melanie Rios’ body, actress Daisy Ridley’s face on another porn performer’s body, and Emma Watson’s face on an actress’s nude body, all on Celeb Jihad – a celebrity porn site that regularly posts celebrity nudes, including stolen/hacked ones.
Here’s Celeb Jihad crowing about a clip of a woman showering:

The never-before-seen video above is from my private collection, and appears to feature Emma Watson fully nude…

The word “appears” is key. The clip is, rather, an example of what’s being called a deepfake.
Motherboard came across the “hobby” of swapping celebrities’ faces onto porn stars’ bodies in December, when it discovered a redditor named “deepfakes” who had made multiple convincing porn videos, including one of “Wonder Woman” star Gal Gadot apparently having sex with her stepbrother.
He also created hardcore porn videos using publicly available footage of Maisie Williams, Scarlett Johansson, and Taylor Swift, among others, and posted them to Reddit.
To make them, he used a machine-learning algorithm, his home computer and open-source code.
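For the technically curious: while deepfakes hasn’t published a write-up, the approach behind these face swaps is widely described as a pair of autoencoders that share a single encoder. Each decoder learns to reconstruct one person’s face from the shared representation, and swapping decoders at generation time performs the face swap. Here’s a minimal sketch of that idea in PyTorch – our own illustration, not deepfakes’ actual code, with all names and layer sizes chosen only to keep the example small:

```python
# Illustrative sketch of the widely described deepfake architecture:
# one shared encoder, one decoder per identity. Not deepfakes' actual code.
import torch
import torch.nn as nn

# The encoder learns a representation of "a face in some pose/expression"
# that is common to both identities.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 256),
)

def make_decoder():
    # Each decoder learns to render one specific person's face
    # from the shared representation.
    return nn.Sequential(
        nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
        nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a = make_decoder()  # trained only on faces of person A
decoder_b = make_decoder()  # trained only on faces of person B

loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    # Each identity is reconstructed through its own decoder, so the
    # encoder is pushed to capture what the two face sets have in common.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# The swap: encode a frame of person A, then decode with B's decoder.
# The output is B's face wearing A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped, aligned face
    swapped = decoder_b(encoder(frame_a))
```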
At that point, the results were a little crude: for example, the image of Gadot’s face didn’t track correctly, Motherboard reported. Even so, you’d have to look closely to discern that it was a fake – at first blush, it’s quite believable.
Since then, production of fake porn assisted by artificial intelligence (AI) has “exploded,” Motherboard reports. Thousands of people are doing it, and the results are ever more difficult to spot as fakes. Nor do you need expertise with sophisticated AI technologies at this point: another redditor – deepfakeapp – has created an app named FakeApp, designed to be used by people without computer science training.

All the tools one needs to make these videos are free, readily available, and accompanied with instructions that walk novices through the process.

And we haven’t seen anything yet: in an email exchange with Motherboard, deepfakeapp promised that coming weeks will bring deepfakes to all at the mere press of a button.

I think the current version of the app is a good start, but I hope to streamline it even more in the coming days and weeks. Eventually, I want to improve it to the point where prospective users can simply select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.

From the subreddit’s wiki:

This app is intended to allow users to move through the full deepfake creation pipeline – creating training data, training a model, and creating fakes with that model – without the need to install Python and other dependencies or parse code.

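To make those three stages concrete, here’s a rough sketch of what the first one – harvesting training data – could look like, built on OpenCV’s stock face detector. Again, this is our own illustration of the pipeline the wiki describes, not FakeApp’s code; stages two and three are summarized in the closing comments:

```python
# Sketch of stage one of the wiki's pipeline: harvesting training data by
# pulling cropped face images out of a video. Illustrative only - this uses
# OpenCV's stock Haar-cascade detector, not whatever FakeApp actually ships.
import cv2

def extract_faces(video_path, size=64):
    """Detect faces frame by frame; return them cropped and resized."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    faces = []
    ok, frame = capture.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(
                gray, scaleFactor=1.3, minNeighbors=5):
            faces.append(cv2.resize(frame[y:y + h, x:x + w], (size, size)))
        ok, frame = capture.read()
    capture.release()
    return faces

# Stage two - training - would feed batches of faces from two such
# collections into a model like the shared-encoder autoencoder above.
# Stage three - creating fakes - runs each face detected in the target
# video through the encoder plus the *other* person's decoder, then
# blends the generated face back into the original frame.
```

That last blending-and-tracking step is where early fakes, like the Gadot clip, tended to give themselves away.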
As the news about deepfakes has spread to major news outlets, so too has condemnation of the rapidly advancing technology. Motherboard spoke with Deborah Johnson, Professor Emeritus of Applied Ethics at the University of Virginia’s school of engineering, who said that neural-network-generated fake videos are undoubtedly going to get so good that it will be impossible to tell the difference between AI-produced fakes and the real thing.

You could argue that what’s new is the degree to which it can be done, or the believability, we’re getting to the point where we can’t distinguish what’s real – but then, we didn’t before. What is new is the fact that it’s now available to everybody, or will be… It’s destabilizing. The whole business of trust and reliability is undermined by this stuff.

One redditor – Gravity_Horse – took to the subreddit to address the “distain [sic] and ire” that’s arisen. Part of his argument was an admission that yes, the practice is a moral affront:

To those who condemn the practices of this community, we sympathize with you. What we do here isn’t wholesome or honorable, it’s derogatory, vulgar, and blindsiding to the women that deepfakes works on.

(Note that the technology would of course work on any gender: as Motherboard reports, one user named Z3ROCOOL22 combined video of Hitler with that of Argentina’s president, Mauricio Macri.)
Gravity_Horse goes on to justify deepfake technology. One of his rationales: be thankful that this technology is in the hands of the good guys, not the creepy sextortionists:

While it might seem like the LEAST moral thing to be doing with this technology, I think most of us would rather it be in the hands of benign porn creators shaping the technology to become more focused on internet culture, rather than in the hands of malicious blackhats who would use this technology exclusively to manipulate, extort, and blackmail.

Gravity_Horse, are you serious? What in the world makes you think that a freely available app that’s truly God’s gift to sextortionists will be passed over by malicious actors?
Gravity_Horse also argues that this is nothing new. Photoshop has been around for years, he says, and it hasn’t resulted in courts being flooded with faked evidence or in the creation of believable revenge porn photos. Technology can create believable fakes, but it can also detect the forgeries, he argues.
Finally, Gravity_Horse argues that, ironically, as deepfakes proliferate and become mainstream…

Revenge porn actually becomes LESS effective, not more effective. If anything can be real, nothing is real. Even legitimate homemade sex movies used as revenge porn can be waved off as fakes as this system becomes more relevant.

Lord, I hope you’re right about that, because this genie is out of the bottle.