
Deepfakes have doubled, overwhelmingly targeting women

Deepfake tech has push-button apps and service portals. Can code commodification do the same for detection, so women can actually afford it?

OK, let’s pull deepfakes back from the nail-biting, perhaps hyperbolic, definitely hyperventilating, supposed threats to politicians and focus on who’s really being victimized.

Unsurprisingly enough, according to a new report, that would be women.

96% of the deepfakes created in the first half of the year were pornography: mostly nonconsensual, mostly casting celebrities – without compensation for the actors, let alone their permission.

The report, titled The State of Deepfakes, was issued last month by Deeptrace: an Amsterdam-based company that uses deep learning and computer vision for detecting and monitoring deepfakes and which says its mission is “to protect individuals and organizations from the damaging impacts of AI-generated synthetic media.”

According to Deeptrace, the number of deepfake videos almost doubled over the seven months leading up to July 2019, to 14,678. The growth is supported by the increased commodification of tools and services that enable non-experts to churn out deepfakes.

One recent example was DeepNude, an app that used a family of dueling computer programs known as generative adversarial networks (GANs): machine learning systems that pit two neural networks – a generator and a discriminator – against each other, the same technique used to generate convincing photos of people who don’t exist. DeepNude not only advanced the technology, it also put it into an app that anybody could use to strip off (mostly women’s) clothes so as to generate a deepfake nudie within 30 seconds.
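To make the “dueling programs” idea concrete, here’s a minimal, illustrative GAN training loop in PyTorch. It has nothing to do with DeepNude’s actual code: it trains on toy 2-D points rather than images, and the network sizes and learning rates are arbitrary choices for the sketch. The generator learns to mimic the “real” data distribution while the discriminator learns to tell real samples from generated ones.

```python
# Minimal GAN sketch (illustrative only; unrelated to DeepNude's code).
# Generator and discriminator are trained in alternation: the generator
# tries to produce samples the discriminator can't tell from real ones.
import torch
import torch.nn as nn

LATENT_DIM = 8   # size of the random noise vector fed to the generator
DATA_DIM = 2     # toy "real" data: 2-D points instead of images

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),        # raw logit: "how real does this sample look?"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for real images: points drawn from a shifted Gaussian.
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator output 1 ("real").
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Swap the toy points for face images and the tiny networks for convolutional ones and you have, in broad strokes, the recipe that tools in this space build on.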

We also saw a faceswapping app, Zao, rocket to the top of China’s app stores last month, sparking a privacy backlash and just as quickly getting itself banned from China’s top messaging service, WeChat.

While Deeptrace says most deepfakes come from English-speaking countries, it notes that it’s not surprising to see “a significant contribution to the creation and use of synthetic media tools” from web users in China and South Korea.

Deeptrace says that non-consensual deepfake pornography accounted for 96% of the total number of deepfake videos online. Since February 2018, when the first deepfake porn site was registered, the top four such sites have received more than 134 million views on videos targeting hundreds of female celebrities worldwide, the firm said. That illustrates what will surprise approximately 0% of people: deepfake porn has a healthy market.

History lesson

As Deeptrace tells it, the term ‘deepfake’ was coined by the Reddit user u/deepfakes, who created a subreddit of the same name on 2 November 2017. The forum was dedicated to the creation and use of deep learning software for synthetically faceswapping female celebrities into pornographic videos.

Reddit banned r/deepfakes in February 2018 – around the same time that Pornhub and Twitter banned deepfake content – but the faceswap source code, having been donated to the open-source community and uploaded to GitHub, seeded multiple project forks, with programmers continually improving the quality, efficiency, and usability of the new code libraries.

Since then, we’ve seen faceswapping apps as well as one app for synthetic voice cloning (and one business get scammed by a deepfake CEO voice that talked an underling into a fraudulent $243,000 transfer).

Most of the apps require programming ability, plus a powerful graphics processor, to operate effectively. Even here, though, the technology is growing more accessible: detailed, step-by-step tutorials now cover the most popular deepfake apps, and recent updates have made several of their GUIs easier to use.

Deeptrace says there are now also service portals for generating and selling custom deepfakes. In most cases, customers have to upload photos or videos of their chosen subjects for deepfake generation. One service portal Deeptrace identified required 250 photos of the target subject and two days of processing to generate the deepfake. The prices of the services vary, depending on the quality and duration of the video requested, but can cost as little as $2.99 per deepfake video generated, Deeptrace says.

The DeepNude app got pushed offline, but it has since turned into a case study in deepfake service portals. In spite of the authors saying that they’d “greatly underestimated the volume of download requests” and crying out that “the world is not ready for DeepNude,” the world showed that it was, in fact, more than ready.

The open-source code was subsequently cracked, independently repackaged, and distributed through various online channels, such as open-source repositories and torrenting websites, and it has spawned two new service portals offering allegedly improved versions of the original DeepNude. Charges range from $1 per photo to $20 for a month of unlimited access.

Oh, I guess the world is ready for DeepNudes, said the original creators, who were also ready to line their pockets, given that they put DeepNude up for sale on 19 July 2019 for $30,000 via an online business marketplace, where it sold to an anonymous buyer.

Well, yikes, Deeptrace said. That was a disaster in the making – at least for women, if not for the $30K-richer DeepNude creators:

The moment DeepNude was made available to download it was out of the creators’ control, and is now highly difficult to remove from circulation. The software will likely continue to spread and mutate like a virus, making a popular tool for creating non-consensual deepfake pornography of women easily accessible and difficult to counter.

Verified deepfakes include an art project that turned Facebook CEO Mark Zuckerberg into Mark Zucker-borg: the CEO’s evil deepfake twin who implied that he’s in total control of billions of people’s stolen data and ready to control the future.

We’ve also seen enhanced fake digital identities used in fraud, infiltration and espionage.

Besides the voice deepfake, there have been LinkedIn deepfake personas: one such was “Katie Jones”, an extremely well-connected redhead and purportedly a Russia and Eurasia Fellow at the top think tank Center for Strategic and International Studies (CSIS), who was eager to add you to her professional network of people to spy on.

The top 10 women most exploited in deepfakes

Deeptrace didn’t publish the names of the women most often cast in nonconsensual deepfake porn, but it did list them by nationality and profession. Most are from Western countries, including a British actress who appeared in 257 nonconsensual porn videos.

But the second and third most frequently targeted women, as well as the most frequently viewed one, are South Korean K-pop singers.

The conclusions

Deepfakes pose a range of threats, Deeptrace concludes. Awareness of deepfakes alone is destabilizing political processes: the credibility of videos featuring politicians and public figures is slipping, even in the absence of any forensic evidence that they’ve been manipulated.

The tools have been commodified, which means we’ll likely see increased use of deepfakes by scammers looking to boost the credibility of their social engineering fraud, and by fake personas conducting espionage on platforms such as LinkedIn.

What deepfakes are really about

Political intrigue and falsified identities as a means to conduct espionage or fraud are scary prospects, but in the greater scheme of things, they’re just a drop in the bucket when it comes to the harm being done by deepfakes.

Henry Ajder, head of research analysis at Deeptrace, told the BBC that much of the discussion of deepfakes misses the mark. The real victims aren’t corporations or governments, but rather women:

The debate is all about the politics or fraud and a near-term threat, but a lot of people are forgetting that deepfake pornography is a very real, very current phenomenon that is harming a lot of women.

Now’s the time to act, the Deeptrace report said:

The speed of the developments surrounding deepfakes means this landscape is constantly shifting, with rapidly materializing threats resulting in increased scale and impact.

Some of the internet’s biggest players are already on board with that and have been for a while. Google, for example, recently produced and released thousands of deepfakes in order to aid detection efforts.

Yet another data set of deepfakes is in the works, this one from Facebook. Last month, the platform announced that it was launching a $10m deepfake detection project and that it would make the data set available to researchers.

That’s all well and good, but at least one expert has questioned whether deepfake-detection technology is a worthwhile effort. The BBC talked to Katja Bego, principal researcher at innovation foundation Nesta, who noted that it’s not much use to flag a video as fake after it’s already gone viral:

A viral video can reach an audience of millions and make headlines within a matter of hours. A technological arbiter telling us the video was doctored after the fact might simply be too little too late.

Beyond the too-little-too-late critique of detection technology lies a simple socioeconomic fact, one the BBC pointed out: with hundreds or thousands of women being victimized, how likely is it that they’ll be able to afford to hire specialists who can pick apart the deepfakes being used to exploit them?

Deepfake technology is growing and flourishing thanks to the commodification and open-sourcing of code, which has led to push-button apps and service portals. Will the same forces lead to push-button apps and portals that can strip a deepfake video down and expose it as a fake?
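If they do, the simplest version might look something like the sketch below: sample frames from a suspect video and average a real-vs-fake score over them. This is a hypothetical illustration, not any vendor’s actual detector – the classifier here is an untrained placeholder standing in for whatever model a real service would ship, and the video file name is made up.

```python
# Hypothetical sketch of a "push-button" deepfake check: sample frames
# from a video and average a real-vs-fake score over them.
# The classifier below is an untrained placeholder, not a real detector.
import cv2                      # pip install opencv-python
import torch
import torch.nn as nn

classifier = nn.Sequential(     # placeholder frame classifier
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),           # one logit: how "fake" the frame looks
)

def fake_score(video_path, every_nth=30):
    """Average 'fake' probability over every Nth frame of the video."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            frame = cv2.resize(frame, (224, 224))
            x = torch.from_numpy(frame).float().permute(2, 0, 1) / 255.0
            with torch.no_grad():
                scores.append(torch.sigmoid(classifier(x.unsqueeze(0))).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None

# print(fake_score("suspect_clip.mp4"))  # closer to 1.0 => more likely fake
```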

Let’s hope the makers of deepfake detection are thinking along those lines – as in, how can we use these detection technologies to undo the harm being done to the real victims? Deeptrace, I’m talking to you. What technologies will you, and others in this field, bring to the majority of deepfake victims… in a way that they can afford?

3 Comments

Most disturbing to me is that it enables politicians to disavow videos of things they did say or do. It’s the perfect way to muddy the waters, which seems to be a common tactic nowadays.

If knowledge of the existence of deep fakes becomes sufficiently widespread, at what point do we assume that the videos of victims who say “it’s a deep fake” are actually a deep fake?

I can imagine jealous current boyfriends or spouses being a little suspicious.

But I guess here’s a good signal to process… did you go looking for videos on a porn site, or was it forwarded to you?

If you went looking, when you are in a relationship, then, maybe you have no moral high ground irrespective of how genuine or not the video is.

If it was forwarded to you, seriously, what are the chances it was genuine? I’d have thought pretty low.

I think calling a fake picture of someone “exploitation” is also hyperbolic nail-biting. Here’s a terrifying fact: 100% of women have been nonconsensually exploited in this way in people’s imaginations already. And in fact, fantasizing about strangers without their consent may even be more commonly perpetrated by women than men! Adding realistic-looking images, or diary entries that don’t explicitly acknowledge that they’re fiction, might ramp up the creep factor a bit, but they don’t really change the game or represent any profound shift in reality.
