Launched on Friday and viral practically right off the bat, the brand-new, AI-powered, deepfake face-swapping app Zao can swap users’ faces onto those of celebrities zippity quick.
And just as fast as greased lightning, the app got itself banned from China’s top messaging app service, WeChat, after its meteoric rise in China’s app stores was countered by a fierce privacy backlash.
Sina Technology reports that on Sunday, the company behind the Zao mobile app posted on Weibo – China’s Twitter-like microblogging service – an apology and a request to please give it some time to figure out privacy issues.
Forbes gave this translation:
We thoroughly understand the anxiety people have towards privacy concerns. We have received the questions you have sent us. We will correct the areas we have not considered and require some time.
Despite that apology and a tweak to Zao’s original “we own your stuff forever” terms of service, that same day WeChat banned the posting of any external links shared from Zao, saying that…
The app has security risks.
The Yr-content-R-Ours Frevr ToS
Zao’s original terms of service (ToS) allowed users to upload video clips for face-swapping with TV and movie stars, but the app maker would be keeping that stuff forever and doing whatever it wanted with it. This is what the ToS originally said, according to KrAsia:
Before uploading and posting your content, you grant us (Zao, Zao-related firms and Zao users) a worldwide, royalty-free, irrevocable, perpetual, transferable sub-licensable license to use.
Wang Jun, a lawyer at Beijing TA Law Firm, told Sina Tech News that from a legal standpoint, the clause wasn’t valid. According to industry observers, that kind of language is a CYA clause: an attempt to shift liability for any possible content rights violations onto users who might upload footage over which they hold no copyright.
Uploading such footage is certainly possible, as a French security researcher who goes by the Twitter alias Elliot Alderson (a reference to the Mr. Robot TV show) discovered. Zao was only available to Chinese users when it debuted, but the researcher said that they’d managed to get an account. The researcher posted an uploaded clip from the TV show The Big Bang Theory, featuring what one assumes is their face, pasted over that of character Sheldon Cooper:
It’s time for a thread about #ZAO, the new Chinese app which blew up since Friday. The app is accessible only to Chinese people for the moment but I managed to get an account ;)— Baptiste Robert (@fs0c131y) September 2, 2019
This "AI facial" app allows you to add your face on predefined clip.
Over the weekend, Zao updated its ToS, adding a special notification stating that it wouldn’t use uploaded content in any form other than face-swapping without users’ permission, and that Zao would erase data from its servers if users delete their uploads, KrAsia reports.
Besides issues of who owns, or is liable for, user-uploaded content, another aspect of the backlash that greeted Zao has to do with the widespread adoption of facial scan payment in China. If users are uploading their photos to a public platform such as Zao, might that set them up for financial identity theft?
Alipay, one of China’s biggest digital payment platforms, on Sunday responded to Zao’s appearance by reassuring users that its security checks for facial recognition payment couldn’t be fooled by current face-swapping apps.
Besides, insurance would cover any potential losses from ID theft, Alipay said. KrAsia quoted a statement on the matter that Alipay posted to Weibo:
Even if there is a very minor probability that an incident of identity theft occurs, such a loss will be fully covered by insurance.
Cold comfort? Insurance coverage is irrelevant when it comes to scenarios in which identity theft leads to damage that’s not strictly financial – for example, use of deepfakes in nonconsensual porn.
Such usage was recently criminalized in the US state of Virginia. Other states, and Congress, are considering or have already passed similar laws.
Users’ reactions to Zao’s ToS sound an awful lot like those that arose in July, after somebody noticed the same type of language in the ToS of an app called FaceApp, another face-manipulation app. You might have seen friends posting (very convincing) photos of themselves that the mobile app had prematurely aged.
Launched in 2017, FaceApp (which isn’t associated with Facebook) is an iOS and Android app from Russian company Wireless Lab that lets you upload a selfie and then manipulates it for you, changing your facial expression, age, and even your gender.
It quietly went about churning out those manipulated images until somebody noticed that the company was claiming complete rights to the photos it processed. Then it went viral, and outrage set the internet abuzz: not just because of the Draconian grab for rights to users’ content, but also because FaceApp’s location in Russia raised questions about how and when the company provides third parties – potentially including foreign governments – with access to US citizens’ data.
The app, and its being based in Russia, piqued interest high up: Senator Chuck Schumer wrote to the chair of the Federal Trade Commission (FTC), demanding that the FTC and the FBI look into the app’s national security and privacy risks.
Concern over Zao’s ownership by Momo
If ToS and potential identity theft aren’t enough reasons for the privacy-minded to take quick notice of Zao – a whole lot faster than anybody noticed similar problems with FaceApp – its ownership provides yet another, Chinese media reports.
Zao is backed by Momo, a location-based mobile social app that lets strangers meet each other through videos, text, voice, photos, and their proximity via geographic location data. It’s a way to help you build “true, effective, and healthy social relationships,” Momo says – or perhaps a fake and potentially dangerous relationship with a stalker who can see your personal data and exactly where you are.
Zao’s ratings have reportedly dropped in the App Store since the privacy/copyright issues arose over the weekend.
Users obviously adore these photo-manipulation deepfake apps. They keep coming out, and they keep going viral. What’s encouraging in the continued progression of such apps is that people are actually taking the time to read the terms and conditions.
Let’s keep that up!