Facebook has banned deepfakes.
No, strike that – make it: Facebook has banned some doctored videos, but only the ones made with fancy-schmancy technologies, such as artificial intelligence (AI), in ways that an average person wouldn’t easily spot.
What the policy doesn’t appear to cover: videos made with simple video-editing software, or what disinformation researchers call “cheapfakes” or “shallowfakes.”
The new policy
Facebook laid out its new policy in a blog post on Monday. Monika Bickert, the company’s vice president for global policy management, said that while these videos are still rare, they present “a significant challenge for our industry and society as their use increases.”
She said that going forward, Facebook will remove “misleading manipulated media” that’s been “edited or synthesized” – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and that would likely mislead viewers into thinking a subject said words they didn’t actually say.
Another criterion for removal is that part about fancy-schmancy editing techniques: when a video…
…is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Non-consensual deepfake porn accounted for 96% of the deepfake videos online as of the first half of 2019, according to Deeptrace, a company that uses deep learning and computer vision to detect and monitor deepfakes.
As far as Facebook policy is concerned, the new rules are redundant for those videos: the platform already forbids adult nudity and sexual activity.
Facebook will be using its own staff, as well as independent fact-checkers, to judge a video’s authenticity.
Facebook says it won’t take down slurring Pelosi cheapfake
Given the latitude the new policy gives to satire, parody, and videos altered with simple or cheap tools, some pretty infamous, widely shared cheapfakes could well get a pass and stay on the platform.
As the Washington Post notes, that could mean a video that, say, has been slowed to 75% of its original speed – as was the one that made House Speaker Nancy Pelosi look drunk or ill – may pass muster.
In fact, Facebook confirmed to Reuters that the shallowfake Pelosi video isn’t going anywhere. In spite of the thrashing critics gave the company for refusing to delete the video – which went viral after being posted in May 2019 – Facebook said in a statement that it doesn’t meet the standards of the new policy, since it wasn’t created with AI:
The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.
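To see why that distinction matters, consider how little it takes to produce this kind of edit. The sketch below is purely illustrative – a comparable slowdown to 75% speed, not a recreation of the actual clip – with hypothetical file names, and it assumes the ffmpeg tool is installed on the system:

```python
# Illustrative sketch only: shows how little sophistication a slowdown
# "cheapfake" requires. File names are hypothetical; ffmpeg must be on PATH.
import subprocess

def slow_down(src: str, dst: str, speed: float = 0.75) -> None:
    """Re-time a clip to `speed` times its original rate (0.75 = 75% speed).

    setpts stretches the video timestamps; atempo slows the audio without
    shifting its pitch. No AI or machine learning is involved, which is
    exactly the kind of manipulation the new policy leaves alone.
    """
    filters = f"[0:v]setpts=PTS/{speed}[v];[0:a]atempo={speed}[a]"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-filter_complex", filters,
         "-map", "[v]", "-map", "[a]",
         dst],
        check=True,
    )

slow_down("speech.mp4", "speech_slowed.mp4")
```

A few lines of scripting around free software, in other words – no machine learning in sight, and therefore nothing the new rules would touch.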
Drew Hammill, Pelosi’s Deputy Chief of Staff, criticized Facebook’s new policy, saying that it misses the mark when it comes to tackling fake news.
Nor does Facebook seem ready to censor cheapfakes that result from mislabeled footage, spliced dialogue or quotes taken out of context. Last week we saw one such video: a heavily edited clip that made presidential candidate Joe Biden come off like a white nationalist. It went viral on Thursday, with at least one Twitter share reportedly retweeted more than 1 million times.
On Tuesday, Bill Russo, a spokesman for Joe Biden’s 2020 campaign, dubbed the new policy an “illusion of progress.” The Post quoted him:
Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created.