Well, that didn’t work out like we thought it would, Facebook said last month about the “Disputed” tag, which has now been mothballed.
Since March, Facebook has been slapping disputed flags on what some of us call fake news and what others call the stories that mainstream news outlets with hidden agendas want to suffocate.
Aside from whether or not you agree with the use of disputed tags – tags that Facebook’s been allocating with the input of third-party fact-checkers such as Snopes, ABC News, PolitiFact, FactCheck.org and the Associated Press – there’s one thing that’s become clear: the tags haven’t done squat to stop the spread of fake news.
In fact, at least one publisher of admittedly fake news (he was eventually conscience-panged out of the lucrative business) has noted that fake news goes viral way before Facebook systems, partners or users have a chance to report it. Then there’s a consequence that seems obvious in hindsight: traffic to some articles flagged as fake has skyrocketed as a backlash to what some groups see as an attempt to bury the “truth”.
You can imagine: Hey! Facebook is trying to silence this blog! It says we shouldn’t share it! Well, in your FACE, Facebook: Share! Share! Share!
Jeff Smith, Facebook Product Designer, Grace Jackson, User Experience Researcher, and Seetha Raj, Content Strategist, said in a more detailed post published on Medium that Facebook had identified these four ways in which the disputed flags fell short:
- Disputed flags buried critical information: Although the disputed flag alerted someone that fact-checkers disputed the article, it wasn’t easy for people to understand exactly what was false. It required too many clicks, and those additional clicks made it harder for people to see what the fact-checkers had said about the disputed story.
- Disputed flags could sometimes backfire: We learned that dispelling misinformation is challenging. Just because something is marked as “false” or “disputed” doesn’t necessarily mean we will be able to change someone’s opinion about its accuracy. In fact, some research suggests that strong language or visualizations (like a bright red flag) can backfire and further entrench someone’s beliefs.
- Disputed flags required at least two fact-checkers: Disputed flags were only applied after two third-party fact-checking organizations determined an article was false because it was a strong visual signal and we wanted to set a high bar for where we applied it. Requiring two false ratings slowed down our ability to provide additional context and often meant that we weren’t able to do so at all. This is particularly problematic in countries with very few fact-checkers, where the volume of potentially false news stories and the limited capacity of the fact-checkers made it difficult for us to get ratings from multiple fact-checkers.
- Disputed flags only worked for false ratings: Some of our fact-checking partners use a range of ratings. For example, they might use “false,” “partly false,” “unproven,” and “true.” We only applied Disputed flags to “false” ratings because it was a strong visual signal, but people wanted more context regardless of the rating. There are also the rare circumstances when two fact-checking organizations disagree about the rating for a given article. Giving people all of this information can help them make more informed decisions about what they read, trust, and share.
Mind you, Facebook told the Guardian in May that the disputed flag was leading to decreased traffic and sharing. Some of the publishers of disputed news echoed that. But neither Facebook nor those publishers coughed up much detail on the supposedly reduced traffic.
On 20 December, Facebook Product Manager Tessa Lyons said in a blog post that the company is swapping out the disputed tags because they were working about as well as waving a red flag in front of a raging bull.
In fact, if a fake-news fan sees that type of image, they’re likely to dig in deeper, she says:
Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.
So instead, Facebook is going to post a nice, bland, mild-mannered, black-and-white, completely-not-a-red-flag selection of Related Articles, to offer users a bit more context about a given disputed article.
The social media behemoth actually launched Related Articles in 2013 to offer up new articles – in News Feed – that people might find interesting about a given topic after they’ve already read an article. In April 2017, it began to test Related Articles that might appear before visitors read an article shared in News Feed. The articles appear in a box below the link and were designed to provide “additional perspectives and information,” including articles by Facebook’s third-party fact-checking partners.
Instead of a red flag, the Related Articles are simply about putting news into context. Since April, Lyons says, they’ve proved more effective at dampening shares of fake news:
Related Articles… are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts. Indeed, we’ve found that when we show Related Articles next to a false news story, it leads to fewer shares than when the Disputed Flag is shown.
There are those who’ve questioned Facebook’s sincerity about turning off the spigot of marketing revenue that flows from fake news. But Facebook swears it’s truly committed to keeping fake news out, given that it “undermines the unique value that Facebook offers: the ability for you to connect with family and friends in meaningful ways.” That’s why it’s putting better tech and more people on the problem, Lyons says.
And it is indeed having an effect, she says:
Overall, we’re making progress. Demoting false news (as identified by fact-checkers) is one of our best weapons because demoted articles typically lose 80 percent of their traffic. This destroys the economic incentives spammers and troll farms have to generate these articles in the first place.
What kind of economic incentives, you may well ask? Well, you can have a chat with Russian troll Jenna Abrams and her 2,752 troll factory friends for the details. A taste: according to one former troll factory employee, $2.3 million was spent over two years, with up to 90 employees making about $846 (50,000 roubles, or £650) a month.
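As a rough, back-of-envelope check using only those reported figures: if as many as 90 people were each paid about $846 a month for the full 24 months, the wage bill alone comes to roughly $1.8 million, which would leave something like $470,000 of the reported $2.3 million for everything else the operation needed.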
But while Facebook’s fight against fake news means it’s leaving money on the table, it could also spare the company the financial lashings of countries that have had it up to here with fake news.
In December 2016, for example, Germany threatened Facebook with a €500,000 fine per fake news post.
It did so amid fears that Germany’s own election campaign would turn into a Trump-style election circus, “hijacked by news peddlers, conspiracy theorists, racist ideologues, trolls and cyber-bullies,” as the Financial Times put it.
The UK, France and the Netherlands have had similar fears.
Hopefully, the more neutral approach of giving context, plus the lack of a strong, red-flag graphic image, will do a better job at keeping fake news from spreading like wildfire and will keep such countries’ election campaigns on a more rational, less circusy footing.
Mahhn
They could add – “Not Verified/Verified” to stuff if they wanted to.
No expectations from FB for anything though, as last week they blocked content I was posting three times – for spam – even though it linked directly to news reports (one was NBC), which I expected to be okay. So it’s back to posting memes and sub-accurate crap…. unless they’ll share their “approved” list of news sites – if it’s CNN, I’d rather not post.
Laurence Marks
> In April 2017, it began to test Related Articles that might appear before visitors read an article shared in News Feed.
That will work about as well as Amazon’s related products. When I look for a starter for a 2006 Chevy, it will also show me one for a 2001 Dodge!