Sophos News

Facebook’s rating you on how trustworthy you are

Are you trustworthy, or are you just another fake news boil that needs to be lanced?
Facebook is figuring that out: it has spent the past year developing a previously unreported system that assigns users a trustworthiness rating between zero and one, according to a Washington Post interview with Tessa Lyons, the product manager in charge of fighting misinformation.
This is part of the ongoing battle Silicon Valley is waging with those who’ve been tinkering with social media platforms, from the Russian actors who littered Twitter with propaganda and let loose armies of automated accounts on both it and Facebook, to the fake-news pushers on both ends of the political spectrum.
Facebook has had a bumpy time of it when it comes to fake news.
In April, Facebook started putting some context around the sources of news stories. That includes all news sources: the outlets with good reputations, the junk factories, and the junk-churning bot armies making money from it all.
You might also recall that in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.
As it happened, these flags just made things worse. They did nothing to stop the spread of fake news; instead, traffic to some disputed stories skyrocketed as a backlash from groups that saw the flags as an attempt to bury “the truth”.
Last month, Facebook threw in the towel on the notion that it’s going to get rid of misinformation. The thinking: it might be raw sewage, but hey, even raw sewage has a right to flow, right? Instead, Facebook says it’s demoting such content: a punishment that extends to Pages and domains that repeatedly share bogus news.
It all came to a head at a press event that was supposed to be feel-good PR: a notion that CNN reporter Oliver Darcy skewered by grilling Facebook’s Head of News Feed, John Hegeman, about the company’s decision to allow Alex Jones’ conspiracy news site InfoWars to stay on the platform.
How, Darcy asked, can the company claim to be serious about tackling the problem of misinformation online while simultaneously allowing InfoWars to maintain a page with nearly one million followers on Facebook?
Hegeman’s reply: the company…

…does not take down false news.

But that doesn’t mean that social media platforms aren’t working to analyze account behavior to spot bad actors. As the Washington Post points out, Twitter now factors in the behavior of other accounts in a person’s network as a risk factor when judging whether that person’s tweets should be spread.
Thus, it shouldn’t come as a surprise to learn that Facebook is doing something similar. But exactly how it’s doing it is – again, no surprise – a mystery. Like all of Facebook’s algorithms – say, the ones that gauge how likely it is we’ll buy stuff, or the ones that try to figure out if we’re using a false identity – the user-trustworthiness one is as opaque as chocolate pudding.
The lack of transparency into how Facebook is judging us doesn’t make it easy for Facebook’s fact-checkers to do their job. One of those fact-checkers is First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation.
Director Claire Wardle told the Washington Post that even though this lack of clarity is tough to deal with, it’s easy to see why Facebook needs to keep its technology close to the vest, given that it can be used to game the platform’s systems:

Not knowing how [Facebook is] judging us is what makes us uncomfortable. But the irony is that they can’t tell us how they are judging us – because if they do, the algorithms that they built will be gamed.

A case in point is the controversy over conservative conspiracy theorist Alex Jones and his InfoWars site, which ultimately ended with both being banned from Facebook and other social media sites earlier this month.
The way Facebook executives saw it, this was no clear-cut victory over misinformation. Rather, they suspected that mass reporting of Jones’s content was part of an effort to game Facebook’s systems.
Lyons told the Washington Post that if people were to report only the posts that were false, her job would be easy. The truth is far more complicated, though. She said that soon after Facebook gave users the ability to report posts they considered to be false, in 2015, she realized that people were flagging posts simply because they didn’t agree with the content.
Those reported posts get forwarded to Facebook’s third-party fact checkers. To use their time efficiently, Lyons’s team needed to figure out whether those who were flagging posts were themselves trustworthy.


One signal that Facebook uses to assess that trustworthiness is how people interact with articles, Lyons said:

For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.
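
To make that weighting idea concrete, here’s a minimal sketch in Python of how a reporter-trust score along the lines Lyons describes might work. It’s purely hypothetical (Facebook hasn’t published its formula, and names like Reporter, trust_score and review_priority are invented for illustration): each user’s past “this is false” flags are compared against fact-checkers’ verdicts, and users whose flags are usually confirmed get more say in which posts go to the fact-checkers first.

# Hypothetical sketch of reporter-trust weighting; not Facebook's actual algorithm.
# A user's past "false news" flags are compared with fact-checker verdicts, and
# users whose flags were usually confirmed carry more weight on future flags.

from dataclasses import dataclass

@dataclass
class Reporter:
    confirmed_flags: int = 0   # flags later rated false by fact-checkers
    total_flags: int = 0       # all "false news" flags this user has submitted

    def trust_score(self) -> float:
        """Score between 0 and 1, smoothed so brand-new flaggers start near neutral."""
        # Laplace smoothing: pretend we've already seen one confirmed and one unconfirmed flag.
        return (self.confirmed_flags + 1) / (self.total_flags + 2)

def review_priority(flaggers: list[Reporter]) -> float:
    """Sum of the flaggers' trust scores: higher means send to fact-checkers sooner."""
    return sum(r.trust_score() for r in flaggers)

# A careful reporter vs. someone who flags everything they disagree with.
careful = Reporter(confirmed_flags=9, total_flags=10)       # trust ~0.83
scattershot = Reporter(confirmed_flags=2, total_flags=40)   # trust ~0.07
print(review_priority([careful, scattershot]))              # ~0.90

The smoothing in trust_score simply keeps new flaggers near a neutral score until they build up a track record, while a careful reporter’s flag ends up counting for far more than an indiscriminate one’s, which is the behavior Lyons describes.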

As far as the other signals that Facebook uses to rate us go, the company’s not saying. But fuzzy as the inner workings may be, it’s probably safe to assume that the overall trustworthiness of Facebook content is better off with us being rated than not. Think about it: do you trust the input of outraged grumps who stab at the “report” button just because they don’t agree with something?
No, me neither. And if it takes a trustworthiness rating to demote that kind of behavior, that seems like a fair trade-off.
At least it’s not a trustworthiness score that’s being assigned to us publicly, as if we were products, like the Peeple people proposed a few years back.
Sure, we’re products as far as Facebook is concerned. But at least our score isn’t being stamped on our rumps!

