
Facebook’s rating you on how trustworthy you are

You, me, everyone: we're all being rated on a scale from zero to one, based on signals such as whether we waste Facebook's time by flagging posts as false when they aren't.

Are you trustworthy, or are you just another fake news boil that needs to be lanced?
Facebook is figuring that out, having spent the past year developing a previously unreported trustworthiness rating system for its users. We Facebook users are each being assigned a trustworthiness rating between zero and one, according to a Washington Post interview with Tessa Lyons, the product manager in charge of fighting misinformation.
This is part of the ongoing battle Silicon Valley is waging with those who’ve been tinkering with social media platforms, from the Russian actors who littered Twitter with propaganda and let loose armies of automated accounts on both it and Facebook, to the fake-news pushers on both ends of the political spectrum.
Facebook has had a bumpy time of it when it comes to fake news.
In April, Facebook started putting some context around the sources of news stories. That includes all news sources: the ones with good reputations, the junk factories, and the junk-churning bot armies making money from them.
You might also recall that in March 2017, Facebook started slapping “disputed” flags on what its panel of fact-checkers deemed fishy news.
As it happened, these flags just made things worse. They did nothing to stop the spread of fake news, instead only causing traffic to some disputed stories to skyrocket as a backlash to what some groups saw as an attempt to bury “the truth”.
Last month, Facebook threw in the towel on the notion that it’s going to get rid of misinformation. The thinking: it might be raw sewage, but hey, even raw sewage has a right to flow, right? Instead, it says that it’s demoting it: punishment that extends to Pages and domains that repeatedly share bogus news.
It all came to a head at a press event that was supposed to be feel-good PR: a notion that CNN reporter Oliver Darcy skewered by grilling Facebook Head of News Feed John Hegeman about its decision to allow Alex Jones’ conspiracy news site InfoWars on its platform.
How, Darcy asked, can the company claim to be serious about tackling the problem of misinformation online while simultaneously allowing InfoWars to maintain a page with nearly one million followers on its platform?
Hegeman’s reply: the company…

…does not take down false news.

But that doesn’t mean that social media platforms aren’t working to analyze account behavior to spot rule-breaking actors. As the Washington Post points out, Twitter now factors in the behavior of other accounts in a person’s network as a risk signal when judging whether that person’s tweets should be spread.
Thus, it shouldn’t come as a surprise to learn that Facebook is doing something similar. But just exactly how it’s doing it is – again, no surprise – a mystery. Like all of Facebook’s algorithms – say, the ones that gauge how likely it is we’ll buy stuff, or the ones that try to figure out if we’re using a false identity – the user-trustworthiness one is as opaque as chocolate pudding.
The lack of transparency into how Facebook is judging us doesn’t make it easy for Facebook fact checkers to do their job. One of those fact checkers is First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation.
Director Claire Wardle told the Washington Post that even though this lack of clarity is tough to deal with, it’s easy to see why Facebook needs to keep its technology close to the vest, given that it can be used to game the platform’s systems:

Not knowing how [Facebook is] judging us is what makes us uncomfortable. But the irony is that they can’t tell us how they are judging us – because if they do, the algorithms that they built will be gamed.

A case in point is the controversy over conservative conspiracy theorist Alex Jones and his InfoWars site, which ultimately wound up with both being banned from Facebook and other social media sites earlier in the month.
As Facebook executives saw it, it was no clear-cut victory over misinformation. Rather, they suspected that mass reporting of Jones’s content was part of an effort to game Facebook’s systems.
Lyons told the Washington Post that if people were to report only the posts that were false, her job would be easy. The truth is far more complicated, though. She said that soon after Facebook gave users the ability to report posts they considered to be false, in 2015, she realized that people were flagging posts simply because they didn’t agree with the content.
Those reported posts get forwarded to Facebook’s third-party fact checkers. To use their time efficiently, Lyons’s team needed to figure out whether those who were flagging posts were themselves trustworthy.


One signal that Facebook uses to assess that trustworthiness is how people interact with articles, Lyons said:

For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.
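Facebook hasn’t said how that weighting actually works, but Lyons’s description maps onto a familiar pattern: give more weight to flags from people whose past flags turned out to be accurate. Here’s a minimal, purely illustrative Python sketch of that idea, assuming a simple smoothed hit-rate per reporter; every name and number in it is invented for the example and nothing here reflects Facebook’s real algorithm.

from dataclasses import dataclass

@dataclass
class Reporter:
    confirmed_reports: int = 0   # past flags later confirmed false by fact-checkers
    rejected_reports: int = 0    # past flags on articles that turned out to be true

    @property
    def trust(self) -> float:
        # Score between 0 and 1: smoothed fraction of past flags that checked out,
        # so a brand-new reporter starts at a neutral 0.5 (Laplace smoothing).
        total = self.confirmed_reports + self.rejected_reports
        return (self.confirmed_reports + 1) / (total + 2)

def weighted_flag_score(reporters):
    # Sum each flagger's trust: indiscriminate flaggers contribute very little.
    return sum(r.trust for r in reporters)

# Two flaggers with good track records outweigh five who flag almost everything.
accurate = [Reporter(confirmed_reports=9, rejected_reports=1) for _ in range(2)]
noisy = [Reporter(confirmed_reports=1, rejected_reports=19) for _ in range(5)]

print(round(weighted_flag_score(accurate), 2))  # 1.67
print(round(weighted_flag_score(noisy), 2))     # 0.45

The point of a scheme like this is simply that an article flagged by a couple of historically reliable reporters looks more worth a fact-checker’s time than one flagged by a crowd that flags everything it disagrees with.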

As far as the other signals that Facebook uses to rate us go, the company’s not saying. But fuzzy as the inner workings may be, it’s probably safe to assume that the overall trustworthiness of Facebook content is better off with us being rated than not. Think about it: do you trust the input of outraged grumps who stab at the “report” button just because they don’t agree with something?
No, me neither. And if it takes a trustworthiness rating to demote that kind of behavior, that seems like a fair trade-off.
At least it’s not a trustworthiness score that’s being assigned to us publicly, as if we were products, like the Peeple people proposed a few years back.
Sure, we’re products as far as Facebook is concerned. But at least our score isn’t being stamped on our rumps!


19 Comments

Sounds familiar: (if link does not show, search in Sophos News at top for; china social credit score) https://nakedsecurity.sophos.com/2018/03/29/jaywalkers-to-be-named-shamed-and-fined-thanks-to-facial-recognition/
I’d rather see a new Like icon, the one that was once thought to be Chocolate Yogurt (DeadPool)
So we can just tag it as… poop. lol


The big problem is that unpopular views are not necessarily wrong. But, some companies treat certain viewpoints as false simply because of political correctness.
Something that is opinion or belief simply doesn’t belong in the same set of calculations as true/false specifics. We have to be really careful about this.


But also some things are facts and some things are lies; I think they are more concerned about the black and white stuff. If you are the kind of person who posts nothing but articles about how Obama isn’t a US citizen, and how vaccines are bad… I really hope you do influence the discourse less.


Gotta be careful. No scientific hypothesis can ever be proven correct. They can only be falsified. If it is tested a lot, but nobody has falsified it, it gains standing and may become a theory. A theory that gets tested even more may eventually become a law.
But, even scientific laws are subject to falsification. An example is Einstein’s falsification of Newtonian Laws. They weren’t quite correct; Einstein showed that those Laws were “off” by a bit in different frames of reference (i.e. Relativity). For instance, time itself flows faster the further out from a gravity source one is. It’s very small, but it’s there.
More to the point on Facebook, popularity should never be used to gauge truth. Copernicus was roundly dismissed for showing the falsity of the then-prevailing Ptolemaic model of the solar system. But he was right; the prevailing model’s calculated numbers were wrong. Still, he was disrespected by the scientists of his day.


“Not knowing how [Facebook is] judging us is what makes us uncomfortable. But the irony is that they can’t tell us how they are judging us – because if they do, the algorithms that they built will be gamed.”
Quis custodiet ipsos custodes? [Who will guard the guards themselves?]


Facebook is in for a world of hurt now that they are passing judgement on content over and above just the terms of service. InfoWars will be remembered as the Pearl Harbor of online censorship by private companies.


Passing judgement on content is what social media (and search) companies do all day long.


They try to evaluate them according to their TOS, but there’s no way they can police every comment. With some of these suit-happy European countries suing for millions per infringement, I don’t expect they’d win that battle.


Under GDPR, if an algorithm is used to create data about us, are we allowed to question it?


I assume you can ask what data it created but I don’t know how far you would get if you insisted on knowing exactly how it was created. You’d imagine that copyright and trade secrecy laws would somehow trump “right to know how”…


I’m not so sure; A22 of GDPR is fairly extensive:
The GDPR applies to all automated individual decision-making and profiling.
Article 22 of the GDPR has additional rules to protect individuals if you are carrying out solely automated decision-making that has legal or similarly significant effects on them.
You can only carry out this type of decision-making where the decision is:
necessary for the entry into or performance of a contract; or
authorised by Union or Member state law applicable to the controller; or
based on the individual’s explicit consent.
You must identify whether any of your processing falls under Article 22 and, if so, make sure that you:
give individuals information about the processing;
introduce simple ways for them to request human intervention or challenge a decision;
carry out regular checks to make sure that your systems are working as intended.
Ref: UK ICO For organisations > Guide to the General Data Protection Regulation (GDPR) > Individual rights > Rights related to automated decision making including profiling


Good info… but I wonder if “give individuals information about the processing” would extend to “show them the source code if that’s what they insist they need to see”. Quite how much detail the regulators will consider sufficient remains to be seen. Also, whether any sort of social networking score would be considered as having “legal or similarly significant effects”.
For example, with US border control now asking for social media account names along with the traditional name/address/phone number trio, perhaps your algorithmically-determined trustworthiness in social media terms might become highly significant.
Interesting times!


“…or based on the individual’s explicit consent.”
Creating or maintaining a Facebook account could be construed (or explicitly defined in the User Agreement) as consent.

