Another whistleblower, this time a former Facebook insider, has told British MPs that covert, “utterly horrifying” data harvesting has been routine at the platform; that hundreds of millions of Facebook users were likely affected by apps such as the one Cambridge Analytica (CA) got data from; and that Facebook has a history of burying its head in the sand, likely frightened of being found liable for what it’s enabled developers to do with user data.
The first whistleblower was CA co-founder Christopher Wylie, who worked with Cambridge University professor Aleksandr Kogan to obtain the data behind a tool for profiling voters and influencing the 2016 US presidential election and the Brexit campaign. Kogan has been linked to previously undisclosed Russian affiliations.
The latest whistleblower is Sandy Parakilas, the platform operations manager responsible for policing data breaches by third-party software developers at Facebook between 2011 and 2012. He told The Guardian that he’s disheartened by recent disclosures about how the company Global Science Research scraped tens of millions of Facebook profiles and gave the data to CA: a voter-profiling company whose “psychographic profiling” was funded, and put to use, by conservative investors.
Facebook got itself into this mess, Parakilas told British MPs on Wednesday. He tried to warn the company: in fact, he shared his concerns with execs who were “among the top five” at Facebook, he said while giving evidence to Parliament’s Digital, Culture, Media and Sport committee.
(The Guardian has provided a summary of salient points from his evidence, given via video link.)
CA wasn’t an anomaly, Parakilas said. Other firms have likely exploited the same terms as CA did, taking advantage of the fact that Facebook was hands-off. His impression was that Facebook feared that an investigation into all the unvetted developers who’d been given access to its servers, and to the user data on them, could lead to liability over policies or laws being broken in data breaches.
If Facebook didn’t know, if it didn’t investigate what developers were up to, it could then claim that it was just a platform and hence not liable, Parakilas said. That sounds familiar: the “we’re just a platform, not a publisher” refrain was heard repeatedly during at least the start of the furor over fake news being disseminated on Facebook.
He had warned execs that this lax approach to security and utter lack of developer oversight could lead to a major breach like the CA one that exploded over the weekend, Parakilas told The Guardian:
My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data.
When asked about what kind of control Facebook had over data accessed by outside developers, Parakilas said “none”:
Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.
In fact, Parakilas always assumed “there was something of a black market” for Facebook data passed to external developers. When he told other execs that the company should proactively “audit developers directly and see what’s going on with the data,” execs told him to back off: he probably wouldn’t like what he’d see if he overturned that rock.
The gist of one Facebook executive’s response, he said:
Do you really want to see what you’ll find?
Parakilas’s interpretation of the comment:
Facebook was in a stronger legal position if it didn’t know about the abuse that was happening.
They felt that it was better not to know. I found that utterly shocking and horrifying.
Parakilas’ attempts to warn Facebook included a PowerPoint presentation he gave to senior execs in 2012, containing what he said was “a map of the vulnerabilities for user data on Facebook’s platform”.
I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data. On the list of bad actors I included foreign state actors and data brokers.
Fed up with the lack of change, he left the company in 2012. He kept his concerns to himself after that, but that changed once Facebook lawyers gave testimony to the US Congress late last year about Russia’s attempt to sway the 2016 presidential election.
They treated it like a PR exercise. They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.
In November, he wrote a scathing opinion piece published by the New York Times. In it, he said that Facebook doesn’t give a flying FarmVille fig about protecting users from abuse.
What it cares about is collecting data it can sell to advertisers, he said in the op-ed, and the company can’t be trusted to regulate itself:
What I saw from the inside was a company that prioritized data collection from its users over protecting them from abuse. As the world contemplates what to do about Facebook in the wake of its role in Russia’s election meddling, it must consider this history. Lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t.
Speaking of FarmVille, all of this is of course relevant to those of us who’ve taken Facebook quizzes or played games hosted on the platform. But you needn’t have done so yourself: if your friends played games or took personality quizzes such as thisisyourdigitallife, the test that fed CA, your data was more than likely pulled in and used without your permission – for example, to do what CA did, which was to create profiles of individual US voters in order to target them with personalized political ads.
That was enabled by a since-discontinued feature called friends permission. From 2007 until 2014, Facebook allowed developers to access the personal data of friends of people who used apps on the platform, without those friends’ knowledge or express consent (a sketch of how that access worked appears below). Parakilas doesn’t know how many developers got access to friends permission data before Facebook cut the umbilical cord, but he told The Guardian that he believes it’s in the tens, or maybe even hundreds, of thousands of developers.
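To make the mechanics concrete, here’s a minimal, hypothetical Python sketch of what friends permission access looked like under the long-retired Graph API v1.0. The endpoint and permission names match the pre-2014 API, but the token and field list are illustrative placeholders, and none of this works against today’s API:

```python
import requests

GRAPH = "https://graph.facebook.com"  # pre-v2.0 calls needed no version prefix
USER_TOKEN = "EAAB..."                # hypothetical token from one consenting app user

# One call, authorized by that single user, could return profile fields
# for all of their friends: people who never installed the app at all.
resp = requests.get(
    f"{GRAPH}/me/friends",
    params={
        "access_token": USER_TOKEN,
        # Each friends_* field required a matching extended permission
        # (friends_birthday, friends_likes, ...), granted in the installing
        # user's consent dialog, never the friends'.
        "fields": "id,name,birthday,location,likes",
    },
)
for friend in resp.json().get("data", []):
    print(friend["id"], friend.get("name"), friend.get("birthday"))
```

One consenting user with a few hundred friends multiplied an app’s reach by roughly that factor, which is how a quiz taken by a comparatively small number of people could yield data on tens of millions.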
Facebook got a 30% cut of transactions made through those games and apps, so, obviously, it had little financial incentive to cut them off.
Meanwhile, Facebook CEO Mark Zuckerberg has apologized for this “breach of trust.” That breach has gone on for years, mind you: CA’s misuse of user data was discovered in 2015, yet it took until Friday for Facebook to suspend the firm, its parent company, and co-founder-turned-whistleblower Wylie from the platform.
Zuckerberg said in a Facebook post on Wednesday that he’s “working to understand exactly what happened and how to make sure this doesn’t happen again.”
Some of the steps he’s pledged include investigating all apps that had access to “large amounts of data” before the platform reduced access in 2014. He said Facebook will conduct a “full audit” of any app found to be engaged in suspicious activity. Developers who misused personally identifiable information (PII) will be banned, and affected users will be notified – including those whose information was used by CA.
Developers’ access will also be pulled back further. For example, if you haven’t used an app within three months, its access to your data will be cut. Also, the data given to apps when you sign in will be limited to your name, profile photo, and email address, as in the sketch below.
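For comparison, here’s a minimal sketch, again in Python, of what a Facebook Login request looks like when it can only ask for that reduced sign-in data. The app ID and callback URL are hypothetical placeholders; the OAuth dialog URL and scope names are Facebook Login’s standard ones:

```python
from urllib.parse import urlencode

APP_ID = "1234567890"                      # hypothetical app ID
REDIRECT_URI = "https://example.com/auth"  # hypothetical OAuth callback

# Under the tightened defaults, sign-in grants an app only public_profile
# (name, profile photo) and email -- the fields described above.
login_url = "https://www.facebook.com/dialog/oauth?" + urlencode({
    "client_id": APP_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "public_profile,email",
})
print(login_url)  # send the user here; the returned token covers only these scopes
```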
Facebook is also going to un-bury the tool that lets you see and edit which apps have access to your data: it’s moving to the top of the News Feed.
Fine. But that leaves many unanswered questions about Facebook’s lack of developer vetting; its failure to conduct even a single audit of developers, at least during Parakilas’ tenure; and its failure to verify that CA had deleted the data when it claimed to.
One thing to bear in mind is that during his time at Facebook, Parakilas said, the company was gung-ho to sign up developers to its platform, and access to this valuable user data was one carrot it dangled. In fact, he says that at least at the beginning of his tenure, he was told that in order to ban any developer’s app, he had to get Zuckerberg’s personal approval.
If you don’t trust Facebook as far as you can throw it, none of its proposed steps to protect user data will likely satisfy you. Check out this article for more ways to protect your data, up to and including deleting your Facebook profile entirely.
Is it time? Even Facebook admitted recently that social media can be bad for you.
We knew it was bad, but this bad?
Please do let us know if you’re going to pull the plug.