Another whistleblower, this time a former Facebook insider, has told British MPs that covert, “utterly horrifying” data harvesting has been routine at the platform, and that hundreds of millions of Facebook users were likely affected by apps like the one Cambridge Analytica (CA) got data from. Facebook, he said, has a history of burying its head in the sand, likely frightened of being found liable for what it’s enabled developers to do with user data.
The first whistleblower was CA co-founder Christopher Wylie, who worked with Cambridge University academic Aleksandr Kogan to obtain the data used to build a tool for profiling voters and influencing the 2016 US presidential election and the Brexit campaign. Kogan has been linked to previously undisclosed Russian affiliations.
The latest whistleblower is Sandy Parakilas, the platform operations manager responsible for policing data breaches by third-party software developers at Facebook between 2011 and 2012. He told The Guardian that he’s disheartened by recent disclosures about how the company Global Science Research scraped tens of millions of Facebook profiles and handed the data to CA, a voter-profiling company whose “psychographic profiling” was funded and used by conservative investors.
Facebook got itself into this mess, Parakilas told British MPs on Wednesday. He tried to warn the company: in fact, he shared his concerns with Facebook execs who were “among the top five” in the company, he said while giving evidence to Parliament’s Digital, Culture, Media and Sport committee.
(The Guardian has provided a summary of salient points from his evidence, given via video link.)
CA wasn’t an anomaly, Parakilas said. Other firms have likely exploited the same terms that CA did, taking advantage of Facebook’s hands-off approach. Parakilas got the impression that Facebook feared that an investigation into all the unvetted developers who’d been given access to Facebook servers, and the user data therein, could expose it to liability for policies or laws broken in data breaches.
If Facebook didn’t know, if it didn’t investigate what developers were up to, it could then claim that it was just a platform and hence not liable, Parakilas said. That sounds familiar: the “we’re just a platform, not a publisher” refrain was heard repeatedly during at least the start of the furor over fake news being disseminated on Facebook.
He had warned execs that this lax approach to security and utter lack of developer oversight could lead to a major breach like the CA one that exploded over the weekend, Parakilas told The Guardian:
My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data.
When asked about what kind of control Facebook had over data accessed by outside developers, Parakilas said “none”:
Zero. Absolutely none. Once the data left Facebook servers there was not any control, and there was no insight into what was going on.
In fact, Parakilas always assumed “there was something of a black market” for Facebook data passed to external developers. When he told other execs that the company should proactively “audit developers directly and see what’s going on with the data,” execs told him to back off: he probably wouldn’t like what he’d see if he overturned that rock.
The gist of one Facebook executive’s response, he said:
Do you really want to see what you’ll find?
Parakilas’s interpretation of the comment:
Facebook was in a stronger legal position if it didn’t know about the abuse that was happening.
They felt that it was better not to know. I found that utterly shocking and horrifying.
Parakilas’s attempts to warn Facebook included a PowerPoint presentation he gave to senior execs in 2012, featuring what he said was “a map of the vulnerabilities for user data on Facebook’s platform”.
I included the protective measures that we had tried to put in place, where we were exposed, and the kinds of bad actors who might do malicious things with the data. On the list of bad actors I included foreign state actors and data brokers.
Fed up with the lack of change, he left the company in 2012 and kept his concerns to himself. That changed once Facebook lawyers testified to the US Congress late last year about Russia’s attempts to sway the 2016 presidential election.
They treated it like a PR exercise. They seemed to be entirely focused on limiting their liability and exposure rather than helping the country address a national security issue.
In November, he wrote a scathing opinion piece published by the New York Times. In it, he said that Facebook doesn’t give a flying FarmVille fig about protecting users from abuse.
What it cares about is collecting data it can sell to advertisers, he said in the op-ed, and the company can’t be trusted to regulate itself:
What I saw from the inside was a company that prioritized data collection from its users over protecting them from abuse. As the world contemplates what to do about Facebook in the wake of its role in Russia’s election meddling, it must consider this history. Lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t.
Speaking of FarmVille, all of this is of course relevant to those of us who’ve taken Facebook quizzes or played games hosted on the platform. But you needn’t have done so yourself: if your friends played games or took personality quizzes like CA’s thisisyourdigitallife personality test, your data was more than likely pulled in and used without your permission. That’s exactly what CA did: it created profiles of individual US voters in order to target them with personalized political ads.
That was enabled by a since-discontinued feature called friends permission. From 2007 until 2014, Facebook allowed developers to access the personal data of friends of people who used apps on the platform, without those friends’ knowledge or express consent. Parakilas doesn’t know how many developers got access to friends permission data before Facebook cut the umbilical cord, but he told The Guardian that he believes it’s in the tens, or maybe even hundreds, of thousands of developers.
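To make the mechanics concrete, here’s a rough sketch of what a friends permission data pull could have looked like under the old Graph API v1.0, which Facebook retired in 2015. The endpoint, field names, and permission scopes shown are an illustrative reconstruction, not a verbatim record of any developer’s code:

```python
# Illustrative sketch of a pre-2015 Graph API v1.0 request.
# With the access token of ONE consenting app user (granted
# scopes such as friends_likes and friends_birthday), an app
# could pull profile details for ALL of that user's friends
# in a single call - friends who never installed the app.
import requests

ACCESS_TOKEN = "EAAB..."  # token from a single consenting user

resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        # Field expansion: request each friend's details too.
        "fields": "id,name,birthday,location,likes",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()

for friend in resp.json().get("data", []):
    # None of these people authorized this app directly.
    print(friend.get("name"), friend.get("birthday"))
```

One consenting user could thus expose hundreds of friends, which is how an app reportedly installed by roughly 270,000 people yielded data on tens of millions.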
Facebook got a 30% cut of transactions made through those games and apps. Obviously, there was little financial incentive for it to cut them off.
Meanwhile, Facebook CEO Mark Zuckerberg has apologized for this “breach of trust.” That breach has gone on for years, mind you: CA’s misuse of user data was discovered in 2015, yet it took until Friday for Facebook to suspend the firm, its parent company, and co-founder-turned-whistleblower Wylie from accessing the platform.
Zuckerberg said in a Facebook post on Wednesday that he’s “working to understand exactly what happened and how to make sure this doesn’t happen again.”
Some of the steps he’s pledged include investigating all apps that had access to “large amounts of data” before the platform reduced access in 2014. He said Facebook will conduct a “full audit” of any app found to be engaged in suspicious activity. Developers who misused personally identifiable information (PII) will be banned, and affected users will be notified, including those whose information was used by CA.
Developers’ access will also be pulled back further. For example, if you haven’t used an app within three months, that developer’s access to your data will be cut off. Also, the data handed to an app when you sign in will be limited to your name, profile photo, and email address.
Facebook is also going to un-bury the tool that lets you see and edit which apps have access to your data: it’s moving to the top of the News Feed.
Fine. But it leaves many unanswered questions about Facebook’s lack of developer vetting; its failure to conduct even a single audit of developers, at least during Parakilas’s tenure; and its failure to verify that CA had deleted the data when it claimed to.
One thing to bear in mind: during his time at Facebook, Parakilas said, the company was gung-ho about signing up developers to its platform, and access to this valuable user data was one carrot it dangled. In fact, he says that, at least at the beginning of his tenure, banning any developer’s app required Zuckerberg’s personal approval.
If you don’t trust Facebook as far as you can throw it, none of its proposed steps to protect user data will likely satisfy. Check out this article for more ways to protect your data, up to and including deleting your Facebook profile entirely.
Is it time? Even Facebook admitted recently that social media can be bad for you.
We knew it was bad, but this bad?
Please do let us know if you’re going to pull the plug.
Coleen S.
I deleted my account, along with Instagram, prior to hearing Zuckerberg’s response – he appears more afraid of losing money than trust, and may have knowingly violated his 2011 agreement with the FTC.
James
Zuckerberg sells people’s private data to corporations and gets rich. Julian Assange gives away private data on corporations to people for free and has to seek asylum in an embassy.
Mahhn
As if there was any other reason for FB to be in business. If you didn’t know you were the product, you’re just a fool. Same with free email – yeah, you have Gmail, or Yahoo, or Outlook/Hotmail – you gave them permission to see all your content for the sole purpose of marketing. Yes you did. I read the agreements; you should too.
Marty
I never signed up for an account on social media the likes of Facebook, Instagram, Twitter… Today I live with the strict choice I made for myself: never let data-hungry corporations nibble at my private life, and go without their services in return. It’s not always easy – but is that so much of a loss? Over the last few months it’s become increasingly satisfying to have gone this direction.
KathyB
Imagine what would happen if Facebook were a financial institution. It would be a huge nightmare for the FI, regulators and law enforcement.
I deleted my FB and Instagram accounts on Tuesday and seriously don’t miss either one bit after 10 years of use. Yes, the cat is already out of the bag and they have my data regardless; however, they’ll no longer be able to scrape new data from my profile.
At the end of the day it’s about personal risk tolerance, and my cybersec brain conjured up all kinds of scary scenarios based not only on FB data but on data from other breaches. Parse it all together and YIKES!
H.
From the beginning it was clear that FB is all about selling the data it collects, without any anonymization. Quite apart from what’s known about Zuck himself, reading the terms and conditions shouldn’t have left you unaware of what was going to happen. Then again, there were plenty of (predictable) bugs that revealed data otherwise protected by privacy settings. But none of that is really the point.
If you tried to gather all available information about the people you’d consider friends, whether or not they’re on FB, you’d get an impression of what happened when FB collected that data and combined it with data on people who don’t even use FB. So FB could be seen as a major spy operation (besides the ones founded by a certain government) on any available personal data, with huge resources made available by its unaware users. No sane law (not even the GDPR) can forbid unaware users from handing out personal data that will be collected and combined, but we are now able to forbid companies from collecting data on users without their consent, and to hold them responsible when they do, and that makes a huge difference in this case. This won’t solve all of the problems, but there will be a chance to hold someone responsible for irresponsibility.
chi
Even if you never created a FB account, they use what friends and family have to build a profile on you. I remember when I first created my account: I wasn’t even done putting in all my info and they were already suggesting family members as friends. The beginning stages of Facebook were way better than what it has turned into.
Stevansky
No one complained when the Obama campaign did the same thing, setting off internal alarms and prompting a meeting with Zuck in which it was said that “if it had been anyone else they would have stopped them.” Though they claim their targets opted in, the same can’t be said for the millions of friends, and friends of friends, who were harvested in their mining. Fast-forward to now, and suddenly it’s a “problem”. If it’s a problem now, it was a problem then.