
Facebook spars with researcher who says he found “Instagram’s Million Dollar Bug”

A spat erupted last week between Facebook and a security researcher who reported a vulnerability in the infrastructure behind its Instagram service.

After reporting the bug, Wesley Wineberg, a contractor for security company Synack, accused Facebook of threatening his job and trying to intimidate him.

Facebook says, well, a number of things: that Wineberg was one of several to discover the vulnerability, that the company thanked him and offered him $2500 (as is “standard”, it says), that Wineberg wanted more than that, and that the researcher then crossed the line of responsible, ethical bug reporting to “rummage” through data.

The starting payout for bugs in Facebook’s bounty program is $500.

In an extensive post about the situation, Facebook chief security officer Alex Stamos on Thursday wrote that Facebook offered to pay Wineberg $2500 “despite this not being the first report of this specific bug.”

Up to the point when Facebook offered him $2500, everything Wineberg did was “appropriate, ethical, and in the scope of our program,” Stamos says.

Both parties agree on one thing: from there, it went downhill fast.

The way Stamos tells it, Wineberg used the flaw to “rummage around” for useful information, which he found – in spades.

Wineberg on Thursday said in a post on his personal blog that he had found weaknesses in the Instagram infrastructure that allowed him to access source code for "fairly" recent versions of Instagram; SSL certificates and private keys for Instagram.com; keys used to sign authentication cookies; email server credentials; and keys for more than half a dozen other critical functions, including iOS and Android app signing keys, iOS push notification keys, and API keys for Twitter, Facebook, Flickr, Tumblr and Foursquare.

In addition, the researcher said he’d managed to access employee accounts and passwords (some of which he said were “extremely weak”), and had access to Amazon buckets storing user images and other data.

He hit the jackpot, Wineberg said, and not just any piddling $2500 payout’s worth.

In fact, his post was titled “Instagram’s Million Dollar Bug”: a reference to Facebook having said in the past that:

If there's a million-dollar bug, we will pay it out.

From Wineberg’s post:

To say that I had gained access to basically all of Instagram's secret key material would probably be a fair statement. With the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member. While out of scope, I would have easily been able to gain full access to any user's account, private pictures and data. It is unclear how easy it would be to use the information I gained to then compromise the underlying servers, but it definitely opened up a lot of opportunities.
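Why is a leaked cookie-signing key such a big deal? Many web frameworks sign session cookies with a single server-side secret, so anyone holding that secret can mint a cookie the server will accept for any account. Here is a rough, hypothetical sketch of the idea; the key, the user ID and the signing scheme below are illustrative stand-ins, not Instagram's actual implementation:

```python
import hmac
import hashlib

# Hypothetical signing secret - the kind of key Wineberg reported finding.
SECRET_KEY = b"leaked-signing-secret"

def sign_cookie(payload: str, key: bytes = SECRET_KEY) -> str:
    """Return 'payload.signature', the way many frameworks sign session cookies."""
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_cookie(cookie: str, key: bytes = SECRET_KEY) -> bool:
    """Server-side check: recompute the HMAC and compare in constant time."""
    payload, _, sig = cookie.rpartition(".")
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# With the secret in hand, an attacker can forge a cookie for any user ID,
# and the server's own verification logic will accept it.
forged = sign_cookie("user_id=12345;role=staff")
assert verify_cookie(forged)
print(forged)
```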

Between 21 October and 1 December, Wineberg would find what he believed were three different issues, which he reported in three installments.

They eventually led him to all those Instagram keys, but that raised warning flags at Facebook, which responded quite differently than it had to the initial bug report.

In fact, Stamos said, issues 2 and 3 were where Wineberg crossed the line.

He found Amazon Web Services (AWS) API keys that he used to access an Amazon Simple Storage Service (S3) bucket and download non-user Instagram technical and system data, Stamos said.

But this use of AWS keys is just “expected behavior”, Stamos said, and Wineberg should have kept his hands out of that cookie jar:

The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behavior by Wes.
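In other words, holding valid AWS keys is all it takes: reading from S3 with them is ordinary, documented usage, not an exploit in itself. Here is a minimal sketch of what possession of such keys allows, using the standard boto3 client; the credentials and bucket name below are placeholders, not the real values involved in this incident:

```python
import boto3

# Placeholder credentials and bucket name - stand-ins for the leaked values,
# not the real Instagram configuration.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...EXAMPLE",
    aws_secret_access_key="example-secret-key",
)

# Listing and downloading objects is ordinary, documented S3 usage;
# with valid keys, no further "exploit" is needed.
response = s3.list_objects_v2(Bucket="example-instagram-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("example-instagram-bucket", "some/object.tar.gz", "object.tar.gz")
```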

Wineberg mentioned publishing his findings. Facebook was not pleased.

So Stamos reached out to Jay Kaplan, the CEO of Synack – in spite of Wineberg doing all this on his own time, not on Synack’s dime – to tell him that writing up the initial bug was OK, but that exfiltrating data and calling it research was not OK.

That’s when Stamos dropped a reference to lawyers, saying that he “wanted to keep this out of the hands of the lawyers” but that he wasn’t sure if this was something he needed to go to law enforcement over.

This is what Stamos wanted, from Wineberg’s telling of it:

Wineberg says that he couldn’t find anything in Facebook’s responsible disclosure policy that specifically forbade what he’d done after he initially found the remote code execution (RCE) vulnerability.

What would have clarified matters, he said, would have been specificity along the lines of, say, Microsoft’s bug reporting policy, which explicitly prohibits “moving beyond ‘proof of concept’ repro steps for server-side execution issues (i.e. proving that you have sysadmin access with SQLi is acceptable, running xp_cmdshell is not).”
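To make that distinction concrete, here is a hypothetical sketch of the two behaviors Microsoft's wording separates, assuming a throwaway SQL Server test instance reachable via pyodbc: checking role membership demonstrates the impact of an injection without touching the host, while invoking xp_cmdshell runs operating-system commands and is exactly the "moving beyond proof of concept" step the policy rules out.

```python
import pyodbc

# Hypothetical connection details for a disposable SQL Server test instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=test-server;DATABASE=testdb;UID=tester;PWD=example"
)
cur = conn.cursor()

# Acceptable proof of concept per the quoted policy: show that the injected
# context runs with sysadmin rights, without executing anything on the host.
cur.execute("SELECT IS_SRVROLEMEMBER('sysadmin')")
print("sysadmin:", cur.fetchone()[0])

# Out of bounds: running OS commands via xp_cmdshell goes beyond demonstrating
# the flaw and starts exploiting it.
# cur.execute("EXEC xp_cmdshell 'whoami'")
```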

From his post:

Despite all efforts to follow Facebook's rules, I was now being threatened with legal and criminal charges, and it was all being done against my employer (who I work for as a contractor, not even an employee). If the company I worked for was not as understanding of security research I could have easily lost my job over this. I take threats of criminal charges extremely seriously, and so have already confirmed with legal counsel that my actions were completely lawful and within the requirements specified by Facebook's Whitehat program.

As of Friday afternoon, Stamos was still hashing it all out with commenters on his post, many of whom said that the “expected behavior” rationale for dismissing Wineberg’s findings was thin.

For his part, Stamos said that Facebook will look at making its policies more explicit and try to be clearer about what it considers ethical behavior.

But Facebook still doesn’t condone what Wineberg did.

From Stamos’s post:

Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk.

Readers, who do you think is in the right, here? Please share your thoughts in the comments section below.

