Facebook spars with researcher who says he found “Instagram’s Million Dollar Bug”

What started as a normal-enough bug report spiraled into talk of lawyers and reaching out over the researcher's head to drag his contract employer in.

A spat erupted last week between Facebook and a security researcher who reported a vulnerability in the infrastructure behind its Instagram service.

In the wake of having reported the bug, Wesley Wineberg, a contract employee of security company Synack, accused Facebook of trying to threaten his job and intimidate him.

Facebook says, well, a number of things: that Wineberg was one of several to discover the vulnerability, that the company thanked him and offered him $2500 (as is “standard”, it says), that Wineberg wanted more than that, and that the researcher then crossed the line of responsible, ethical bug reporting to “rummage” through data.

The starting payout for bugs in Facebook’s bounty program is $500.

In an extensive post about the situation, Facebook chief security officer Alex Stamos on Thursday wrote that Facebook offered to pay Wineberg $2500 “despite this not being the first report of this specific bug.”

Up to the point when Facebook offered him $2500, everything Wineberg did was “appropriate, ethical, and in the scope of our program,” Stamos says.

Both parties agree on one thing: from there, it went downhill fast.

The way Stamos tells it, Wineberg used the flaw to “rummage around” for useful information, which he found – in spades.

Wineberg on Thursday said in a post on his personal blog that he had found weaknesses in the Instagram infrastructure that allowed him to access source code for “fairly” recent versions of Instagram; SSL certificates and private keys for Instagram.com; keys used to sign authentication cookies; email server credentials; and keys for more than a half-dozen other critical functions, including iOS and Android app signing keys, iOS push notification keys, and API keys for Twitter, Facebook, Flickr, Tumblr and Foursquare.

In addition, the researcher said he’d managed to access employee accounts and passwords (some of which he said were “extremely weak”), and had access to Amazon buckets storing user images and other data.

He hit the jackpot, Wineberg said, and not just any piddling $2500 payout’s worth.

In fact, his post was titled “Instagram’s Million Dollar Bug”: a reference to Facebook having said in the past that:

If there's a million-dollar bug, we will pay it out.

From Wineberg’s post:

To say that I had gained access to basically all of Instagram's secret key material would probably be a fair statement. With the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member. While out of scope, I would have easily been able to gain full access to any user's account, private pictures and data. It is unclear how easy it would be to use the information I gained to then compromise the underlying servers, but it definitely opened up a lot of opportunities.
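To see why a leaked cookie-signing key alone carries that much power, consider a minimal sketch. This is not Wineberg’s actual method, and Instagram’s cookie format is not public; the generic HMAC-signed cookie scheme, the key, and the payload below are all hypothetical, chosen only to illustrate the impersonation claim:

    import hashlib
    import hmac

    # Hypothetical server-side signing secret of the kind Wineberg says he
    # recovered; anyone holding it can mint cookies the server will accept
    # as genuine.
    LEAKED_SIGNING_KEY = b"example-secret-from-config"

    def sign_cookie(payload: str, key: bytes) -> str:
        # Generic HMAC-SHA256 signed cookie: "<payload>.<hex signature>"
        sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    # With the key, an attacker can forge a session cookie naming any user
    # or staff member, and it will verify as authentic server-side.
    print(sign_cookie("user_id=12345;role=staff", LEAKED_SIGNING_KEY))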

Between 21 October and 1 December, Wineberg would find what he believed were three different issues, which he reported in three installments.

They eventually led him to all those Instagram keys, but that raised warning flags at Facebook, which responded quite differently than it had to the initial bug report.

In fact, Stamos said, issues 2 and 3 were where Wineberg crossed the line.

He found Amazon Web Services (AWS) API keys that he used to access an Amazon Simple Storage Service (S3) bucket and download non-user Instagram technical and system data, Stamos said.

But this use of AWS keys is just “expected behavior”, Stamos said, and Wineberg should have kept his hands out of that cookie jar:

The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behavior by Wes.
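For context on “expected behavior”: holding valid AWS credentials is, by design, enough to read any S3 bucket those credentials are authorized for; no further flaw is involved at that point. A minimal sketch using the boto3 SDK (the credentials and bucket name are placeholders, not Wineberg’s actual values):

    import boto3

    # Placeholder credentials standing in for the leaked AWS API keys.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIAEXAMPLEEXAMPLE",
        aws_secret_access_key="example-secret-key",
    )

    # With valid keys, enumerating and downloading objects is routine SDK
    # usage; this is the "expected behavior" Stamos describes.
    bucket = "example-instagram-bucket"
    for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        print(obj["Key"])
    s3.download_file(bucket, "some/object.tar.gz", "local-copy.tar.gz")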

Wineberg mentioned publishing his findings. Facebook was not pleased.

So Stamos reached out to Jay Kaplan, the CEO of Synack – in spite of Wineberg doing all this on his own time, not on Synack’s dime – to tell him that writing up the initial bug was OK, but that exfiltrating data and calling it research was not OK.

That’s when Stamos dropped a reference to lawyers, saying that he “wanted to keep this out of the hands of the lawyers” but that he wasn’t sure if this was something he needed to go to law enforcement over.

This is what Stamos wanted, from Wineberg’s telling of it:

  • Confirmation that he hadn’t made any vulnerability details public.
  • Deletion of all data retrieved from Instagram systems.
  • Confirmation that he hadn’t accessed any user data.
  • An agreement to keep all findings and interactions private, and not publish them at any point (contrary to Stamos’s assertion that Facebook was OK with Wineberg writing up the initial vulnerability).

Wineberg says that he couldn’t find anything in Facebook’s responsible disclosure policy that specifically forbade what he’d done after he initially found the remote code execution (RCE) vulnerability.

What would have clarified matters, he said, would have been specificity along the lines of, say, Microsoft’s bug reporting policy, which explicitly prohibits “moving beyond ‘proof of concept’ repro steps for server-side execution issues (i.e. proving that you have sysadmin access with SQLi is acceptable, running xp_cmdshell is not).”
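To make Microsoft’s line concrete, here is a sketch of the difference between proving a privilege and exercising it, written as SQL Server queries issued through a hypothetical pyodbc connection (the DSN is made up; the queries are the point, not the driver):

    import pyodbc

    # Hypothetical connection standing in for an injectable endpoint.
    conn = pyodbc.connect("DSN=example")
    cur = conn.cursor()

    # Acceptable proof of concept under Microsoft's policy: show that the
    # injected context holds sysadmin rights, without using them.
    cur.execute("SELECT IS_SRVROLEMEMBER('sysadmin');")
    print(cur.fetchone())  # (1,) demonstrates sysadmin access

    # Over the line under the same policy: executing OS commands on the
    # server, e.g. via xp_cmdshell. Shown only as a comment.
    # cur.execute("EXEC xp_cmdshell 'whoami';")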

From his post:

Despite all efforts to follow Facebook's rules, I was now being threatened with legal and criminal charges, and it was all being done against my employer (who I work for as a contractor, not even an employee). If the company I worked for was not as understanding of security research I could have easily lost my job over this. I take threats of criminal charges extremely seriously, and so have already confirmed with legal counsel that my actions were completely lawful and within the requirements specified by Facebook's Whitehat program.

As of Friday afternoon, Stamos was still hashing it all out with commenters on his post, many of whom said that the “expected behavior” rationale for dismissing Wineberg’s findings was thin.

For his part, Stamos said that Facebook will look at making its policies more explicit and try to be clearer about what it considers ethical behavior.

But Facebook still doesn’t condone what Wineberg did.

From Stamos’s post:

Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk.

Readers, who do you think is in the right, here? Please share your thoughts in the comments section below.

Image of bug courtesy of Shutterstock.com

16 Comments

Facebook should pay up a decent amount for a serious bug.

However, exfiltrating data seems wrong, especially if it was done only using a key discovered with the first vulnerability, and not even to demonstrate a new vulnerability. I get the feeling the follow-up actions were taken just to demonstrate the level of impact Instagram could have faced if the vulnerability had been exploited, and in doing so demonstrate that the $2500 was a token amount.

Even if they had just offered $50,000 the entire story would be different, and what is this amount to Facebook in the face of a potentially massive data breach? Cheapos.

Wineberg did demonstrate new vulnerabilities in his third finding. He lists 7 additional vulnerabilities in his blog (see link in the article above).

I am “sort of” on Facebook’s side here (excluding the idiotic legal threats), though they really should have made their bug bounty terms a lot clearer.

If I understand this correctly, here’s an analogy. Say you run a random web site, and you store user passwords in the clear in a database with no encryption or salting/hashing or anything. I, a security researcher, find a way to get into this database and get everybody’s passwords. You say, fine, that’s a security vulnerability, here’s a small bug bounty. I try to get more by pointing out how much I can do with user passwords, including trying them out on banking sites to see if some percentage of users reuse passwords, and I can withdraw all their money. That’s not really part of the bug the bounty is being paid for, though. Bounties are supposed to just be rewards for responsible disclosure, not based on what you could do with the vulnerability you’ve discovered.

That’s… not how any of this works. First off, your analogy is terrible. If a researcher takes passwords stored in the clear and then attempts to *use them on other sites*, that’s not looking for bugs in that system, that’s flat out attacking users. Which no security researcher in their right mind would agree is ethical.

Here’s the thing: bug bounties should be required for any company that stores Personally Identifiable Information (PII). Period. And the terms of the bug bounty should be pretty much “don’t steal anything, don’t violate user privacy, and don’t take anything down”. Because otherwise, how are companies going to know they’re vulnerable? If companies with dedicated red and blue teams like Facebook don’t catch it, what hope is there for smaller companies? If you store my data, I want to know it’s secure.

Do you honestly think the bad guys are going to stop based on bug bounty “rules”? If they want that data, they’ll get it. Better to have an ethical researcher find the hole and report it (which no one can argue happened in this case) than to have blackhats steal the data and sell it on the open market. You think the iCloud hack was bad? This would have made that a blip on the radar if it had been discovered by someone wearing a darker colored hat.

He didn’t say he would use the passwords to do that. He said he would argue that the vulnerability of getting user passwords was not a “small” issue since it could trivially lead to so many other privacy or financial intrusions, and so should result in a larger bounty. It is fair to make that argument without acting on it.

For example, if someone offers to pay a small bounty because all you got was one password for their system, but it was the “root” password, then they are not fairly paying out the value of that bug.

This is like a “transgender toilet access” argument… :) where the only viable argument is “intent”, or at the least ethics. Facebook “owned” the weakness in the code, but gave an invitation to storm the castle, meanwhile not expecting anyone to make it all the way to the king’s throne. Fire Stamos and give Wineberg his job… seems somebody actually knows how to work for their money and get the job done.

To add another thought… why is it “ethical” for Facebook to promote a bug bounty when eventual access can lead to the breach of critical (non-public) data, and then to approach a non-entity in the situation to simulate a financial threat to the party to whom Facebook gave permission to find a way to create the breach in the first place?
Here’s the problem on Facebook’s part… they admit others reported the bug beforehand, but then they didn’t fix the bug immediately, and it seems their own security team had no clue, or at least didn’t report it, that someone had been eating their porridge and sleeping in their bed.

This guy has stepped over the line in my opinion. He found a serious security flaw, but then began exploiting it.

He is trying to blackmail Facebook into paying out more.

$2,500, not $2500. Secondly, Facebook has no room to preach to others about “following the rules” when it refuses to enforce its own.

It is really terrible that FBook is going to go broke because they might have to pay the man who took their challenge and succeeded. FBook is a fine institution and will be missed.

It’s like arguing that once you find your neighbor’s door unlocked and call to tell him, you now feel free to enter his house and go through his stuff because he didn’t offer to reward you and didn’t seem that worried about it.

While you can argue or lobby against the amount paid out, it is ultimately up to the company how much they want to pay up, and taking rogue actions when you don’t like the amount they gave you is just immoral and unacceptable. The agreement not to prosecute when a hack is found is balanced by the morals of how you handle the information once you find it, plain and simple.

Using the hack to actually perform additional breaches is a criminal act. There is no fair justification for doing it.

Seriously? Facebook should have taken the time to protect its house. Crying about a researcher gaining access to things he shouldn’t have is shifting the blame from the responsible party… Stamos himself. I find it appalling that the people charged with protecting data punish those who point out flaws.

At least this time Facebook had an open, communicative white hat to berate. What about the Ukrainian NSA plant (or proxy) that has the same access but hasn’t spoken up?

The issue here is that $2500 is not enough: $500 is a joke, and $2500 is not much better. The bug had the potential to affect the company financially to the tune of millions. A multi-billion-dollar company shouldn’t be such a tightarse.

I find it more unethical that the CISO of a company as large as Facebook decided to contact the CEO of the person’s employer when it had nothing to do with that company at all. It would be one thing if he had been working on some sort of project at work that related to the security of Instagram/Facebook, but this was completely on his personal time. Did Stamos call up his parents afterwards to tell them their son was getting into trouble?
