Wondering whether your boyfriend really, truly did delete that photo of you naked, wearing a sports championship medal, as he said he would? (He didn't, in the case of the Richmond Football Club's Nathan Broad, who told a young woman he'd delete the image of her with the sports memorabilia on her bare chest, then shared it instead.)
Facebook wants you to stop worrying about your nudes being shared without your consent like that. It wants you to get to that worry-free state by sending it your nude photos.
WHAAAA????
Stop, breathe. It actually makes sense: Facebook hasn’t given much detail, but from what little has been shared it sounds like it’s planning to use hashes of our nude images, just like law enforcement uses hashes of known child abuse imagery.
A hash is created by feeding a photo into a hashing function. What comes out the other end is a digital fingerprint that looks like a short jumble of letters and numbers. You can’t turn the hash back into the photo but the same photo, or identical copies of it, will always create the same hash.
So, a hash of your most intimate picture is no more revealing than this:
48008908c31b9c8f8ba6bf2a4a283f29c15309b1
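A 40-character string like that is what you'd get from a hash function such as SHA-1. As a minimal sketch of the idea (the filename is ours, and real services may well use other hash functions), here's how you'd compute one in Python:

```python
import hashlib

def photo_hash(path: str) -> str:
    """Return the SHA-1 digest of a file's bytes as a hex string."""
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so a large photo needn't fit in memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            sha1.update(chunk)
    return sha1.hexdigest()

# An identical copy of the file always yields the same digest;
# changing even a single byte yields a completely different one.
print(photo_hash("photo.jpg"))  # hypothetical filename
```

Note that an exact-match hash like this breaks as soon as the image is re-saved at a different size or quality, which is why image-matching systems tend to rely on perceptual hashes instead.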
Since 2008, the National Center for Missing & Exploited Children (NCMEC) has made available a list of hash values for known child sexual abuse images, provided by ISPs. The list enables companies to check large volumes of files for matches without themselves keeping copies of the offending images or actually prying open people's private messages.
The hash originally used to create unique file identifiers was MD5, but Microsoft at one point donated its own PhotoDNA technology to the effort.
PhotoDNA creates a unique signature for an image by converting it to black and white, resizing it, and breaking it into a grid. In each grid cell, the technology computes a histogram of intensity gradients, or edges, and from those features it derives the image's so-called DNA. Images with similar DNA can then be matched.
Given that the amount of data in the DNA is small, large data sets can be scanned quickly, enabling companies including Microsoft, Google, Verizon, Twitter, Facebook and Yahoo to find needles in haystacks and sniff out illegal child abuse imagery. It works even if the images have been resized or cropped.
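PhotoDNA itself is proprietary, so we can't show you its internals, but a toy perceptual signature along the lines just described might look like the following sketch. The grid size, normalized image dimensions and distance threshold are our own made-up values, and real PhotoDNA uses richer per-cell features than a single mean:

```python
import numpy as np
from PIL import Image

GRID = 6   # cells per side; a made-up value
SIZE = 96  # normalized image size; must be divisible by GRID

def perceptual_signature(path: str) -> np.ndarray:
    """Grayscale, resize, split into a grid, and summarize edge strength per cell."""
    img = Image.open(path).convert("L").resize((SIZE, SIZE))
    pixels = np.asarray(img, dtype=float)
    # Intensity gradients along y and x, via simple finite differences.
    gy, gx = np.gradient(pixels)
    edges = np.hypot(gx, gy)
    # Carve the edge map into a GRID x GRID set of cells.
    cells = edges.reshape(GRID, SIZE // GRID, GRID, SIZE // GRID)
    # One number per cell: mean edge strength (real systems keep a histogram).
    return cells.mean(axis=(1, 3)).ravel()

def looks_like(a: np.ndarray, b: np.ndarray, threshold: float = 5.0) -> bool:
    """Near-duplicate images produce signatures that land close together."""
    return float(np.linalg.norm(a - b)) < threshold
```

Because the signature is computed on a normalized, resized grayscale copy, modest resizing or recompression barely moves it, which is the property that lets matching survive those edits.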
Mind you, we don’t know if that’s the technology Facebook’s planning to use. It’s announced a pilot program with four countries—the UK, the US, Australia and Canada—in which people will typically be advised to send the photos to themselves via Messenger.
Julie Inman Grant, Australia’s e-safety commissioner, whose office is working with Facebook, told ABC News in Australia that sending photos via Messenger would be enough to enable Facebook to take action to prevent any re-uploads, without the photo being stored or viewed by employees.
Facebook says that it won’t be storing nude pictures but will use photo-matching technology to tag the images after they’re sent via its encrypted Messenger service. Then, Inman Grant said, “if somebody tried to upload that same image, which would have the same digital footprint or hash value, it will be prevented from being uploaded”.
The scheme’s being trialed first in Australia and will soon be tested in Britain, the US and Canada. At present, Facebook users can report photos of themselves that have already been posted nonconsensually or maliciously. Once the images are flagged, Facebook’s in-house teams review them, using hashing to prevent them from being re-uploaded.
Under the pilot scheme, users can act preemptively by notifying safety organizations working with Facebook about specific photos.
True, initially, you do have to hand over the photo in question in order to create the hash. But after that, the hash will be able to help the online platform more or less instantly answer the question “Do I know that photo?”—and to block its reposting—without you having to send the photo again.
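Once that hash exists, the platform-side check is conceptually just a set lookup. Here's a deliberately oversimplified sketch (the names are ours, not Facebook's, and a production system would use a database and a perceptual match rather than exact string equality):

```python
# Hypothetical, heavily simplified platform-side blocklist.
blocked_hashes: set[str] = set()

def register_reported_image(image_hash: str) -> None:
    """Store only the hash of a reported image, never the image itself."""
    blocked_hashes.add(image_hash)

def allow_upload(image_hash: str) -> bool:
    """Answer 'do I know that photo?' without ever seeing the photo again."""
    return image_hash not in blocked_hashes

register_reported_image("48008908c31b9c8f8ba6bf2a4a283f29c15309b1")
print(allow_upload("48008908c31b9c8f8ba6bf2a4a283f29c15309b1"))  # False: blocked
```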
We’d like to see a lot more detail from Facebook on this. For example, what safeguards are in place to ensure that people can’t take any old picture they want—a non-porn publicity photo, for example—and send it in, under the false premise that it’s a nude and that it’s a photo they themselves have the rights to have expunged from social media circulation?
The few details that have been revealed about this program look promising, but Facebook needs to put some flesh on its bones. If it responds to my questions, I’ll let you know.
Updates as of 2017-11-10:
Facebook has since confirmed how the pilot program works in a blog post.
Here’s how it works:
- Australians can file a report on the eSafety Commissioner’s official website.
- The eSafety Commissioner’s office notifies Facebook, but doesn’t have access to the photo.
- To identify the image to Facebook, people send it to themselves on Messenger.
- A member of Facebook’s Community Operations team reviews and hashes the image.
- Facebook stores the hash — not the photo — in its database to prevent future uploads.
- The person deletes the photo from Messenger and Facebook deletes it from its servers.
- Facebook prevents image uploads that match the stored hash from being posted or shared.
What the blog post doesn't specify, but which Motherboard has confirmed with a Facebook spokesperson, is that images sent for review aren't blurred out.
Facebook’s Chief Security Officer Alex Stamos tweeted:
https://twitter.com/alexstamos/status/928741990614827008