Earlier this week, the lawyer representing Syrian refugee Anas Modamani challenged Facebook on whether the company had the technical capability to detect a specific selfie and prevent it from being spread further.
It all started with an innocent selfie, which was taken by Modamani in 2015 and features him standing next to a smiling German Chancellor, Angela Merkel:
Anas Modamani's selfie with Angela Merkel has appeared in several fake news reports. He's now suing Facebook. https://t.co/7NUe5TDVtq pic.twitter.com/DTYGukGhzN
— The New York Times (@nytimes) February 7, 2017
After the photo went viral, it began appearing alongside statements claiming, according to Gizmodo, that Modamani was a “terror suspect”. Meanwhile, a number of fake news reports on Facebook falsely linked him to terror attacks in Brussels and Berlin.
Facebook defended itself in court against claims that it does too little to counter abusive content on its platform. One of its lawyers claimed:
There are billions of postings each day. You want us to employ a sort of wonder machine to detect each misuse. Such a machine doesn’t exist.
Does such a machine exist?
As Forbes points out, Facebook is a technology company with a heavy investment in deep learning and filtering technologies. It has repeatedly been criticized by free-speech advocates for its aggressive content removal, with these concerns raised by cartoonist Jerm just the latest example. Forbes questions:
How much truth is there that the number of daily posts and limitations of current technology mean that it really is impossible for social media platforms to better enforce their bans on hate speech, harassment and threats of violence?
Let’s explore.
What is deep learning?
Before we begin, if you’re not already familiar with deep learning, it’s a branch of machine learning in which computers use algorithms to teach themselves to model abstract ideas. After feeding the computer a learning algorithm, programmers then train it using hundreds of thousands of images or speech samples.
This interesting Fortune blog post, which also uses the term “deep neural networks” since deep learning has its roots in neural networks, explains how the computer is then allowed to “figure out for itself how to recognize the desired objects, words, or sentence”.
The article also touches on the concepts of supervised and unsupervised learning, noting that most solutions today use supervised learning, where the computer is trained on labelled data. With unsupervised learning, however, it is simply asked to look for recurring patterns in unlabelled data.
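To make that distinction concrete, here is a deliberately tiny sketch of the two approaches in plain Python. Everything in it (the numbers, the labels, the algorithms) is illustrative only; real deep learning systems use neural networks over vast datasets, not per-class averages over a handful of points.

```python
# Illustrative sketch only: a toy contrast between supervised and
# unsupervised learning, in plain Python (no real deep learning here).

# --- Supervised learning: we are given labelled data ---
# Each sample is (feature, label); the "model" just learns a per-class mean.
labelled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.4, "dog")]

def train(samples):
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    # Assign the class whose learned mean is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

model = train(labelled)
print(predict(model, 1.1))  # lands near the "cat" mean

# --- Unsupervised learning: the same numbers, but no labels ---
# The algorithm can only look for structure, e.g. two clusters (1-D k-means).
unlabelled = [1.0, 1.2, 8.0, 8.4]
centres = [unlabelled[0], unlabelled[-1]]  # crude initialisation
for _ in range(10):
    groups = [[], []]
    for x in unlabelled:
        groups[0 if abs(x - centres[0]) < abs(x - centres[1]) else 1].append(x)
    centres = [sum(g) / len(g) for g in groups]
print(groups)  # two groups emerge, but with no names attached
```

The supervised model can answer "cat or dog?" because a human labelled the training data; the unsupervised one can only say "these points seem to belong together", which is why unsupervised learning producing meaningful concepts on its own remains the harder, unsolved problem.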
Unsupervised learning, it notes, still remains “uncracked”:
Researchers would love to master unsupervised learning one day because then machines could teach themselves about the world from vast stores of data that are unusable today.
Facebook and deep learning
Facebook is not unfamiliar with deep learning. An article posted on its website last June describes its DeepText technology as “a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages”.
According to an article in Motley Fool, Facebook plans to use this technology to match users with material that will be of interest to them. It reports that the technology will also:
help match users with advertisers, weed out prohibited content, rank search results and identify trending topics.
It seems that Facebook will soon have some technology ready to detect misuse, though we have yet to see how effective it will be. It still needs to be trained, and we don’t know whether it can learn well enough, and fast enough, to keep up with online abuse.
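To show what “training it to weed out prohibited content” means in the simplest possible terms, here is a toy supervised text classifier. The word counts, labels, and example posts are all invented for illustration; a DeepText-style system uses deep neural networks at vastly greater scale, not word tallies.

```python
# Toy sketch of supervised text classification (illustrative only;
# real systems like DeepText use deep neural networks, not word counts).
from collections import Counter

# Labelled training posts: the kind of data such a classifier learns from.
training = [
    ("you are a wonderful person", "ok"),
    ("have a great day everyone", "ok"),
    ("I will hurt you", "abusive"),
    ("you deserve to be attacked", "abusive"),
]

# Count how often each word appears under each label.
counts = {"ok": Counter(), "abusive": Counter()}
for text, label in training:
    counts[label].update(text.lower().split())

def classify(text):
    # Score each label by how often it has seen the post's words before.
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("I will attack you"))
```

Even this crude sketch hints at the real difficulty: the classifier is only as good as its labelled training data, and abusers constantly invent phrasings the model has never seen, which is why training well enough and fast enough is the open question.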
Amazon, Apple, Google, IBM, Microsoft and other technology big hitters also all have their own individual deep learning initiatives. Wouldn’t it be great if they could all pool resources to help fight online harassment?
Maybe they’ll do this under the Partnership on AI program. After all, they do all agree that
… artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.
With many eggheads focused on this rather than just one, let’s see if they can crack this growing problem.
FreedomISaMYTH
easy solution:
1. delete FB
2. enjoy the RL
Nobody_Holme
The tech to fingerprint videos and autodelete them for copyright reasons has existed for years, in large part thanks to the MPAA throwing money at it. It’s actually pretty cool tech. Facebook have in the past used it, although I have no idea if they currently have their own or a licence to someone else’s.
Regardless, when YouTube can run that on their video upload stream, Facebook can’t argue they can’t handle doing that to their image upload stream and think it’ll stand up in court…
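The fingerprinting approach this comment describes can be sketched in miniature with a “difference hash” (dHash), one common perceptual-hashing technique. Real pipelines first decode and shrink an actual image to a small grayscale grid; here the grids are hard-coded, purely for illustration, and none of this is claimed to be the specific system Facebook or YouTube uses.

```python
# Minimal perceptual-hash sketch (a "difference hash"): near-identical
# images produce near-identical bit strings, so re-uploaded copies can
# be matched even after recompression. Illustrative only.

def dhash(pixels):
    # pixels: rows of grayscale values. Each bit of the hash records
    # whether a pixel is brighter than its right-hand neighbour.
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    # Number of differing bits; a small distance suggests the same image.
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [90, 80, 70]]
copy     = [[11, 21, 31], [89, 79, 69]]   # same image, slightly re-encoded
other    = [[50, 10, 60], [10, 90, 10]]   # a different image

print(hamming(dhash(original), dhash(copy)))   # 0: a match
print(hamming(dhash(original), dhash(other)))  # larger: no match
```

Because the hash captures the brightness *pattern* rather than exact pixel values, the re-encoded copy still hashes identically, which is the core idea behind matching a known image, such as Modamani’s selfie, across an upload stream.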