“It’s raining! I need a ride!” somebody might wail on Messenger.
“Oh yea? I just got out of a taxi!” a friend might respond.
Wouldn’t it be nice (and, of course, revenue-producing) if Facebook’s algorithms could understand that the ride-needer needs a taxi, that he would probably say yes if Messenger prompted him to connect with Uber, and that his friend does not need a taxi since she just got out of one?
It can.
That’s exactly the scenario that Facebook has trained its artificial intelligence (AI) language-processing system to handle. Facebook announced its newest AI system, called DeepText, on Wednesday.
Facebook says that DeepText is a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of thousands of posts per second, spanning more than 20 languages.
Why deep learning?
For a machine to understand text, it has to figure out many things. There’s the general classification – Facebook gives the example of a post about basketball – but there’s also the understanding of meaningful contextual information, like players’ names or stats from a game.
To approximate how humans interpret text, you also have to teach a machine to understand things like slang and disambiguation. For example, when you post about how much you like blackberry, are you talking about the fruit or the device?
Meanwhile, it all has to scale: TechCrunch reports that there are some 400,000 new stories and 125,000 comments on public posts shared every minute on Facebook.
DeepText, the tool to turn all those globs of text into something intelligible, is based on deep learning. Put in terms of Wikipedia-ese, that’s a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations.
One example of the use of deep learning: In traditional natural language processing, words are converted into a format that a computer algorithm can learn. The word “brother” might be assigned an integer ID such as 4598, while the word “bro” becomes another integer, like 986665. But that only works if each word is spelled exactly right.
Deep learning instead uses “word embeddings,” a mathematical representation that preserves the semantic relationships among words. That enables a computer to recognize that “brother” and “bro” are actually pretty close in that space, and it allows for capture of deeper semantic meaning.
Word embeddings can actually span languages to understand semantics. For example, “happy birthday” and “feliz cumpleaños” should be very close to each other in the common embedding space, Facebook explains. DeepText can therefore map words and phrases into a common embedding space to build models that are language-agnostic.
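A minimal sketch of the idea, using made-up vectors rather than trained ones: each word (or phrase) maps to a point in a shared space, and cosine similarity measures how close two points are. Near-synonyms, even across languages, land near each other; unrelated words don’t.

```python
import math

# The vectors below are hand-picked for illustration, not learned embeddings.
embeddings = {
    "brother":          [0.90, 0.80, 0.10],
    "bro":              [0.85, 0.75, 0.15],
    "zucchini":         [0.10, 0.20, 0.90],
    "happy birthday":   [0.70, 0.10, 0.60],
    "feliz cumpleanos": [0.72, 0.12, 0.58],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means nearly the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["brother"], embeddings["bro"]))       # ~0.999: near-synonyms
print(cosine(embeddings["brother"], embeddings["zucchini"]))  # ~0.30: unrelated
print(cosine(embeddings["happy birthday"],
             embeddings["feliz cumpleanos"]))                 # ~0.999: across languages
```

In a real system the vectors are learned from huge amounts of text, but the geometry works the same way: “happy birthday” and “feliz cumpleaños” end up close together, so a model trained in one language can generalize to another.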
From Facebook’s announcement:
The community on Facebook is truly global, so it’s important for DeepText to understand as many languages as possible. Traditional NLP techniques require extensive preprocessing logic built on intricate engineering and language knowledge. There are also variations within each language, as people use slang and different spellings to communicate the same idea.
Using deep learning, we can reduce the reliance on language-dependent knowledge, as the system can learn from text with no or little preprocessing. This helps us span multiple languages quickly, with minimal engineering effort.
For a deeper dive – no pun intended – into the deep learning ideas that DeepText was built on, check out one of the papers from Facebook AI Research’s Ronan Collobert and Yann LeCun.
This is all to help us out
Facebook is already using DeepText models to push people toward using Facebook tools that might suit their purpose better than a plain old post. One example: someone might want to sell their bike for $200.
DeepText is able to detect that the post is about selling something, can extract information such as what’s being sold, along with its price, and then prompt the user to use specific tools built for that purpose to make the transaction easier to do on Facebook.
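A hypothetical, heavily simplified sketch of that kind of extraction (DeepText itself uses learned models, not keyword rules; the function and patterns below are invented for illustration):

```python
import re

# Crude stand-in for intent detection and entity extraction: spot a
# "selling" signal in the text and pull out the price, if any.
SELL_WORDS = ("sell", "selling", "for sale")
PRICE_RE = re.compile(r"\$\s?(\d+(?:\.\d{2})?)")

def parse_sale_post(text):
    """Return a dict of extracted sale details, or None if no sale intent."""
    lowered = text.lower()
    if not any(word in lowered for word in SELL_WORDS):
        return None
    match = PRICE_RE.search(text)
    price = float(match.group(1)) if match else None
    return {"intent": "sell", "price": price}

print(parse_sale_post("Selling my bike for $200, barely used"))
# {'intent': 'sell', 'price': 200.0}
print(parse_sale_post("Went for a bike ride today"))
# None
```

The appeal of the deep learning approach is precisely that it doesn’t rely on hand-written rules like these, which break on slang, typos, and other languages.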
And yes, of course it’s not altruism: pumping up engagement with the platform is Facebook’s bread and butter.
Beyond understanding posts better so as to puzzle out intent, sentiment, and entities (e.g., people, places, events), Facebook can also figure out what it calls mixed content: i.e., images combined with text. That enables it to promote better quality content and demote a whole lot of fluff, such as the squealing “I LOVE YOU!!!” posts on celebrities’ pages.
It gives another example: somebody with a new baby posts a photo with the accompanying text “Day 25.” Working with its visual content understanding teams, it’s better able to figure out that this is clearly a family-oriented post.
Then again, it can also automate the removal of junk, like spam.
It’s about surfacing the content we want to see, Facebook says, and drowning out the content we can’t stand.
If Facebook’s human-like ability to understand your every post gives you the creeps, that’s just one more reason to be careful about what you share. Even if you think you’re just sharing with friends, keep in mind that Facebook’s always listening in.
We’ve got tips on making your account safer, and you can always join us on Facebook to keep tabs on what the social media megalith is up to.
And yes, of course Facebook will hear, and understand, pretty much all of it!
Reader
“If Facebook’s human-like ability to understand your every post gives you the creeps, that’s just one more reason to be careful about what you share. Even if you think you’re just sharing with friends, keep in mind that Facebook’s always listening in.”
Deeptext apparently makes the above advice more apt than ever.
Yikes. Scary stuff. How does Deeptext distinguish something a person wrote or said in jest from something (s)he wrote or said in earnest? Do qualifiers (e.g., “just kidding” or “lol”) help Deeptext to avoid misinterpreting a person’s intent?
Steve
This could be rather frightening, but there is still hope of one thing that could throw the proverbial monkey wrench into the works: AUTOCORRECT!
Bob Smith (@DingleBob)
Hmmm….time to update my search engine poison pill for the FB platform. I created a small program that interacts with a web browser that selects random dictionary words and does a google search with random words. It waits a random number of seconds between searches, and varies the number of words that it chooses, so that it appears human. I didn’t know how effective it was until I got a google ad that said “Order a parthenogenetic skunk zucchini dingle in the market place and get free shipping.”
However, most other google ads presented to me at other sites are usually blank. I now leave this program running for days at a time.
I’m not blocking ads, they just don’t have the slightest clue what to sell me.
Tim
Wasn’t this the technology people were killing each other because of in the BBC drama “London Spy”? Scary indeed.
Mahhn
This spy on everyone/thing crap is way out of hand.
Anonymous
“Let’s eat Grandpa”
parse that facebook