Fake news: Mozilla joins the fight to stop it polluting the web

The fight against fake news has a new participant: Mozilla. The organization, which wants to keep the internet a healthy public resource, has announced its Mozilla Information Trust Initiative (MITI), which is a multi-pronged effort to keep the internet credible.

We should pronounce MITI “mighty”, according to Phillip Smith, Mozilla senior fellow for media, misinformation and trust. He explains that Mozilla started this initiative because fake news is threatening the internet ecosystem which Mozilla’s manifesto has vowed to protect.

Ecosystems can withstand some pollution, he says, which is just as well because all ecosystems have some of it. Eventually, though, the pollution reaches a tipping point. For Smith, the internet is an ecosystem, and fake information is the pollutant. He says:

The question we’re asking at Mozilla is whether it’s reaching a point where it risks tripping a positive feedback loop that’s no longer sustainable.

A multi-faceted approach

MITI will tackle fake news in several ways. It will work on products that target misinformation, both on its own and with media organizations. It will research the spread and effects of fake news (expect some reports soon), and it will host “creative interventions” that highlight the spread of misinformation in interesting ways. Mozilla gives the example of an augmented reality app that uses data visualization to show how fake news affects internet health.

Fake news has been a problem for years, but it has surfaced far more visibly of late. That’s in large part because of the 2016 US presidential election, says Smith.

There are big questions about the role this new form of online disinformation potentially played in influencing people’s opinions during a very important and divisive US election.

Tackling fake news is a daunting task with several distinct challenges. One of them is its sheer volume. “It’s an asymmetrical problem,” says Smith. “Fake information is produced in exponentially larger quantities than debunks can be produced.”

Another is speed. Fake news spreads like wildfire, making it around the world with just a few thoughtless clicks. Research shows that it takes far longer – between 13 and 14 hours after a fake story first appears – to stamp it out.

There have been different attempts to solve the problem. Some sites position themselves as “debunk hubs” – go-to authoritative voices for debunking fake news. Snopes, the grandma of all debunking sites, has been doing this for two decades. In India, Check4Spam is trying to halt the spread of fake news via WhatsApp. BuzzFeed launched Debunk in an attempt to out-virus viral falsehoods with stories correcting them.

Automating the fake news fight

Other organizations, already acting as fact-check hubs, are aiming for more automation. A tool from UK fact-checking organization Full Fact promises to scan newspaper headlines, live TV subtitles and parliamentary broadcasts for statements that match its existing database of facts. The goal is to debunk or confirm statements in real time. Representatives have likened it to an immune system for online fakery.
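
What might that matching look like in practice? Below is a minimal sketch – not Full Fact’s implementation – of comparing an incoming statement against a database of already-checked claims using TF-IDF cosine similarity; the claims, verdicts and threshold are all invented for illustration.

```python
# A minimal sketch of real-time claim matching, in the spirit of tools like
# Full Fact's (this is NOT their implementation): compare an incoming
# statement against a database of already-checked claims using TF-IDF
# cosine similarity. The claims, verdicts and threshold are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of claims that have already been fact-checked.
checked_claims = {
    "The UK sends 350 million pounds a week to the EU": "debunked",
    "Unemployment fell by 10 percent last year": "confirmed",
}

vectorizer = TfidfVectorizer()
claim_matrix = vectorizer.fit_transform(checked_claims)  # dict iterates keys

def match_statement(statement, threshold=0.4):
    """Return (claim, verdict) for the closest checked claim, or None."""
    scores = cosine_similarity(vectorizer.transform([statement]), claim_matrix)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return None  # nothing in the database resembles this statement
    claim = list(checked_claims)[best]
    return claim, checked_claims[claim]

print(match_statement("We send 350 million pounds to the EU every week"))
```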

This idea of automated immunity has found traction among the hyperscale search engine and social media sites. With the enormous power they wield on the web, these players risk being infection vectors for fake news if they don’t become part of the solution.

Twitter seems behind the curve when it comes to fake news. It has reportedly been mulling the idea of a fake news tab, but has said little on the record, other than a mid-June blog post explaining that it’s working on detecting spammy bots.

Google has rolled out its own fact-checking tool for Google News internationally. Unlike Facebook, it isn’t relying on users to tag dodgy stories. Instead, its list of 115 partner organizations will check the facts and label the stories accordingly. They won’t be checking every story, though, and Google won’t follow a set rule for resolving differing opinions over whether something is fake news.
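
For context, Google’s fact-check labels draw on schema.org ClaimReview structured data that partner fact-checkers embed in their pages. The article doesn’t spell out the mechanism, so treat the sketch of such markup below as illustrative only; the fact-checking organization, URL and verdict are hypothetical.

```python
# Illustrative sketch of schema.org ClaimReview markup, the structured data
# behind Google's fact-check labels. The organization, URL and verdict are
# hypothetical.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/checks/moon-hollow",  # hypothetical
    "claimReviewed": "Scientists confirm the moon is hollow",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",  # the label shown next to the story
    },
}

# A fact-checker would embed this as <script type="application/ld+json">.
print(json.dumps(claim_review, indent=2))
```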

That disagreement highlights another problem for fake news fighters: fake news isn’t always easy to spot or quantify. Smith points out that it isn’t always binary; often, the falsehoods lie on a continuum.

“Is it mostly right, but with an incorrect fact? Is it completely fabricated?” he asks, articulating the subtleties of some fake news. “So there is a range, and I think it’s hard to automate the identification or categorization of content with that nuance.”

That doesn’t mean people aren’t trying. Full Fact is one organization behind the Fake News Challenge, which organizes artificial intelligence experts to detect fake news using natural language processing and machine learning algorithms.
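
The first challenge framed the problem as stance detection: given a headline and an article body, decide whether the body agrees with, disagrees with, discusses, or is unrelated to the headline. Here is a minimal sketch of that kind of classifier; the toy data is invented and the model is far simpler than anything the entrants built.

```python
# A minimal stance-detection sketch in the spirit of the first Fake News
# Challenge: given a headline paired with an article body, predict whether
# the body agrees with, disagrees with, discusses, or is unrelated to the
# headline. The examples are invented; the real challenge used a large
# labelled corpus and far richer models than TF-IDF plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example concatenates headline and body so the vectorizer
# sees both; real systems model the pair relationship explicitly.
train_texts = [
    "Moon landing faked || NASA releases new archival Apollo 11 footage",
    "Moon landing faked || Historians say the conspiracy claim is baseless",
    "New tax plan announced || The plan raises rates on top earners",
    "New tax plan announced || Local team wins championship game",
]
train_labels = ["discuss", "disagree", "agree", "unrelated"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

test = "Moon landing faked || Footage shows the landings clearly happened"
print(model.predict([test]))
```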

It’s a good effort, but Smith says that it has its shortcomings. “None of the teams were able to produce a reliable model for categorizing content that has the nuance that a human would require to discern,” he says.

From technology to literacy

With that in mind, should we be using technology to pick news stories for readers, or simply to advise them? Smith says that technology has a place, but shouldn’t overstep its bounds.

We believe that Firefox users are smart people and are capable of making these decisions or discernments themselves.

Google won’t use its fact-checking information to alter search results, but Facebook wants to use its own algorithms to alter content rankings.

The social media giant has introduced a tag that enables people to report fake news stories (although the reporting option doesn’t appear to have rolled out across all countries yet). It has partnered with third-party organizations like Snopes that support Poynter’s fact-checkers’ code.

Facebook, which already collects vast amounts of data about how you interact with its site, has vowed to watch whether reading an article makes people less likely to share it. It will fold this into its rankings, it warned.

The thing is, Facebook’s anti-fake news measures aren’t working that well. Untagged copies of fake news stories are still showing up on its site. Are we really ready to entrust our news choices to its code?

“I’m not sure technology is going to be the answer,” says Richard Sambrook, deputy head of school and director of Cardiff University’s Centre for Journalism. He argues that online users are ultimately responsible for their own media literacy.

They also need to take responsibility for their own news diets – and realise that if you only consume junk, it’s not good for your health! More seriously, we all need to protect against only seeing our own views reflected back at us in filter bubbles or echo chambers.

That’s where the other part of Mozilla’s work will come in. Alongside product partnerships, “creative interventions” and research, MITI’s other weapon in the fight against the spread of online misinformation is literacy. Says Smith:

There is evidence that online knowledge and education are incredibly important to the next billion people coming online. What is lacking right now is a web or media literacy for those people, or resources for those people to use in understanding their information environment.

It isn’t just newcomers to the web who may need help with media literacy, other studies suggest. Stanford University’s recent research into this area found that young people – supposedly our savvy digital natives – are just as vulnerable as anyone else when it comes to critical thinking about what they read and see online.

Mozilla has focused on literacy for a long time, Smith points out. Under MITI, it will develop a web curriculum to help with media literacy, and continue investing in Mission:Information, an existing curriculum aimed specifically at teens.

Targeting kids will be critical, warns Sambrook. “Awareness is a big part of the answer, but we also need to take media literacy more seriously from junior school onwards,” he says. “Investment in media literacy will take a generation or more to catch up.”

Smith also cites other resources to help increase media literacy, including the University of Washington’s open source media literacy course “Calling Bullshit”, which is available free online. OpenSources is curating a list of credible and non-credible sources, with reasons for each classification, while Full Fact has a handy checklist along with a fact-checker to help verify claims.

There are many more online resources for fact-checking, but the challenge will be getting people to use them and develop their own critical faculties, rather than relying on some opaque algorithm somewhere to make their evaluations for them.

As new fake news techniques emerge, Smith doesn’t entirely rule out the use of technology to fight it. But how we apply that technology will be critical, especially as purveyors of fake news take advantage of new techniques such as the manipulation of video using AI.

“There are pushes to create tools that identify false information created through those means,” he says, adding that AI may play a part in identifying manipulated content in the future. “That will be pretty critical very soon.”

He doesn’t rule out the idea of a common standard for uniquely hashing fake content and storing the hashes in an accessible way, much as anti-malware companies use digital fingerprinting to identify malware. Other technologies could be used to accelerate literacy, such as privately notifying a person when they have shared content later found to be fake.
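
As a rough illustration of how such a standard might work, here is a minimal sketch under two assumptions: a shared registry of content hashes exists, and exact matching after light normalization is acceptable. A production system would need fuzzy or perceptual hashing, since an exact hash breaks on any edit.

```python
# A minimal sketch of the fingerprinting idea Smith floats: keep a shared
# registry of hashes of known-fake content and look new content up in it,
# much as anti-malware engines match signatures. Everything here is
# illustrative, and exact hashing is a simplification.
import hashlib

known_fake_hashes = set()

def normalize(text):
    """Crude normalization so cosmetic edits don't change the fingerprint."""
    return " ".join(text.lower().split())

def fingerprint(text):
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

def register_fake(text):
    known_fake_hashes.add(fingerprint(text))

def is_known_fake(text):
    return fingerprint(text) in known_fake_hashes

register_fake("Scientists CONFIRM the moon is hollow!")
print(is_known_fake("scientists confirm   THE MOON is hollow!"))  # True
```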

Unless we get this right, the outlook is dark, warns Sambrook, who envisages Smith’s ecosystem overrun with fake news, hopelessly polluted by an ocean of misinformation.

“The world is also becoming more polarised politically and less tolerant. I am afraid I see no signs of that being reversed. It may be a period, like the 1960s in the USA, where division eventually recedes, or it may end in war or civil violence. Given the disruption technology is bringing in all areas of the economy and employment, I’m not optimistic, I’m afraid.”

Technology may have a place in fighting that future, but ultimately it’s going to come down to us. Marshall McLuhan voiced it best in 1964, five years before researchers flipped the switch on the internet’s first router: “Faced with information overload, we have no alternative but pattern recognition.”

