Last week, an internal memo from Twitter co-founder and CEO Jack Dorsey leaked online. The memo, which came after Rose McGowan’s account was temporarily disabled over her Harvey Weinstein tweets, included new policies for violent groups, hate speech and revenge porn.
Well, having the calendar leaked wasn’t the way Twitter had planned it, Dorsey said, but the memo was accurate: the company planned to release an “internal shipping calendar” detailing when multiple bully-blocking and troll-fighting features will be implemented. The calendar includes changes Twitter plans to make to the Twitter Rules, how it communicates with people who violate them and how its enforcement processes work.
“This makes us feel uncomfortable because it’s a work in progress & rough, but it’s the right thing to do,” Dorsey said. “We believe showing our thinking and work in real-time will help build trust.”
Here’s some of what Twitter’s calendar, released on Thursday, has in store for us over the coming months:
October:
Accounts found posting nonconsensual nudity – what’s also commonly called revenge porn – will be suspended. The category also includes content taken without the victim being aware, such as upskirt photos and video from hidden or hacked webcams. Twitter says the new policies “err on the side of protecting victims.” The company also says that users can expect “a better experience for suspension appeals” if they believe an account was wrongfully suspended.
November:
Twitter will ban hateful imagery, hate symbols and hateful display names. That last one includes nameflaming: when someone changes their display name to insult someone.
Twitter will begin notifying suspended users via email. Accounts belonging to “groups that use violence to advance their cause” will be suspended. Hate speech and imagery will come with a warning, and hate images will be banned from headers and avatars. Internally, Twitter will begin using a new system to prioritize reports about accounts that violate its rules.
December:
Twitter now removes content that includes violent threats or wishes for serious harm. It will expand that to include content that glorifies or condones “acts of violence that result in death or serious harm.” The platform will introduce improved ways for people who see abuse – what Twitter calls “witnesses” – to report what they see. Twitter will send updates on what, if anything, comes of a witness’s report. Twitter also says it will be using “past relationship signals” to curb “unwanted sexual advances.”
January:
Those updates on witness reports will be rolled out to all users.
It all sounds good, doesn’t it? It always sounds good when Twitter promises to stop sucking at dealing with abuse and trolls. But somehow, Twitter’s sucking persists.
In fact, Axios has tallied five other times since 2013 that Twitter has pledged to crack down on abuse. This is Twitter’s calendar of nice-tries, by Axios’ tally:
- 28 July 2013: Rabid trolls prompted Twitter to promise a Report Abuse button on all messages.
- 2 December 2014: Twitter rolled out new anti-trolling tools and promised quicker abuse investigation.
- 4 February 2015: Then-Twitter CEO Dick Costolo admitted that the company sucked at dealing with trolls.
- February 2016: Twitter established the Twitter Trust & Safety Council to help it banish trolls. This council included organizations such as the Anti-Defamation League, the Center for Democracy and Technology, and perhaps most notably, Feminist Frequency – founded by Anita Sarkeesian, whose experiences facing unrelenting internet abuse have gained worldwide attention.
- 10 March 2017: Twitter finally scrambled those anonymous egg accounts, allowing users to filter them out, along with accounts that have unverified email addresses or phone numbers.
That can’t be a complete list, can it? It feels like Twitter comes up with some new way to clean itself up at least every couple of months.
Still, in spite of all its efforts to come up with new systems and new processes to automatically strain out the sludge, we get stories like that of Xyla Foxlin: one of the more recent tales of Twitter users harassed for months. It took Foxlin two months of reporting abuse before the troll was tracked down and the abusive account suspended.
During that time, she said, Twitter support was “a bot.” In other words, getting a real, live person to actually review the harassment and take action was a grueling process. It was only when Foxlin got help from a friend who works at Twitter that she got relief from the insults, threats and doxing.
Hers isn’t an isolated story, and her salvation – knowing somebody who works at Twitter – isn’t a one-off, either. In July, BuzzFeed reported that Twitter, after all its efforts to automagically make trolls disappear, is still slow to respond to incidents of abuse unless they go viral or involve reporters or celebrities.
Basically, when it comes to getting Twitter to pay attention to its own rules against abuse, it pays to know somebody.
I didn’t see anything on Twitter’s calendar that addresses the fact that there aren’t enough humans in the mix. We need humans to review each report and ascertain the nuance and context of a threat or insult. We need humans to intercede in order to make Twitter a safer space.
Is such a thing as human intervention possible at the scale of Twitter’s surging traffic? Human intervention, as in egalitarian protection applied not just to harassed celebrities or users whose stories have gone viral, but to every harassed user – the famous and the obscure?
Maybe that’s a pie-in-the-sky notion from a technical perspective. But it would be good to see a Twitter troll-easement day planner that mentioned hiring a whole hell of a lot more people to dive into the stream of effluent on behalf of the harassed.