
Twitter says it’s cracking down on abuse – but is it?

Twitter's moves to tackle abuse on the platform seem to be making their mark, but there's still a way to go before everyone feels safe there

Twitter’s internal numbers show it’s getting better at fighting abuse on the platform.

On Thursday, Ed Ho, general manager of Twitter’s Consumer Product and Engineering department, posted some top-line statistics regarding the impact of the company’s latest anti-abuse efforts.

As Ho tells it, last January, the company committed to speeding up its work to make Twitter a nicer place.

In February, it gave people more ways to report targeted harassment, including reporting tweets that mention you even if the author has blocked you. It also took steps to identify the whack-a-moles who get suspended only to go off and open new accounts.

It took other steps as well, as Ho detailed in a February blog post, including collapsing potentially abusive or low-quality tweets and introducing “safer search”, which filters out potentially offensive content.

So, how’s all that working out? Well, Ho said, there’s still work to be done, but people are now experiencing “significantly less abuse” on the platform than they were six months ago.

The stats he provided to back that up:

  • Twitter’s now taking action on 10 times the number of abusive accounts every day compared with the same time last year. It also now limits account functionality or suspends thousands more abusive accounts each day.
  • Its new systems have removed twice the number of repeat offender accounts – the whack-a-moles – over the past four months.
  • The time-outs are working. Accounts that demonstrate abusive behavior are now limited for a time, and they’re told why. As a result, naughty accounts given time-outs are generating 25% fewer abuse reports, and about 65% of them only need one time-out before they get the message.
  • The notification filters and ability to mute potentially offensive keywords are paying off in the form of fewer unwanted interactions. Blocks after @mentions from people you don’t follow are down 40%, Ho said.
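
Twitter hasn’t said how its notification filtering is implemented, but the keyword-mute concept in that last bullet is straightforward: drop any incoming mention that contains a term on the user’s mute list. Here’s a minimal, purely illustrative Python sketch; the muted_terms list and should_mute helper are our own inventions, not Twitter’s code:

    import re

    def should_mute(text: str, muted_terms: list[str]) -> bool:
        """True if the text contains any muted term as a whole word (case-insensitive)."""
        return any(
            re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
            for term in muted_terms
        )

    # Hypothetical mute list and notifications, for illustration only.
    muted = ["spoilers", "examplehorridword"]
    notifications = [
        "@you loved your talk!",
        "@you no spoilers but the ending is wild",
    ]

    # Only mentions that don't trip the filter get shown.
    print([n for n in notifications if not should_mute(n, muted)])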

Twitter didn’t offer up the raw numbers behind these statistics. Del Harvey, the company’s vice-president of trust and safety, told The Verge that Twitter will consider releasing raw data in the future.

It’s nice to hear that the war against harassment is paying off, but Ho is right: there’s still a lot to be done. That point was strongly underscored by a report from BuzzFeed, posted on Tuesday, about how Twitter is still slow to respond to incidents of abuse unless they go viral or involve reporters or celebrities.

Basically, when it comes to getting Twitter to enforce its own rules against abuse, it pays to know somebody. Otherwise, far too often, troll targets are left staring at streams of sewage in their Twitter feeds while the company blithely sends form emails that make it clear somebody’s asleep at the wheel.

A recent example: earlier this month, as BuzzFeed reports, Maggie H. opened up her Twitter mentions and found her face – an image screengrabbed from her Twitter profile page – Photoshopped into the crosshairs of a gunsight. The image had been posted by a user she had blocked. Another troll account tweeted a similar gunsight image with her face.

A subsequent tweet mentioned the small rural town in which Maggie lives.

She considered it stalking and filed an abuse report with Twitter. Four days later, Twitter said in a form email that the abusive account hadn’t violated its terms of service. Only after a reporter from BuzzFeed asked about the abuse reports did the troll account get suspended.

Maggie’s situation was neither new nor a one-off.

The same thing happened last August, when Twitter told Medium software engineer Kelly Ellis that a string of 70 tweets calling her a “psychotic man hating ‘feminist’” and wishing that she’d be raped did not violate company rules forbidding “targeted abuse or harassment of others.”

Again, it wasn’t until after news outlets reported on the story that the violent, abusive tweets were finally taken down.

Over the years, Twitter has fought a long, hard battle with trolls and abuse: something that then-CEO Dick Costolo, in a February 2015 internal memo, famously admitted the company sucked at.

For the past few years, it’s been trying really hard to not suck.

Besides the steps that Ho outlined in the Thursday blog post, Twitter has also done things like suspend several alt-right accounts in an apparent crackdown on hate speech and violent threats.

Separately, in February, Google’s Jigsaw unit released an artificial intelligence (AI) tool called Perspective: an API that uses machine learning models to identify how troll-like a comment is.
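
Perspective is exposed as a simple REST endpoint: you POST a comment and the attributes you want scored, and it returns a toxicity probability. As a rough sketch of the kind of request the publicly documented Comment Analyzer API accepts (the API key placeholder and sample comment are ours, and the request shape may have changed since launch):

    import requests

    # Hypothetical placeholder; real keys are issued via the Google Cloud console.
    API_KEY = "YOUR_API_KEY"
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    payload = {
        "comment": {"text": "you are a horrible person"},
        "requestedAttributes": {"TOXICITY": {}},
    }

    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()

    # summaryScore.value is a probability-like score between 0 and 1;
    # higher means the model considers the comment more toxic.
    score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(f"Toxicity: {score:.2f}")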

Still, as BuzzFeed reports, there are scads of abusive accounts that have been reported for serial harassment and yet still live on, using the platform to threaten and torment victims.

The trick to getting Twitter’s attention seems to be, as BuzzFeed phrases it, a “cheat code of media involvement.” Kelly Ellis and Maggie H. say that there’s basically no other way to get abuse reports taken seriously than to pull some strings.

BuzzFeed quotes Ellis:

I will even sometimes DM people I know there. One case that happens pretty frequently is if someone is harassing me with multiple accounts and all the reports will come back as Twitter saying it’s not abusive. But then I talk to a friend at Twitter who says it definitely is and helps get it taken care of for me. There’s some disconnect going on internally there with their training, I think.

…and Maggie H.:

There’s no way to appeal to them and tell them why they got the decision not to remove tweets wrong, so people who are threatened basically have no choice but to go to someone with a bigger platform.

In another instance of abuse reports being dismissed, a Twitter engineer apologized for the issue, echoing Dick Costolo’s “we have to do better” line. Costolo wrote that line years ago.

Twitter, tell us, please: what’s taking so long? After years of throwing technology solutions at the problem, and after convening a Trust and Safety Council to guide you, it still seems a daunting task to identify rape and murder threats, and gunsight images trained on individuals, as abuse.

You’re right: you have to do better.


2 Comments

It is about training and not relying on AI and canned responses. You need employees better empowered and trained to recognize the patterns when the AI fails.

While I respect that they’re trying, they need to stop using computers for this.

They recently suspended my account 2 days after I reported another user who was breaking their TOS by using 2 accounts to gang up and harass people (and used both accounts to mass DM me 70+ times in an hour).

When I sent a “why is my account suspended?” message I got the generic “This account was suspended for targeted harassment” message. When I e-mailed for more information they continued to send me the generic message and refused to tell me if the account was reported by another user or if they found the very act of reporting another user was considered “targeted harassment”. Maybe their system glitched and my account got suspended instead of the other one, who knows! I’ve asked them twice now and both times I just get the generic canned reply.

I’m not asking them for an essay on why they felt my account needed to be suspended indefinitely but the fact that they can’t answer two simple questions about how and why they came to the decision to suspend my account is surprisingly unprofessional.

