
Rise of the Twitterbots increases pressure on Twitter chief Dorsey

'Up to 15%' of Twitter accounts are bots posting spam, propaganda and fake news and driving away advertisers and investors - but social media firms are fighting back

Already having a hard time trying to convince advertisers to return, Twitter’s CEO Jack Dorsey is facing increased pressure to step down after it was revealed that a larger number of accounts than originally thought are likely to be fake.

Over the weekend, the UK’s Sunday Times reported that up to 48m – or 15% – of the social media giant’s 319m users were in fact bots.

That’s nearly twice the company’s own estimate that up to 8.5% of its accounts are managed by “bots”.

The paper’s figure reflected a University of Southern California study, which estimates that between 9% and 15% of Twitter accounts “exhibit social bot behaviors”. The study describes “social bots” as social media accounts that, instead of being managed by humans, are using technology to emulate human behavior: “controlled by software, algorithmically generating content and establishing interactions”.

The research team’s analysis revealed three distinct types of social bot:

  • Legit-looking accounts that are promoting themselves, such as recruiters and porn performers.
  • Spam accounts that are very active but have few followers.
  • Accounts that use automated applications to post content from other platforms, such as YouTube and Instagram, or to post links to news articles.

While bots are often seen in a negative light, the researchers acknowledged that many social bots have a positive role to play in society, performing useful functions such as disseminating news and publications and coordinating groups of volunteers. On the other hand, they also point out that these social bots are increasingly being used to

… manufacture fake grassroots political support, promote terrorist propaganda and recruitment, manipulate the stock market, and disseminate rumors and conspiracy theories.

Twitter isn’t the only social media giant struggling with this problem. Last week, advertisers in the UK, including the UK government, boycotted Google after an investigation by The Times, sister paper to The Sunday Times, revealed that brands were being promoted next to jihadist videos on Google’s YouTube platform. Content from Mercedes-Benz and Marie Curie, among others, was being displayed next to content posted by supporters of Islamic State and other extremist groups.

Since YouTube advertisements can generate as much as $7.60 for their posters per 1,000 views, the brands were likely to have unwittingly channeled money to terrorist supporters.

USA Today reports that Google has responded by promising to:

  • Pull online ads from controversial content
  • Give brands more control over where their ads appear
  • Deploy more people to enforce its ad policy.

Philipp Schindler, Google’s chief business officer, pledged in a blog post:

Starting today, we’re taking a tougher stance on hateful, offensive and derogatory content.

He also confirmed that the company will be “developing new tools powered by its latest advancements in AI and machine learning” to help it review questionable content more quickly.

Earlier today, The Drum reported that Twitter is also increasingly turning to automated technology to help it remove offensive material more quickly. Twitter, in fact, revealed this increased use of smart software in its biannual transparency report.

This comes as Facebook rolls out a new alert capability in an attempt to combat fake news. The new feature, according to the Guardian, flags content as “disputed”. It was trialed on a story that falsely claimed thousands of Irish people were brought to the US as slaves.

Attempting to share the story prompts a red alert stating the article has been disputed by both Snopes.com and the Associated Press.

Whether it’s fake news, extremist content, stock market manipulation or bullying, the social media giants have a responsibility to do everything in their power to remove inappropriate content from their sites. Keeping up with the wrongdoers will be a constant battle, but if they want their business models to work, they must give advertisers confidence that their brands will be shown in the best light, and users confidence that they are interacting with appropriate and legitimate content.

We have yet to see whether they can keep up. Or are we seeing the beginning of the end of the social media era?

1 Comment

Harassment is so prevalent on Twitter that it was no surprise Twitter made Ars Technica’s Death Watch in 2017. They suspend some accounts while protecting others. Their TOS is constantly violated, but ignored in certain situations.

One of the largest draws of Twitter is being able to reach out and communicate with your favorite celebrities, etc. But they’re now leaving Twitter in droves over the random harassment from the 4chan crowd.

I really don’t think there’s anything Twitter can do to resurrect the platform. The only die-hard users seem to be those who use it for the Anonymous movement, and they’re also the biggest breakers of the TOS. And the fact that they allowed accounts such as Bullyville to harass individuals for years before putting a stop to it?
