
15,000-strong army of Twitter robots found spreading cryptocurrency spam

Researchers unearthed an army of 15,000 robot Twitter accounts plying a cryptocurrency scam.

Twitter may be fighting the bot battle, but it’s still got plenty of multi-legged e-millipedes crawling around its ecosystem.
That was evidenced by a large collection of cryptocurrency scam-spewing robot accounts – at least 15,000 of them – that Duo Security researchers found while conducting a three-month study.
The researchers announced the find on Wednesday at the Black Hat security conference.
The bots in this case were aimed at parting you from your precious cryptocoins with bogus posts – posts stuffed with hashtags of the #Blockchain #Crypto #tokens #bitcoin #eth #etc #loom #pundix #icx #ocn #nobs #airdrop #ICO #Ethereum #giveaway variety.
Of course, Twitterbots can be useful: they help keep weather, sports and other news updated in real-time, and they can help find the best price on a product or track down stolen content.
Bad bots, however, are the bane of Twitter’s existence.
For example, Twitter has recently purged tens of thousands of accounts associated with Russia’s meddling in the 2016 US presidential election.
More recently, in June, Twitter described how it’s trying to fight spam and malicious bots proactively by automatically identifying problematic accounts and behavior.
The cryptocurrency scambots found by Duo led to some valuable insights into both how robot accounts operate and how they evolve over time to evade detection.
Right now, the Duo Security researchers say the bots are still functioning, imitating otherwise legitimate Twitter accounts, including news organizations, to bleed money from unsuspecting users via malicious “giveaway” links.
The researchers even found Twitter recommending some of the robot accounts in the “Who to follow” section in the sidebar.
Typically, the scam started with a spoofed copy of an existing cryptocurrency-affiliated account.
That spoofed account would have what appeared to be a randomly generated screen name – say, @o4pH1xbcnNgXCIE – but it would use a name and profile picture pilfered from the existing account.
Bolstered by all that genuine-looking window dressing, the bot would reply to real tweets posted by the original account.
The replies would contain a link inviting the victim to take part in a cryptocurrency giveaway.
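To make the pattern concrete, here’s a minimal sketch – ours, not Duo’s actual tooling – of a heuristic that flags this kind of spoofed reply: same display name as the account being replied to, but a different, random-looking screen name. The dict fields are assumptions loosely modelled on Twitter API user objects.

```python
def looks_random(handle: str) -> bool:
    """Crude randomness proxy: count transitions between lowercase,
    uppercase and digit runs. Human-chosen handles rarely have many."""
    classes = ["u" if c.isupper() else "d" if c.isdigit() else "l"
               for c in handle]
    transitions = sum(1 for a, b in zip(classes, classes[1:]) if a != b)
    return transitions >= 5

def looks_spoofed(reply_author: dict, original_author: dict) -> bool:
    """Each argument is a dict with 'name' (display name) and
    'screen_name' keys (an assumed shape, not an official schema)."""
    same_display_name = (reply_author["name"].strip().casefold()
                         == original_author["name"].strip().casefold())
    different_handle = (reply_author["screen_name"].casefold()
                        != original_author["screen_name"].casefold())
    return (same_display_name and different_handle
            and looks_random(reply_author["screen_name"]))

# Example: a mimic posting under a legitimate account's display name.
print(looks_spoofed(
    {"name": "Some Exchange", "screen_name": "o4pH1xbcnNgXCIE"},
    {"name": "Some Exchange", "screen_name": "SomeExchange"},
))  # True
```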
The accounts responsible for spreading the malicious links used increasingly sophisticated techniques to avoid automated detection, the researchers said (a defensive sketch follows the list), including:

  • Using Unicode characters in tweets instead of traditional ASCII characters.
  • Adding various white space between words or punctuation.
  • Spoofing celebrities and high-profile Twitter accounts in addition to cryptocurrency accounts.
  • Using screen names that were typos of a spoofed account’s screen name.
  • Performing minor editing on the stolen profile picture to avoid image detection.
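The first two tricks in that list can be partially neutralized by normalizing tweet text before matching it against known scam phrases. Here’s a minimal sketch of that idea – an assumed approach, not something taken from the paper, and the spam phrases are made up for illustration. Note that Unicode NFKC normalization folds many, though not all, look-alike characters back to plain ASCII.

```python
import re
import unicodedata

# Illustrative phrases only; a real blocklist would be far larger.
SPAM_PHRASES = {"participate in the giveaway", "send eth to receive"}

def normalize(text: str) -> str:
    # NFKC folds fullwidth letters, stylized digits and other
    # compatibility look-alikes back to their plain equivalents.
    text = unicodedata.normalize("NFKC", text)
    # Collapse runs of whitespace, punctuation and underscores into
    # single spaces, defeating injected-spacing evasion.
    text = re.sub(r"[\W_]+", " ", text)
    return text.lower().strip()

def is_spammy(tweet: str) -> bool:
    cleaned = normalize(tweet)
    return any(phrase in cleaned for phrase in SPAM_PHRASES)
```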

Pumping up popularity

One job of these bots was to like tweets, artificially pumping up a given tweet’s popularity.
The researchers noticed that these “amplification bots” were also used to increase the number of likes for the tweets sent by other robot accounts, to give the scam an air of authenticity.
When the researchers mapped out the connections, they found clusters of bots that received support from the same amplification bots, thus binding them together.
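Here’s a minimal sketch of that mapping step, assuming the networkx library and a simple feed of (liker, tweet) pairs – the data shape is our assumption, not Duo’s. Tweets boosted by the same amplification bots end up in the same connected component.

```python
import networkx as nx

def cluster_by_amplifiers(likes):
    """likes: iterable of (liker_screen_name, tweet_id) pairs
    (hypothetical input, e.g. harvested via the Twitter API)."""
    g = nx.Graph()
    for liker, tweet_id in likes:
        # Bipartite edge: an account liked a tweet. Prefixes keep the
        # two node types distinct in one graph.
        g.add_edge(f"user:{liker}", f"tweet:{tweet_id}")
    # Each connected component groups tweets bound together by
    # shared amplification accounts.
    return [{n for n in comp if n.startswith("tweet:")}
            for comp in nx.connected_components(g)]
```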
The researchers’ paper goes into far more detail about how complicated it is to research bots in the first place – one vexing problem, for example, is the ongoing lack of data on how many bots are on Twitter.
Does Twitter even know, itself? Can it at least give an estimate?
Unfortunately, it doesn’t matter if the answer to either question is “Yes”, given that the company doesn’t make such data public.
That made it tough for researchers to verify the accuracy of their “bot or not” models by comparing with public tweet data – instead, they had to cross-check classifiers against small data sets of already-identified bot accounts.
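In code, that cross-check boils down to scoring a classifier against a small hand-labelled set. A minimal sketch, with illustrative names rather than anything from the paper:

```python
def evaluate(classifier, labelled_accounts):
    """classifier: callable returning True if it judges an account a bot.
    labelled_accounts: list of (account, is_bot) pairs from a small,
    already-identified ground-truth set."""
    tp = fp = fn = 0
    for account, is_bot in labelled_accounts:
        predicted = classifier(account)
        if predicted and is_bot:
            tp += 1          # correctly flagged bot
        elif predicted and not is_bot:
            fp += 1          # legitimate account wrongly flagged
        elif not predicted and is_bot:
            fn += 1          # bot that slipped through
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```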

What next?

This is just the beginning, the researchers said in a post about the research.
They’ve open-sourced the tools and techniques they developed during their research and urged others to continue to build on the work and create new techniques to identify and flag malicious bots.
It’s all going towards keeping Twitter and other social networks “a place for healthy online discussion and community,” they said.
Readers, if any of you take the code and run with it, we’ll be interested to hear what you come up with, so please do let us know!
