
Robot tweets “I seriously want to kill people”, prompts police response

When Dutch police responded to a death threat made by a local Twitter account, they discovered the culprit was nothing more than an automated bot.

When Twitter user @jeffrybooks tweeted “I seriously want to kill people” in reference to an upcoming event in Amsterdam, police decided to pay the account owner a visit.

However, when officers questioned the owner, Jeffry van der Goot, they discovered that things were not quite as they seemed.

The 28-year-old Dutch software developer hadn’t actually typed the words himself.

Instead, he was running a bot, developed by technology student ‘Wxcafe’, which takes random words from his Twitter archive and uses an algorithm to string them together into more or less coherent sentences.
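The article doesn’t describe exactly how Wxcafe’s bot works, but the description matches the common “_ebooks” pattern: build a simple Markov chain from a user’s tweet archive, then walk the chain to produce new, vaguely plausible sentences. Below is a minimal illustrative sketch in Python; the file name, function names and chain order are assumptions, not the bot’s actual code.

```python
import json
import random
from collections import defaultdict

def build_chain(tweets, order=2):
    """Map each sequence of `order` words to the words that followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            chain[key].append(words[i + order])
    return chain

def generate(chain, max_words=20):
    """Start from a random key and follow the chain until it dead-ends."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    while len(output) < max_words and key in chain:
        output.append(random.choice(chain[key]))
        key = tuple(output[-len(key):])
    return " ".join(output)

if __name__ == "__main__":
    # Assumes a JSON file containing a list of tweet strings from the archive.
    with open("tweet_archive.json") as f:
        archive = json.load(f)
    print(generate(build_chain(archive)))
```

Because the output is stitched together from fragments of real tweets, such a bot can easily recombine innocuous words into something that reads like a genuine threat.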

In this instance the result proved to be a fatal mistake for the bot, which had been engaging in a conversation with one of Twitter’s estimated 23 million other bots. At the request of the Dutch authorities, it has now been terminated.

Speaking through his own Twitter account, van der Goot said:

I just got visited by the police because of a death threat my Twitter bot made.

So I had to explain Twitter bots to the police. And I can't really blame them for having to take it seriously.

I'm going to delete my bot for now, because that's what they want.

Speaking to The Guardian, van der Goot appeared a little unsure about where legal responsibility for the bot’s actions should lie. While he admitted to starting the bot and running it under his own ‘name’, it was, he said:

A random generator, so yes it is possible that something bad can come out of it, but to treat it as if I made that threat does not make sense to me. I feel very conflicted about it, I can see their point but it does not feel right to me that the random output of a program can be considered something I said.

Likewise, the bot’s developer Wxcafe said via Twitter that police involvement was scary, adding:


Of course since I don't have any legal knowledge I don't know who is/should be held responsible (if anyone) but like. kinda scared right now.

This isn’t the first time we’ve written about a robot getting its owners into trouble. In January, we told how a bot went on a drug-buying spree and ended up getting its stash, and itself, seized by police.

In that case, the bot’s owners believed their freedom of expression rights would protect them from prosecution.

I would imagine it’s likely that in any case like this, legal responsibility will lie with whoever programmed the bot – arguing artistic licence and freedom of speech probably isn’t going to get you off the hook.

Whether that holds true in all cases remains to be seen, and will likely be determined as and when a case featuring the actions of a bot appears before a judge.

One question remains, however: how did the police find out about the tweet in the first place? According to CBR, the offending message was not reported to them, meaning they must have come across it some other way – perhaps internet surveillance has some legitimate value after all?

Composite image of robot and hashtag courtesy of Shutterstock.