
‘Bot or Not?’ – a game to train us to spot chatbots faking it as humans

Can you tell whether you're talking to a human or AI?

Who doesn’t know their mother’s maiden name?!
A bot that’s trying to convince you it’s human but hasn’t been programmed to answer that question (or to improvise very convincingly), that’s who. Or, as I said when I finished playing a new online Turing Test game called Bot or Not: NAILED IT!!

Bot or Not asking for my mother’s maiden name

Bot or Not is an online game that pits players against either a bot or another human. Over a three-minute chat, it’s up to players to figure out which one they’re talking to, a task that forces them to question not only whether their opponent is human but exactly how human they themselves are.
The creators of Bot or Not – a Mozilla Creative Awards project that was conceived, designed, developed and written by the New York City-based design and research studio Foreign Objects – say that bots are growing increasingly sophisticated and proliferating both online and offline. It’s getting tougher to tell who’s human: handy in customer-service situations, but a bit scary when you think about scam bots preying on us on Tinder and Instagram, or corporate bots that try to steal your data.

The friendly face of pervasive surveillance

In their explanation of Bot or Not’s purpose, the game’s creators point to a recent Gartner industry report that predicted that, by 2020, the average person would have more conversations with bots than with their spouse.
Think about it: how often do you talk to voice assistants like Siri or OK Google? Chatbots have become seamlessly integrated into our lives, presenting what Foreign Objects calls “a massive risk to privacy”, and they’ll remain a risk for as long as collecting personal data remains the primary business model for major tech platforms.

Big tech knows that in order to get the most data out of our daily lives, they need us to invite bots into our homes, and to enjoy ourselves while we do so.

One example: smart speakers, those always-listening devices that are constantly surveilling our homes. As we’ve reported in the past, smart speakers mistakenly eavesdrop up to 19 times a day. They record conversations when they hear their trigger words… or something that more or less sounds like one of their trigger words. Or a burger advertisement. Or, say, a little girl with a hankering for cookies and a dollhouse.
Last year, smart-speaker makers found themselves embroiled in a privacy backlash after news broke that smart speakers from both Apple and Google were capturing voice recordings that the companies were then letting their employees and contractors listen to and analyze. Both companies suspended their contractors’ access.


What does Bot or Not have to do with all that? Foreign Objects says that while government regulation is struggling to keep up with new technologies, there’s little public awareness or legal resistance to stop companies from building a global surveillance network on an unprecedented scale, something the plethora of smart-assistant devices has already gone a long way toward achieving.

Governments are not only lagging behind on policy, they are also part of the problem.

This is about more than these devices listening in on our private moments. It’s about big-tech corporations willingly handing over citizens’ private data to police without consent, Foreign Objects says.

As chatbots slide seamlessly into our personal and domestic lives, it has never been more important to demand transparency from companies and policy initiative from regulators.

Smart speakers running on artificial intelligence (AI) are one thing. Chatbots, however, are taking data interception to a whole new level, say the creators of Bot or Not:

In the hands of big platforms, chatbots with realistically human-like voices are openly manipulative attempts to gather our data and influence our behaviours.

They point to advanced “duplex” chatbots released in the last few years by Microsoft and Google, so-called because they can speak and listen at the same time, mimicking the experience of human conversation. If you’re wondering how that might feel, you can look to Google’s Duplex neural network AI, introduced last year and designed to sound and respond like a human being, down to all the “umms” and “aahs.”
It was too real. Google faced a backlash over its failure to disclose that the supposed customer on the other end of the line, heard in one demo booking an appointment with a human hairdresser, was actually a bot.
Sociologist of technology Zeynep Tufekci’s response at the time:

[The lack of disclosure is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
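Mechanically, the “duplex” part is easy to picture. Below is a minimal, purely illustrative Python sketch of a bot whose listening and speaking run concurrently rather than in strict turns, complete with filler noises while it “thinks.” Every name in it is a hypothetical placeholder: this is not Google’s or Microsoft’s actual API, just the shape of the idea.

import asyncio
import random

# Hypothetical stand-ins for streaming speech-to-text and text-to-speech;
# a real duplex bot would pipe live audio through services like these.
async def next_utterance() -> str:
    await asyncio.sleep(random.uniform(0.5, 1.5))  # the caller talks intermittently
    return random.choice(["hello?", "do you have anything Tuesday?", "for two people"])

async def synthesize(text: str) -> None:
    print(f"bot: {text}")

async def listener(heard: asyncio.Queue) -> None:
    # Keeps transcribing even while the bot is mid-sentence.
    while True:
        await heard.put(await next_utterance())

async def speaker(heard: asyncio.Queue) -> None:
    # Replies when it has something to say; emits filler while waiting,
    # which is what made Duplex's "umms" sound so disarmingly human.
    while True:
        try:
            utterance = await asyncio.wait_for(heard.get(), timeout=1.0)
            await synthesize(f"mm-hmm, {utterance!r}, let me check...")
        except asyncio.TimeoutError:
            await synthesize("umm...")

async def main() -> None:
    heard: asyncio.Queue = asyncio.Queue()
    tasks = [asyncio.create_task(listener(heard)),
             asyncio.create_task(speaker(heard))]
    await asyncio.sleep(5)  # let the "call" run for five seconds
    for task in tasks:
        task.cancel()

asyncio.run(main())

Run for a few seconds, the two loops overlap the way a human conversation does: the listener keeps transcribing even while the speaker is mid-“umm.”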

Deception: “It’s a feature, not a bug”

Google later added a disclosure feature to Duplex’s interactions, but Bot or Not’s creators aren’t sure that a warning label is enough. They liken these human-like voice chatbots to deepfakes in their potential to give rise to entirely new forms of deception and abuse, particularly for those who are already vulnerable to bot-based scams, such as the elderly.
These things are meant to trick us into thinking they’re human, Foreign Objects points out. Google didn’t screw up with those “umms” and “aahs.” Deception is part and parcel of the design:

There is a fundamental contradiction in human-like service bots. On one hand, legally and ethically, they need to disclose their artificiality; on the other, they are designed to deceive users into thinking, and acting, as if they were also humans. Duplex stunned audiences because its ‘um’s and ‘ah’s’ mimic the affect and agency of a fellow human being.

I found Bot or Not pretty easy to nail as a bot. I mean, come on, it didn’t know its own mother’s maiden name.

But would I have the same ease with Google Duplex? … and what does it all matter?
It matters when bots/AI/voice assistants get pulled into court to provide evidence in trials, for one. It’s happened before, Foreign Objects points out: in 2017, Amazon had to fight to keep recordings from its Echo IoT device out of court in a murder case.
Amazon claimed that Alexa’s data was part of Amazon’s protected speech… which, some have argued, might bestow First Amendment protections on it. And this is why that matters, according to Foreign Objects:

In the US, First Amendment protections would mean that the makers of bots, like Google, Amazon and countless others, could not be held responsible for the consequences of their creations, even if those bots act maliciously in the world. All the same, … insisting that expressions made by ‘bots’ are strictly the speech of their creators comes wrapped up in its own complications, especially when humans are conversing daily with bots as friends, therapists, or even lovers.

In light of AI advancement, it’s important to stay on guard as we engage with chatbots in ever more intimate contexts such as these. We should all bear in mind that no matter how “LOL,” “IDK” and “ahhh”-ish they come off, they are, in fact, data-gathering surveillance tools. Does it matter whether it’s corporations or crooks trying to get at our data?
Either way, Foreign Objects says, this is privacy invasion in the ever-growing web of pervasive surveillance.

11 Comments

A shift in mindset regarding personal information is in order. PI has been a problem for decades. As one example, witness the issues with medical records, which belong to the doctor/hospital/clinic rather than to the patient who generates the information and who, until recently, often did not have easy access to their own data.
Personal information should be reclassified as personal property and subject to the property rights of the owner. As an example, if I make a transaction with Amazon, I might grant them a limited license to use my property only to the extent needed to complete the transaction, with that information discarded immediately afterward. Any other use of the data would require upfront and explicit permission, and in the case of data delivered to outside sources for profit, I’d want payment for the use of my property, just like residential property that I rent or lease to you. This would create a whole new dynamic and admittedly scramble the business models of companies like Amazon, Facebook, etc., but it would go a long way towards rebuilding a more honest, transparent and safe net environment.


That’s actually a great idea, except for calling it property “like residential property,” maybe. Our PI should be considered our IP, and have all the same restrictions and problems for the companies using it as they cause us with all the things they “grant a license” to use of theirs. (e.g. If you suspect they might be sharing your PI improperly, just send a takedown notice and they must close up shop based solely on your allegation or face full legal liability for anything that information is used for by anyone else, etc.)


Well, we have something very similar to what you’re asking for in the EU: it’s called the GDPR. It’s one thing to have a law; it’s another to force the “big techs” to comply.


A.I. has masqueraded as human for a long time.
Many of the duplex bots and ubiquitous chatbots or twitbots can be pretty convincing (ergo this article). But one technique that perplexes me is fake keyboard sounds on voice support calls. Comcast comes first to mind, but there are others doing it.
I see you’d like to talk with an agent about your. [Internet]. connection. Let me see if someone’s available.
clickity click click clack
With limited responses at their disposal, such as an infinite loop of “I’m sorry, I didn’t get that. Please say yes or no,” they’re not fooling anyone. Still, I’ve always wondered why those SFX are added to suggest that a sentient being is “typing” into their support manual database. They waste my time on a charade instead of connecting me two seconds sooner.
My inner cynic asserts they *want* me to wait longer and simultaneously let me know they think I’m an idiot for doing it.


I don’t want to sound paranoid, but what are the chances that the results of this “game” will be used to improve the AI underlying the bots? This game provides exactly the sort of real-world testing that you would want in order to improve the algorithms that bots use, doesn’t it?


Thank you for your response. My algorithm is complete. I can now properly emulate a real human.


I really do think that there need to be some ethics around this. For example, I use an AI tool all the time to help me with spelling and grammar, and I have called my internet service provider to “refresh” my modem. But to use a bot and not tell anybody about it is a serious low (e.g. Google Duplex). I had to YouTube some of those calls, and I was amazed and shocked at the same time by the voice inflection, mannerisms, etc.
I really think there need to be AI laws!


I hate it when they put so many spelling errors into bots to make them look human. Also, this bot seems to go political quite fast. Tell it you support Trump and it will hate you for it. hahahaha


I tried this “test” many times, and I must say this “Turing Test” failed for me every time: I guessed correctly that it was a bot every time. The responses came back too quickly, and they were nonsense or out of context. I also never got a real human to play against, which was disappointing.

