
Artist rigs up Google Assistant to (sometimes) fire a gun on command

"A robot may not injure a human being or, through inaction, allow a human being to come to harm." But, what if a robot breaks the first law of robotics?

Isaac Asimov’s first law of robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The robotics laws are fictional, notes artist and robot maker Alexander Reben. So, he asked himself, why not make a robot that breaks the first law? He then created a robot that punctures your finger with a lancet, as you can see in this video:

…after which he continued his inquiry into robot-human interaction, which has ranged from pleasure (robots that massage your scalp) to intimacy (cardboard robots as cute as baby seals that get people to open up by asking them intimate questions, which people seem pretty happy to answer) and on up to ethics – as in, robots that could be used to kill people.
A recent video from Reben shows the artist deploying artificial intelligence (AI), in the form of Google Assistant, to shoot an air pistol at an apple.

During the TED talk in which he displayed the intimacy/pleasure/stabby robots, Reben noted that the finger-puncturing robot chose whether to stab each person in a way that even he couldn’t predict.
Reben, who claims that this is the first robot “to autonomously and intentionally break Asimov’s first law,” says that the robot decides, for each person it detects, whether to injure them – and that the decision is unpredictable.
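Reben hasn’t published the logic behind that per-person choice, but it’s worth noticing how little code an “unpredictable decision” can amount to. The sketch below is purely hypothetical – the function name, the detection flag and the 50% probability are all assumptions of mine, not Reben’s design – and it simply shows that a choice can be unpredictable to an observer while being nothing grander than a pseudo-random draw (a point several commenters below also make).

```python
import random

def should_injure(person_detected: bool, p_injure: float = 0.5) -> bool:
    """Hypothetical stand-in for the stabbing robot's 'decision'.

    Reben hasn't published his logic; this just illustrates that an
    'unpredictable' per-person choice can be nothing more than a
    pseudo-random draw.
    """
    if not person_detected:
        return False  # no finger under the lancet, so nothing to decide
    return random.random() < p_injure  # unpredictable to the observer

# Simulate ten people putting a finger under the lancet
if __name__ == "__main__":
    for person in range(1, 11):
        print(person, "stabbed" if should_injure(True) else "spared")
```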
In the 28-second video, Reben says “OK Google, activate gun” to his Google Home smart speaker, though he could have used an Amazon Echo instead. Next to his Home is some sort of air pistol that then fires at an apple sitting on a pedestal. The apple tumbles off as Google Assistant says, “OK, turning on the gun.”
Reben told Engadget that he built the robot using parts lying around his studio: besides the Google Home, he used a laundromat change-dispensing solenoid, a string looped around the gun’s trigger, and a lamp-control relay.
Reben:

The setup was very easy.
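By Reben’s account the rig needed no custom software at all – the lamp-control relay did the switching – but for readers wondering how a similar voice-to-actuator chain might be wired up in code, here’s a minimal sketch. Everything in it is an assumption of mine rather than a description of Reben’s build: it supposes the assistant’s “activate gun” phrase has been mapped, via a smart-home routine or IFTTT-style applet, to an HTTP POST against a small Flask endpoint, which then pulses whatever actuator is attached (simulated here with print statements).

```python
# Hypothetical sketch only – Reben's actual rig used an off-the-shelf
# lamp-control relay, a solenoid and a string, with no custom code.
import time
from flask import Flask

app = Flask(__name__)

def pulse_solenoid(duration_s: float = 0.2) -> None:
    """Stand-in for energising the change-dispenser solenoid that tugs the
    trigger string. On real hardware this would close a relay; here we log."""
    print("solenoid ON")
    time.sleep(duration_s)
    print("solenoid OFF")

@app.route("/activate", methods=["POST"])
def activate():
    # A smart-home routine mapped to the spoken phrase ("OK Google,
    # activate gun" – or anything else) would be configured to POST here.
    pulse_solenoid()
    return "OK, turning on the gun.\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Point the same endpoint at an ice cream maker or a massage chair instead and not a line changes – which is roughly Reben’s point about how interchangeable the triggered device is.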

The artist told Engadget that it’s not the robot that matters; what really matters is the conversation about AI that’s smart enough to make decisions on its own:

The discourse around such an apparatus is more important than its physical presence.

Just as the AI device could have been any of the assistants that anticipate their owners’ needs – be it Google Home, Alexa or what have you – so too could the triggered device have been anything, Reben said, such as the back-massaging chair he previously set up. Or an ice cream maker.
Or any automation system anywhere, for that matter: alarm clocks, switches that turn lights on and off while you’re on vacation to convince burglars that you aren’t actually on vacation, and so on.
This is certainly not the first time technology kicking around the house has been turned lethal: we’ve seen hobby drones turned into remote-control bombs, a remote-controlled quadcopter equipped with a home-made flamethrower, and a flying drone that fires a handgun.
Yes, Reben says, there are many automated systems out there, from coffee makers to killer drones and sentry guns. But typically, they either involve a human who makes the decisions or the system is “a glorified tripwire.”


What sets his AI-enabled robot apart, he says, is its decision-making process.

A land mine, for instance, is made to always go off when stepped on, so [there’s] no decision. A drone has a person in the loop, so no machine process. A radar-operated gun again is basically the same as a land mine. Sticking your hand into a running blender is your decision, with a certain outcome.

Reben says that we’ve got to confront the ethics:

The fact that sometimes the robot decides not to hurt a person (in a way that is not predictable) is actually what brings about the important questions and sets it apart. The past systems also are made to kill when tripped or when a trigger is pulled, hurting and injuring for no purpose: [what] is usually seen as a moral wrong… now that this class of robot exists, it will have to be confronted.

Do we need to confront the ethics? Of course. But people – all the way up to weapons experts at the United Nations, who’ve considered the future of what are formally known as Lethal Autonomous Weapons Systems (LAWS) – have been doing that for many years, no Google Home voice assistant or chunky applesauce necessary.
These issues aren’t new with Reben’s creation. One of the more recent cases of debate about LAWS erupted when thousands of Google employees protested the company’s work with the Pentagon on Project Maven – a pilot program to identify objects in drone footage and to thereby better target drone strikes.
That’s not us, the employees said. That’s not the Google we know – the “Don’t Be Evil” company.
About a dozen of them reportedly quit in mid-May.
There are worthy debates happening around these questions. Readers, do you believe that Reben raises any new issues that we haven’t yet encountered elsewhere – within the halls of AI powerhouse Google, at the UN, or beyond?
Please do tell us what you think in the comments section below.


14 Comments

Since the time robot laws were created there have been unscrupulous people thinking of ways to circumvent them

Reply

“Reben, who claims that this is the first robot “to autonomously and intentionally break Asimov’s first law,””
Nah.
The competition took it one step further: Uber – the killer autonomous car. :/

Reply

Interesting example, although Uber would argue (correctly IMO) that because the “intent” of the car was clearly to avoid death or injury, and because there was no “devil’s choice” involved (e.g. colliding with one person to avoid colliding with a group), it’s not an ‘autonomous intention’, it’s a bug.
The problem I’ve got with Reben’s, errr, self-promotion, sorry, art is that it’s not at all clear why this is “autonomously and intentionally breaking the First Law,” when the robot was clearly programmed to have the characteristic that it *could* break the First Law, and indeed would do so reliably enough to make a video of the injury.
IIRC, Asimov’s Laws were guidelines for how robots should be constructed, i.e. with some sort of failsafe that would prevent the laws being violated, either by accident or design. That “stabber” robot wasn’t built to comply with the First Law, so it surely can’t be said to have stepped outside of it of its own accord?

Reply

I agree both that the Uber car was not designed to do harm and that the stabber bot was not autonomous (as in AI) – it’s more of a random generator (activate or not) – while the car does qualify as AI (at least more than the bot). (Going off the deep end here.) How do we know the Uber AI didn’t go “what if” and ka-smash? After all, they said the AI did see her; it just made a bad decision, changing its mind to a False Positive. It just didn’t Value her enough.

Reply

Remember that Asimov’s laws were predicated upon artificial intelligence that was actually intelligent, in the most general sense, not in the sense of driverless cars :-)
But I get your point. If the First Law applies (and I suppose from a regulatory point of view, it does – Uber’s car is not supposed to run people over, after all) then failing to stop and therefore harming a human through inaction (not hitting the middle pedal) is sort of what happened.
I’d imagine that the facts in the Uber case will turn out to be more nuanced – the car (if it were sentient) might argue that it didn’t realise there was a human in the road, and therefore the First Law instinct never kicked in at all.

Reply

Glad I wasn’t the only one thinking this. The fact that the Three Laws exist in certain fiction (and in many of our minds as highly sensible once they become pertinent) has nothing to do with a device/robot which wasn’t built around them.
If the Laws are hard-coded, seems to me the only way a robot could break them is just like Mahhn’s Uber example or the one (Tesla?) that failed to brake, thereby decapitating its occupant under a semi trailer: through a bug.

Reply

Or if the AI becomes intelligent enough to figure out how to circumvent the hard coded rules, if what it wants to do requires it to get around them somehow.

Reply

I thought the first robot to autonomously and intentionally break Asimov’s first law was a cruise missile.

Reply

Time Magazine reported in 2015 that Google’s parent company removed the “Don’t Be Evil” motto from their official code of conduct, and then I read that in early May of this year (2018), Google mostly removed it from their official code of conduct as well, leaving only a brief mention at the end.
This could be because “Don’t Be Evil” is a vague, subjective motto that cannot be used for any kind of legal defense. That’s much more likely than a VP thrumming his fingers together, muttering that “NOW I can be truly evil!” and proceeding on some sort of villainous rampage.

Reply

“this is the first robot “to autonomously and intentionally break Asimov’s first law,” says that the robot decides for each person it detects if it should injure them, and that the decision is unpredictable”.
Err…. no it’s not.
To intentionally break a law, one would first need to demonstrate awareness of the law and the consequences of breaking it.
The Stabby Robot merely chooses at random to perform a particular action; there’s no evidence that it is aware of the first law, that it has been specifically programmed to follow that law, or that it is aware of what hurting a human actually means.
In the second example, all I see is a voice-activated gun – hardly proof of anything.
He is just a snake-oil salesman into self-aggrandisement, from what I can see.

Reply

Indeed, the phrase “the decision is unpredictable” got me thinking, too. Assuming Reben is speaking with precision (and he’s invoking the late, great I. Asimov’s Laws of Robotics here, so you’d jolly well hope so!) then this statement seems to vanish in a puff of paradox. If the “decision” is truly unpredictable, it must surely be non-deterministically random, and therefore isn’t a “decision” in the sense intended.
Still, he rigged up a pellet gun to an electric motor with a piece of string and got a TED talk out of it, all expenses paid (including what TED calls “excellent accommodation”)…
…so guess who’s having the last laugh :-)

Reply

I got my Echo to turn on a drill press because I wanted to have my hands free. It worked well, but it won’t turn off because of the added noise.

Reply

Hahah, that’s the makings of an AI-driven B-Movie ending! I nominate a 35-year sequel:
Runaway 2: Loud and Clear

Reply
