Isaac Asimov’s first law of robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The robotics laws are fictional, notes artist and robot maker Alexander Reben. So, he asked himself, why not make a robot that breaks the first law? He then created a robot that punctures your finger with a lancet, which he demonstrated on video.

After that, he continued his inquiry into robot-human interaction, which has ranged from pleasure (robots that massage your scalp) to intimacy (cardboard robots as cute as baby seals that get people to open up by asking them intimate questions, which people seem quite happy to answer), on up to ethics: robots that could be used to kill people.
A recent video from Reben shows the artist deploying artificial intelligence (AI), in the form of Google Assistant, to shoot an air pistol at an apple.
During the TED talk in which he showed off the intimacy/pleasure/stabby robots, Reben noted that the finger-puncturing robot chose whether to stab somebody in a way that even he couldn’t predict.
Reben, who claims that this is the first robot “to autonomously and intentionally break Asimov’s first law,” says that the robot decides, for each person it detects, whether to injure them – and that its decision is unpredictable.
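Reben hasn’t published the robot’s code, so we can only guess at the internals, but the decision logic he describes could, in spirit, be as simple as the hypothetical sketch below. The function names and the 50/50 odds are our own illustration, not his design:

```python
import random

# Purely illustrative sketch of an "unpredictable" injure/don't-injure
# choice of the kind Reben describes. This is NOT his code; every name
# and number here is a stand-in.

def person_detected() -> bool:
    """Stand-in for a real sensor; pretend a finger is in range."""
    return True

def decide_to_injure() -> bool:
    # A nondeterministic choice: neither the maker nor the subject
    # can know the outcome in advance.
    return random.random() < 0.5

if person_detected():
    if decide_to_injure():
        print("Actuating lancet.")   # the machine 'chooses' harm...
    else:
        print("Standing down.")      # ...or chooses not to
```

The point of the unpredictability is that no human – not even the builder – is making the call at the moment the machine acts.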
In the 28-second video, Reben says “OK Google, activate gun” to his Google Home smart speaker (he could just as easily have used an Amazon Echo). Next to the Home is some sort of air pistol that then fires at an apple sitting on a pedestal. The apple tumbles off as Google Assistant says, “OK, turning on the gun.”
Reben told Engadget that he built the robot using parts lying around his studio: besides the Google Home, he used a laundromat change-dispensing solenoid, a string looped around the gun’s trigger, and a lamp-control relay.
Reben:
The setup was very easy.
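Easy enough, in fact, that a hobbyist could approximate the whole chain: voice command, trigger signal, relay, solenoid, trigger string. Here’s a minimal sketch of one plausible wiring – our assumption, not Reben’s actual build – in which a Raspberry Pi drives the relay and a webhook (say, via a service like IFTTT) is tied to the assistant’s “activate gun” phrase:

```python
# Hypothetical sketch of the trigger chain: a webhook endpoint that
# briefly energizes a relay, which fires the solenoid pulling the
# string looped around the gun's trigger. Pin number and route are
# made up for illustration; Reben's build may have differed entirely.

import time
from flask import Flask          # pip install flask
import RPi.GPIO as GPIO          # Raspberry Pi GPIO library

RELAY_PIN = 17                   # hypothetical BCM pin wired to the relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

app = Flask(__name__)

@app.route("/activate", methods=["POST"])
def activate():
    # Pulse the relay: the solenoid yanks the trigger string, then releases.
    GPIO.output(RELAY_PIN, GPIO.HIGH)
    time.sleep(0.2)
    GPIO.output(RELAY_PIN, GPIO.LOW)
    return "OK, turning on the gun.", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Nothing in that sketch is exotic – which is rather Reben’s point.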
The artist told Engadget that it’s not the robot that matters; what really matters is the conversation about AI that’s smart enough to make decisions on its own:
The discourse around such an apparatus is more important than its physical presence.
Just as the AI device could have been any of the assistants that anticipate their owners’ needs – be it Google Home, Alexa or what have you – so too could the triggered device have been anything, Reben said, such as the back-massaging chair he previously set up. Or an ice cream maker.
Or any automation system anywhere, for that matter: alarm clocks, the switches that turn your lights on and off while you’re on vacation to convince burglars that you’re home, and so on.
This is certainly not the first instance of household tech turned lethal: we’ve seen hobby drones turned into remote-control bombs, a remote-controlled quadcopter fitted with a home-made flamethrower, and a flying drone that fires a handgun.
Yes, Reben says, there are many automated systems out there, be they coffee makers, killer drones or sentry guns. But typically, either a human makes the decisions or the system is “a glorified tripwire.”
What sets his AI-enabled robot apart, he says, is the decision-making process itself.
A land mine, for instance, is made to always go off when stepped on, so [there’s] no decision. A drone has a person in the loop, so no machine process. A radar-operated gun again is basically the same as a land mine. Sticking your hand into a running blender is your decision, with a certain outcome.
Reben says that we’ve got to confront the ethics:
The fact that sometimes the robot decides not to hurt a person (in a way that is not predictable) is actually what brings about the important questions and sets it apart. The past systems also are made to kill when tripped or when a trigger is pulled, hurting and injuring for no purpose: [what] is usually seen as a moral wrong… now that this class of robot exists, it will have to be confronted.
Do we need to confront the ethics? Of course. But people – all the way up to weapons experts at the United Nations, who’ve considered the future of what are formally known as Lethal Autonomous Weapons Systems (LAWS) – have been doing that for many years, no Google Home voice assistant or chunky applesauce necessary.
These issues aren’t new with Reben’s creation. One of the more recent flare-ups in the debate over LAWS came when thousands of Google employees protested the company’s work with the Pentagon on Project Maven – a pilot program to identify objects in drone footage and thereby sharpen the targeting of drone strikes.
That’s not us, the employees said. That’s not the Google we know – the “Don’t Be Evil” company.
About a dozen of them reportedly quit in mid-May.
There are worthy debates happening around these questions. Readers, do you believe that Reben raises any new issues we haven’t already encountered elsewhere – within the halls of AI powerhouse Google, at the UN, or beyond?
Please do tell us what you think in the comments section below.