
UN asks if robots should be allowed to kill humans


Right now our notions of cyber security are largely confined to the virtual world of networks and computers, and the damage that software can do to other software.

Software has also been built into lethal weapons and military machinery for decades now.

As military software becomes more mature and more powerful, it is increasingly trusted to act autonomously, but there is one crucial area of decision making that we’ve not yet ceded: the decision to end human lives.

Experts drawn from signatories of the Convention on Certain Conventional Weapons (CCW) met at the United Nations in Geneva on Monday to consider the future of Lethal Autonomous Weapons Systems (LAWS) before we cross that cyber-Rubicon.

The meeting is the second of its kind and represents a laudable attempt by the CCW member nations to get to grips with an emerging set of lethal technologies before they are widely deployed.

As Michael Møller, Acting Director-General of the United Nations Office at Geneva, pointed out at a similar meeting in May 2014, “All too often international law only responds to atrocities and suffering once it has happened.”

There is, as Jody Williams – Nobel Peace Laureate and co-founder of the Campaign to Stop Killer Robots – noted in her opening briefing, still time to say no:

This is a decision that we as human beings can make. It is a decision that we must make. We must not cede the power of life and death over other human beings to autonomous machines of our own design. It is not too late.

Some of the questions that ‘killer robots’ raise for human rights were neatly captured by UN Secretary General Ban Ki-moon in 2013 when he wrote:

Is it morally acceptable to delegate decisions about the use of lethal force to such systems? If their use results in a war crime or serious human rights violation, who would be legally responsible? If responsibility cannot be determined as required by international law, is it legal or ethical to deploy such systems?

The Campaign to Stop Killer Robots is (as you might imagine) calling for a preemptive ban on LAWS, but so far the UN hasn’t decided to do anything more than talk about them (although the decision to do even that was taken by consensus, which the campaign described as ‘a rare feat’).

If the discussions do ultimately result in a decision to ban LAWS before they become a reality, it would be unusual but not unprecedented: in 1995 the UN adopted a protocol banning blinding laser weapons in similar circumstances, outlawing a weapon before it had ever been deployed.

So it’s not too late to stop battlefield robots that make their own killing decisions from becoming a reality, but time is of the essence.

Remotely controlled or semi-autonomous drones are already a fixture in the world’s high-tech armed forces, and the sheer number and variety of robots spilling out of military labs and agencies like the Defense Advanced Research Projects Agency (DARPA) is testament to where the money is being spent.

The idea that we should actually stop and think about the profound implications of the technology and future we are rushing blindly towards seems to be gaining some traction.

The last few months have seen Oxford University researchers and a succession of STEM hall-of-famers like Stephen Hawking, Bill Gates, Steve Wozniak and Elon Musk warn about the potential dangers of Artificial Intelligence (AI).

Many of them, and many, many more experts from the fields of computer science and AI, also signed an open letter calling for research priorities focused on maximising the societal benefit of AI.

At the same time, Stanford University has kicked off a “100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play”.

One of the Stanford study’s founding committee members, Deirdre K. Mulligan, describes the effort in a way that makes it clear they intend to influence the way AI is developed, not merely study it.

The 100 year study provides an intellectual and practical home for the long-term interdisciplinary research necessary to document, understand, and shape AI to support human flourishing and democratic ideals.

All around the world some very clever people with a slightly better view over the horizon than the rest of us are getting ready to head off our new robot overlords.


Image of MQ-9 Reaper for the U.S. Air Force by Paul Ridgeway is in the public domain.

Comments

I don’t know that it’s so black and white. Full autonomy would prevent the enemy from hacking into our killing machines and using them against us. Remotely controlled killing machines will always be susceptible to such attack, and I don’t know which set of risks I prefer.

This is, on a slightly higher level, the same debate that applies to autonomously driven vehicles, only for self-driving cars there will be an immediate insurance impact (pun intended) so it will be sorted out.
But it WILL happen and so, I suspect, will LAWS, sadly – if not by the UN then by non-signatory (and less ethically bound) countries.

Robert Brewster: Skynet? The virus has infected Skynet?

John Connor: Skynet IS the virus. It’s the reason everything’s falling apart!

Terminator: Skynet has become self aware. In one hour it will initiate a massive nuclear attack on its enemy.

Robert Brewster: What enemy?

John Connor: Us! Humans!

A drone is a tool in the hands of the operator, much like a rifle or shoulder mounted Stinger missile. Ultimately there is very little difference to the target whether his life is ended by the pull of a sniper’s trigger finger or by the movement of a joystick and press of a button.

That’s the whole point.

A drone is a tool in the hands of an operator and the operator, no matter how much she’s assisted by the technology, is in charge of the decision to kill.

A Lethal Autonomous Weapon System makes its own decision to kill.

We, the stupid public, are being bamboozled into thinking this discussion is new.

I put it to you that Lethal Autonomous Weapons Systems have been with us for ages now: the conventional terms used are landmine and sea mine.

So the problems mentioned in the article have already been hashed out ages ago: the AI may be more complicated now, but it is not more autonomous!

Traps like a covered pit with sharp sticks in the bottom have been around a while too. As they trigger based on weight, I guess their AI is at the same level as Lethal Autonomous Weapons Systems like landmines. They’ve been used against humans since at least Julius Caesar.
