Right now, our notions of cyber security are largely confined to the virtual world of networks and computers, and the damage that software can do to other software.
Software has also been embedded in lethal weapons and military machinery for decades. As military software becomes more mature and more powerful it is increasingly trusted to act autonomously, but there is one crucial area of decision-making that we’ve not yet ceded: the decision to end human lives.
Experts drawn from signatories of the Convention on Certain Conventional Weapons (CCW) met at the United Nations in Geneva on Monday to consider the future of Lethal Autonomous Weapons Systems (LAWS) before we cross that cyber-Rubicon.
The meeting is the second of its kind and represents a laudable attempt by the CCW member nations to get to grips with an emerging set of lethal technologies before they are widely deployed.
As Michael Møller, Acting Director-General of the United Nations Office at Geneva, pointed out at a similar meeting in May 2014, “All too often international law only responds to atrocities and suffering once it has happened.”
There is, as Jody Williams – Nobel Peace Laureate and co-founder of the Campaign to Stop Killer Robots – noted in her opening briefing, still time to say no:
This is a decision that we as human beings can make. It is a decision that we must make. We must not cede the power of life and death over other human beings to autonomous machines of our own design. It is not too late.
Some of the questions that ‘killer robots’ raise for human rights were neatly captured by UN Secretary General Ban Ki-moon in 2013 when he wrote:
Is it morally acceptable to delegate decisions about the use of lethal force to such systems? If their use results in a war crime or serious human rights violation, who would be legally responsible? If responsibility cannot be determined as required by international law, is it legal or ethical to deploy such systems?
The Campaign to Stop Killer Robots is (as you might imagine) calling for a preemptive ban on LAWS, but so far the UN hasn’t decided to do anything more than talk about them (although the decision to do even that was taken by consensus, something the campaign described as ‘a rare feat’).
If the discussions do ultimately result in a decision to ban LAWS before they become a reality, it would be unusual but not unprecedented: in 1995 the UN did exactly that when, in similar circumstances, it adopted a protocol banning blinding laser weapons.
So it’s not too late to stop battlefield robots that make their own killing decisions from becoming a reality, but time is of the essence.
Remotely controlled or semi-autonomous drones are already a fixture in the world’s high-tech armed forces, and the sheer number and variety of robots emerging from military research agencies like the Defense Advanced Research Projects Agency (DARPA) is testament to where the money is being spent.
The idea that we should stop and think about the profound implications of the technology, and of the future we are rushing blindly towards, seems to be gaining traction.
The last few months have seen Oxford University researchers and a succession of STEM hall-of-famers like Stephen Hawking, Bill Gates, Steve Wozniak and Elon Musk warn about the potential dangers of Artificial Intelligence (AI).
Many of them, and many, many more experts from the fields of computer science and AI, also signed an open letter calling for research priorities focused on maximising the societal benefit of AI.
At the same time, Stanford University has kicked off a “100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play”.
One of the Stanford study’s founding committee members, Deirdre K. Mulligan, describes the effort in a way that makes it clear they intend to influence the way AI is developed, not merely study it.
The 100 year study provides an intellectual and practical home for the long-term interdisciplinary research necessary to document, understand, and shape AI to support human flourishing and democratic ideals.
All around the world some very clever people with a slightly better view over the horizon than the rest of us are getting ready to head off our new robot overlords.
Image of MQ-9 Reaper for the U.S. Air Force by Paul Ridgeway is in the public domain.