The potential dangers of Artificial Intelligence have attracted a lot of publicity this year with experts seemingly queuing up to tell us how bad it could be.
Up to now, much of the commentary has focused on the far-off dangers that might await us a few generations from now if we make bad decisions today.
The most startling warning has been about the threat of a technological singularity. The singularity is the theoretical point of no return where a general Artificial Intelligence starts to recursively self-improve far beyond our ability to understand or control it.
Concerns about the singularity are very real, but it’s a fuzzy, far-off risk that, for some at least, is easily dismissed because it sounds not unlike a sci-fi movie starring Arnold Schwarzenegger.
There is another, much more concrete part to the conversation, though: the very real, very imminent dangers posed by battlefield robots with enough AI to select and kill their own targets – so-called Lethal Autonomous Weapon Systems (LAWS).
LAWS rely on technology that already exists; they will be with us in years rather than decades, and they could change everything.
A lot of people, not least the Campaign to Stop Killer Robots, would like to see the weapons banned before that can become a reality.
Signatories to the United Nations’ Convention on Certain Conventional Weapons (CCW) have already met to consider the future of LAWS as part of a process that could lead to an eventual ban.
The Future of Life Institute has now lent its support to the cause with an open letter signed by over 1,000 people, including hundreds of AI researchers and well-known names from the fields of science, technology and engineering.
Signatories include the SpaceX and Tesla Motors founder Elon Musk, Professor Stephen Hawking, Apple co-founder Steve Wozniak and Professor Stuart Russell.
Professor Russell is an award-winning AI researcher and author of the leading AI textbook Artificial Intelligence: A Modern Approach, who recently likened the dangers of Artificial Intelligence to the dangers of nuclear weapons.
The letter warns that pushing ahead with LAWS will inevitably lead to an arms race that puts the technology in the hands of every army, terrorist and despot.
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow ... ubiquitous and cheap for all significant military powers to mass-produce.
Some people think that a battlefield populated by robots might actually be good for humanity because a battle between robots would reduce human casualties.
Sadly battlefields don’t often look like that.
Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.
The best answer, the group concludes, is an outright ban.
Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
You might think that trying to ban an entire class of weapon systems is a futile gesture but it’s not nearly as unthinkable as it might seem.
The Biological Weapons Convention bans the production and stockpiling of biological weapons and has been in force since 1975. A similar convention has been in force for chemical weapons since 1997.
In the same year, 162 countries signed the Anti-Personnel Mine Ban Convention, and a year later a protocol to the United Nations’ Convention on Certain Conventional Weapons outlawing blinding laser weapons entered into force.
The protocol was significant because it banned a weapon system that didn’t yet exist but whose arrival many felt was imminent and inevitable, a point noted by the International Committee of the Red Cross:
The prohibition, in advance, of the use of an abhorrent new weapon the production and proliferation of which appeared imminent is an historic step for humanity. It represents the first time since 1868, when the use of exploding bullets was banned, that a weapon of military interest has been banned before its use on the battlefield...
Our new robot overlords are coming. It may be inevitable but that doesn’t mean we have to put out a welcome mat.
Image of X-47B for the U.S. Air Force by Rob Densmore is in the public domain.
jkwilborn
Seems to me, a ban on bio weapons has been violated by the US for decades. Seems a difficult concept, when the ban is world wide and we have them, still…
Jack
Curious...
Examples, please?
4caster
I am unaware of any deployment of biological weapons by the US, let alone the use of them. The US and several other nations have researched defences against biological weapons, but that is a different matter.
Coyote
‘Concerns about the singularity are very real but it’s a fuzzy, far off risk that, for some at least, is easily dismissed because it sounds not unlike a sci-fi movie starring Arnold Schwarzenegger.’
Reality is that many of the things humans take for granted today were once science fiction – it’s just that humans tend to ignore history and don’t really care about details where details actually matter.
I have no comment on the other part other than it being more evidence that mankind has an inherent belief it can do whatever it wants with no remorse and seemingly impunity – including destroying lives.