
Artificial Intelligence expert likens AI dangers to nuclear weapons

Stuart Russell is an award-winning AI researcher and author who says we need to be as careful with AI as we are with nukes.

Stuart Russell is an award-winning Artificial Intelligence (AI) researcher, a Professor of Computer Science at the University of California, Berkeley, and author of the leading AI textbook Artificial Intelligence: A Modern Approach.

In other words, he’s a man who knows a thing or two about AI.

In a recent interview with Science, Professor Russell joined the chorus of experts and technologists who have broken cover this year to warn of the potential dangers of AI research, comparing it to the dangers posed by nuclear technology.

From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy" ... I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.

Both seem wonderful until one thinks of the possible risks.

The fundamental risk, he says, is:

...explicit or implicit value misalignment - AI systems given objectives that don't take into account all the elements that humans care about.

That misalignment could result from any number of scenarios, such as competition between companies or countries seeking a super-technological advantage, or from a less obvious but no less dangerous ‘slow-boiled frog’ evolution that leaves us dependent and enfeebled.

Super-intelligent AI is probably decades away (if it’s coming at all) but Russell and others in the field want us to set our course correctly now, before any trouble starts.

The professor is one of hundreds of AI experts who signed an open letter in January calling for research to “maximize the societal benefit of AI” rather than simply pursue what’s possible.

The letter’s signatories aren’t the only ones to speak up about where unchecked AI might lead us either.

The founder of SpaceX and Tesla Motors, Elon Musk, famously likened AI research to “summoning the demon”, and is funding a number of projects aimed at delivering the kind of AI that Professor Russell wants to see.

Bill Gates is worried too, saying he doesn’t “understand why people are not concerned”.

Professor Stephen Hawking has gone even further, warning that “the development of full artificial intelligence could spell the end of the human race”, sentiments echoed by researchers at Oxford University.

Not everyone agrees, though. Linux creator Linus Torvalds is having none of it, recently describing fears about AI as bad sci-fi:

Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.

It’s an attitude that doesn’t impress the professor.

To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, "Let's hope I run out of gas soon!"

Like Musk and others, Professor Russell wants to see human values and objectives at the centre of the development of AI technology. Students, he urges, should be trained to consider them in the same way that nuclear fusion researchers regard containment.

It’s an attitude we’re going to need sooner than you might think because, while AI that exceeds human intelligence remains on the far horizon, other dangers from AI are all but upon us already.

In April, Professor Russell spoke at the United Nations in Geneva during a meeting considering the future of Lethal Autonomous Weapons Systems (LAWS).

LAWS are robots (likely drones to begin with) that can acquire and destroy targets without human oversight. The technology that enables them to do that is the AI that’s with us already.

The UN has started a process that could result in LAWS being banned before they arrive.

Unlike the attempts to head off the super-intelligent robot overlords of the future, though, the UN doesn’t have much time at all.

Image of robot lady courtesy of Shutterstock.

9 Comments

I think a far more interesting thought is to use AI tech interfacing with a human to create a superior human.

Why? So the NSA can put a back door into that too and have the ability to read your thoughts/control/kill switch…

This whole thing is like debating who’s going to own a square mile of land on Mars.

I agree with Linus. This is a pretty dumb debate. As far as lethal autonomous drones, it’s not like they suddenly appeared out of nowhere and no one is responsible for them killing people. Humans own these weapons, humans program them, and humans operate them. How is this any different than someone releasing a pack of hungry wolves in the middle of New York City? Would people seriously blame the wolves for killing people and not the person who released them?

I think the technical people are bringing this topic up just for the sake of getting funding to do “research”. A lot of non-technical people are lobbying to stop these drones, but the reality of it is that there’s always a human behind a machine, even if the machine “makes its own choices” or even changes its own logic. If someone hacks these devices, guess who’s still responsible: the people who own and operate them.

“How is this any different than someone releasing a pack of hungry wolves in the middle of New York City?”
The debate is about whether it’s a good idea to use autonomous weapons. Since you appear to think it’s a bad idea to release a pack of hungry wolves, it must be that you agree it’s a bad idea to deploy autonomous weapons. After all, DARPA has described its new weapons as “hunting like packs of wolves”.

“I think the technical people are bringing this topic up just for the sake of getting funding to do ‘research’.”
This has to be the weirdest argument of the week. The amount of research funding available for *designing* autonomous weapons is almost infinite. How much research funding do you think the US government provides to those who want to prevent their use? I’ll give you a hint. It begins with 0 and ends with 0.

Laws against LAWS? You gotta be kiddin’!

People really believe that passing a law against some activity prevents it.

That’s delusional. Laws only appear to work because most people voluntarily comply.

Try to imagine this scenario: a honcho at DARPA advises, “We can’t do THAT – THAT would be against the law.” Yeah, sure.

MKULTRA and COINTELPRO broke a whole shopping list of laws. That didn’t stop the CIA and the FBI, though, did it?

