
Artificial Intelligence expert likens AI dangers to nuclear weapons

Stuart Russell is an award-winning Artificial Intelligence (AI) researcher, a Professor of Computer Science at the University of California, Berkeley, and co-author of the leading AI textbook Artificial Intelligence: A Modern Approach.

In other words, he’s a man who knows a thing or two about AI.

In a recent interview with Science, Professor Russell joined the chorus of experts and technologists who have broken cover this year to warn of the potential dangers of AI research, comparing it to the dangers posed by nuclear technology.

From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy" ... I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.

Both seem wonderful until one thinks of the possible risks.

The fundamental risk is, he says:

...explicit or implicit value misalignment - AI systems given objectives that don't take into account all the elements that humans care about.

That misalignment could result from any number of scenarios: competition between companies or countries seeking a super-technological advantage, say, or a less obvious but no less dangerous ‘slow-boiled frog’ evolution that leaves us dependent and enfeebled.
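
To make the idea concrete, here’s a toy sketch in Python of how an objective that omits something humans care about can push a system towards the wrong choice. The actions, numbers and the “harm” weighting are entirely made up for illustration; they’re not from Russell’s work.

    # A minimal sketch of "value misalignment": an agent picks whichever
    # action maximizes its stated objective. If that objective omits
    # something humans care about (here, a hypothetical "harm" cost),
    # the optimal choice changes. All values below are illustrative.

    actions = {
        # action: (benefit the objective measures, harm it ignores)
        "aggressive": (10.0, 8.0),
        "balanced":   (7.0, 1.0),
        "cautious":   (3.0, 0.0),
    }

    def misaligned_score(benefit, harm):
        # The designer only told the system to maximize benefit...
        return benefit

    def aligned_score(benefit, harm):
        # ...but humans also care about avoiding harm.
        # The 2.0 weighting is an assumed trade-off, chosen for the example.
        return benefit - 2.0 * harm

    best_misaligned = max(actions, key=lambda a: misaligned_score(*actions[a]))
    best_aligned = max(actions, key=lambda a: aligned_score(*actions[a]))

    print("Misaligned objective picks:", best_misaligned)        # aggressive
    print("Objective including human values picks:", best_aligned)  # balanced

The point isn’t the arithmetic. It’s that the “aggressive” option only looks best because the harm term was never part of the objective in the first place, and the system has no reason to care about anything it wasn’t told to measure.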

Super-intelligent AI is probably decades away (if it’s coming at all) but Russell and others in the field want us to set our course correctly now, before any trouble starts.

The professor is one of hundreds of AI experts who signed an open letter in January calling for research to “maximize the societal benefit of AI” rather than simply pursue what’s possible.

The letter’s signatories aren’t the only ones speaking up about where unchecked AI might lead us.

The founder of SpaceX and Tesla Motors, Elon Musk, famously likened AI research to “summoning the demon”, and is funding a number of projects aimed at delivering the kind of AI that Professor Russell wants to see.

Bill Gates is worried too, saying that “[I] don’t understand why people are not concerned”.

Professor Stephen Hawking has gone even further, warning that “the development of full artificial intelligence could spell the end of the human race”, sentiments echoed by researchers at Oxford University.

Not everyone agrees though. Linux creator Linus Torvalds is having none of it, recently dismissing fears about AI as bad sci-fi:

Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.

It’s an attitude that doesn’t impress the professor.

To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, "Let's hope I run out of gas soon!"

Like Musk and others, Professor Russell wants to see human values and objectives at the centre of the development of AI technology. Students, he urges, should be trained to consider them in the same way that nuclear fusion researchers regard containment.

It’s an attitude we’re going to need sooner than you might think because, while AI that exceeds human intelligence remains on the far horizon, other dangers from AI are all but with us now.

In April, Professor Russell spoke at the United Nations in Geneva during a meeting considering the future of Lethal Autonomous Weapons Systems (LAWS).

LAWS are robots (likely drones, to begin with) that can acquire and destroy targets without human oversight. The AI they need to do that is with us already.

The UN has started a process that could result in LAWS being banned before they arrive.

Unlike the attempts to head off the super-intelligent robot overlords of the future, though, that process doesn’t have much time at all.

Image of robot lady courtesy of Shutterstock.
