Linux creator Linus Torvalds pooh-poohs fears over Artificial Intelligence

When the final history of humanity is written, 2015 might turn out to be an important year.

Quite how it’s written will depend on who (or what) is writing it, though – particularly if the author is a malevolent Artificial Intelligence of our own creation.

If we get to pen it ourselves, the middle of the 21st century’s second decade might go down as a time when lots of very, very clever people temporarily lost their minds.

2015 in particular could become known as the year when smart people took time out from pondering dark matter, landing on Mars and eradicating malaria to describe work on Artificial Intelligence (AI) as “summoning a demon”.

A demon that might keep us around to use as pets, if we’re lucky.

Or perhaps history will record that the drumbeat of voices from Oxford to Stanford, and the warnings from AI experts and STEM all-stars like Elon Musk, Bill Gates and Stephen Hawking, were the first faint drone of an early warning siren that kept us from the edge of the abyss.

The abyss in question is a theoretical point in the future known as the Technological Singularity where an advanced AI could become smart enough to reprogram itself, triggering an explosive self-improvement that takes it far beyond our control.

Fears about where AI research will lead us have attracted significant support lately, but not everyone is convinced.

Linus Torvalds for one.

Linus, creator of the Linux kernel (the software that forms the beating heart of 2 billion devices, from servers to smartphones), has joined the all-star cast of tech-somebodies with something to say about AI.

The famously blunt Finn declared to readers of Slashdot that everything’s going to be fine – we’ll always be smarter than our dishwashers:

We'll get AI, and it will almost certainly be through something very much like recurrent neural networks...

So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.

The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.

One of the people whose narcotic habits Linus is questioning is surely Elon Musk.

Musk is the billionaire founder of Tesla Motors and SpaceX and he’s on course to be remembered as either the Chicken Little or Joan of Arc of the AI story, depending on how things turn out.

He’s not just had a lot to say about the possible perils of AI; he’s actually funding 37 research teams via the Future of Life Institute (FLI) as part of a program aimed at “keeping AI robust and beneficial”.

Amongst other things, the projects will look into how we can keep the economic impacts of AI beneficial; how we can make AI systems explain their decisions; and how to keep the interests of super-intelligent systems aligned with human values.

The study headed by Heather Roff, which will look at how to keep AI-driven weapons under “meaningful human control”, might be the one we need first.

Whether the Singularity is science fiction or not, the spectre of AI-driven drones that can make life-and-death decisions is upon us now.

The technology already exists – we know we can do it, we just haven’t decided whether we should.

Earlier this year, United Nations experts met for the second time to consider the future of Lethal Autonomous Weapons Systems (LAWS), a meeting that could be the prelude to a pre-emptive ban.

A decision that would make this a very memorable time indeed.


Image of Stylish wired cyber man courtesy of Shutterstock.