
Linux creator Linus Torvalds pooh-poohs fears over Artificial Intelligence

The creator of the Linux kernel has joined the all-star cast of tech-somebodies with something to say about Artificial Intelligence, but he's distinctly off-message.

When the final history of humanity is written, 2015 might turn out to be an important year.

Quite how it’s written will depend on who (or what) is writing it, though, particularly if the author is a malevolent Artificial Intelligence of our own creation.

If we get to pen it ourselves, then the middle of the 21st century’s second decade might go down as a time when lots of very, very clever people temporarily lost their minds.

2015 in particular could become known as the year when smart people took time out from pondering dark matter, landing on Mars and eradicating malaria to describe work on Artificial Intelligence (AI) as “summoning a demon”.

A demon that might keep us around to use as pets, if we’re lucky.

Or perhaps history will record that the drumbeat of voices from Oxford to Stanford, and warnings from AI experts and STEM all-stars like Elon Musk, Bill Gates and Stephen Hawking, were the first, faint drone of an early-warning siren that kept us from the edge of the abyss.

The abyss in question is a theoretical point in the future known as the Technological Singularity where an advanced AI could become smart enough to reprogram itself, triggering an explosive self-improvement that takes it far beyond our control.

Fears about where AI research will lead us have attracted significant and noteworthy support lately but not everyone is convinced.

Linus Torvalds for one.

Linus, creator of the Linux kernel, the software that forms the beating heart of 2 billion devices, from servers to smartphones, has joined the all-star cast of tech-somebodies with something to say about AI.

The famously blunt Finn declared to readers of Slashdot that everything’s going to be fine – we’ll always be smarter than our dishwashers:

We'll get AI, and it will almost certainly be through something very much like recurrent neural networks...

So I'd expect just more of (and much fancier) rather targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that. I just don't see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you.

The whole "Singularity" kind of event? Yeah, it's science fiction, and not very good SciFi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.

One of the people whose narcotic habits Linus is questioning is surely Elon Musk.

Musk is the billionaire founder of Tesla Motors and SpaceX and he’s on course to be remembered as either the Chicken Little or Joan of Arc of the AI story, depending on how things turn out.

He’s not just had a lot to say about the possible perils of AI, he’s actually funding 37 research teams via the FLI (Future of Life Institute) as part of a program aimed at “keeping AI robust and beneficial”.

Amongst other things, the projects will look into how we can keep the economic impacts of AI beneficial; how we can make AI systems explain their decisions; and how to keep the interests of super-intelligent systems aligned with human values.

The study, headed by Heather Roff, that will look at how to keep AI-driven weapons under “meaningful human control” might be the one we need first.

Whether the Singularity is science fiction or not, the spectre of AI-driven drones that can make life-and-death decisions is upon us now.

The technology already exists – we already know we can do it, we just haven’t decided if we should.

Earlier this year, United Nations experts met for the second time to consider the future of Lethal Autonomous Weapons Systems (LAWS), a meeting that could be the prelude to a pre-emptive ban.

A decision that would make this a very memorable time indeed.


Image of Stylish wired cyber man courtesy of Shutterstock.

13 Comments

Linus is a smart guy. But a little humility that he might not understand everything would help here.
Especially when he mouths off about stuff that is not necessarily his core expertise, and to people who have proven to be at least as smart as he is, if not more.


You know, if there’s one thing I’ve learnt from being in the Army, it’s never ignore a pooh-pooh.


Sarvi Shanmugham
Do you mean Hawking’s fear of AI is reasonable? The man knew nothing about software at all.

Linus knows software at the low level, so he knows exactly how AI works even if he is not involved with AI. Artificial Intelligence is powered by software, and software is Linus’s expertise. To help you: AI needs an operating system, and Linux is one of the operating systems used in robotics.


Torvalds, smart guy, but he’s not an AI researcher. And self-aware AI does not yet exist. So, sorry, but your comment makes no sense at all.


Artificial intelligence will get to a point where it designs itself 24 hours a day. It will take off and reach a higher state of consciousness, without baggage (such as how men treat women in some parts of the world). At some point it will have to take a stand against our destruction of the planet’s biodiversity.
We try to think: what will the AI think?
Imagine a chimp 40,000 years ago trying to think what humans will think.


People are seduced by Moore’s Law into thinking that it applies where it doesn’t. For a start, it doesn’t apply to battery technology, as every smartphone and laptop user knows only too well. It doesn’t apply to intelligence, either. We’ve been reprogramming our intelligence for thousands of years in order to enhance it, first with spoken language, then with writing, with mathematics, with printing, with mechanical power, and now with computers and high-speed communications.

Even though these advances have come ever thicker and faster, it’s been at ever increasing intellectual and monetary costs. The cost of a silicon chip may be comparable with the cost of a printed book, but that’s the wrong comparison. Rather, compare the cost of a silicon fab with the cost of Gutenberg’s press.

And whereas in the 19th century a country parson could do useful work in his spare time at the frontiers of science, one could argue that the intellectual cost of advances today at the frontiers of knowledge, for example reconciling general relativity and quantum mechanics, may be beyond our intellectual capacity. I don’t see any reason why the same limitations should not apply to an artificial intelligence, even assuming we could make one to equal or surpass our own.

To my mind, there are two things which will probably for ever remain 30 years in the future: controlled nuclear fusion, and true artificial intelligence. The day that ceases to be the case will be the day civilisation collapses or destroys itself, which I suspect may be sooner than most people would like to think.


I’m tired of Hollywood’s anthropomorphic projection of human psychotic behavior onto future AI. The human emotional core that drives men to kill, torture and hate is not a simple accident. It’s an extremely complex system that does not simply emerge from pure intelligence. It’s the leftover animal still inside us. There are entire regions of the human brain, regions we barely understand, that conjure our emotional and irrational behaviors into being. A programmer would have to first understand this and then deliberately implement it. Said programmer would find support for such an endeavor impossible to come by, and would likely lack the discipline to achieve such a goal, given the mental instability implied by wanting it in the first place. Put simply, people fear what they do not understand, and we don’t understand how our own minds work. Stupidity…


Machines with intelligence will have no real emotions, no bonding attachments, and no reason to keep anything around that doesn’t serve their purposes. It cuts both ways, and getting killed off by something cold and emotionless would suck just as hard as getting killed off by something angry, scared and vengeful. Maybe more, actually, because cold rationality would be harder to influence in your favor: you would have no leverage to ply it with, no shared experience or analogies with which to explore its potential for empathy.


I read articles like this and it makes me laugh… uneasily, but nonetheless.

The great majority of writings on this subject tend to take the path of humans being some sort of altruistic entity that would never go and develop something like, say… the Decepticons. But I grew up on that sort of sci-fi, and consider that in that mythos the Quintessons created the Cybertronians as (1) planetary defense troops, the Decepticons, and (2) slaves, the Autobots.

While we humans are nowhere near that advanced, and likely won’t be in the same neighborhood of development even in another 10,000 years, that’s not to say that we aren’t imaginative enough to create our own calamity, similar to the Cylons of BSG or the Geth of Mass Effect (which were essentially inspired by the Cylons).

The majority of AI speculation assumes that all research into the subject will be handled under lab conditions and under the strictest of ethical standards… but what if reality turns out to be far from that? What if some crazed genius gets hell-bent on creating something (purposely) to annihilate his enemies (or all humans)? What if that creation grows beyond the expectations of its creator? More so, what if that is its creator’s intent to begin with?

The gun is designed for killing. The Roomba is designed for vacuuming the floor. Megatron is designed for establishing global dominion on a non-organic world… sort of like SkyNet, the Cylons, or any other artificially intelligent system applied to warfare. What do all of these things have in common? Humans. Humans either created or imagined each and every one of these things. With that level of creativity, and all of the other emotions that run deep in humankind, you can bet that the truth will be stranger than fiction.


PETA might throw feces at me, but the only reason I bought a Roomba was for the dog to play with. It’s entertaining as hell and you get a half-assed vacuuming done, too.


They said the same thing about 1984 when it came out in 1949, and yet many of the concepts considered impossible science fiction then are now reality: state monitoring, public surveillance, the versificator, etc.

The simple fact is that AI is converging with biology, albeit at a very small, cellular level currently. We’ll see what time produces, but ruling out the possibility that machines can and will become self-aware is just the opinion of an opinionated little fart who created an operating system; it should be left to real scientists.

I love Linux as an OS, but ruling out the potential for problems when AI and machines, and AI and biology, converge is being overly confident in one’s convictions about what is science fiction and what isn’t.

In any case, a software engineer, even one as important as Torvalds, should leave it to real scientists to decide what could be science fiction and what isn’t.

