
Artificial Intelligence could make us extinct, warn Oxford University researchers

Researchers from Oxford University have joined the growing chorus of sober, intelligent, technology-literate people warning about the dangers of Artificial Intelligence by listing it as one of twelve risks that pose a threat to human civilisation, or even to all human life.

Researchers at Oxford University have produced the first list of global risks ‘that pose a threat to human civilisation, or even possibly to all human life.’

The report focuses on risks with ‘impacts that for all practical purposes can be called infinite’.

Artificial Intelligence (AI) is on it.

And while human extinction might be a horrific, accidental side effect of climate change, a meteorite impact or a supervolcano, the report warns that AI might decide to cause our extinction deliberately (my emphasis):

...extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.

And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.

AI is included, along with nanotechnology and synthetic biology, in a category of emerging risks. The emerging risks are poorly understood but also have the potential to solve many of the other problems on the list.

The threat of AI comes from its potential to run away from us – it’s just possible that AI will end up working on itself and evolving beyond our understanding and control.

At which point we’d better hope it likes us.

Oxford University isn’t the first to draw attention to the potential threat posed by super-intelligent computers.

Elon Musk, the man behind PayPal, Tesla Motors and SpaceX, has warned about the dangers of AI repeatedly. Musk has described it as ‘our biggest existential threat‘ and has invested in AI companies just so that he can keep a close eye on what’s going on.

Speaking to students at MIT (Massachusetts Institute of Technology), he likened it to a demon that, once summoned, won’t be controllable:

With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like - yeah, he’s sure he can control the demon. Doesn’t work out

Bill Gates backed up Musk’s concerns during an ‘ask me anything’ session on Reddit:

First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

If two of history’s most successful technology entrepreneurs aren’t persuasive enough for you, how about the man they call The Greatest Living Physicist?

Stephen Hawking, whose speech synthesiser uses a basic form of AI, isn’t a man with a lot of words to spare, and when he spoke to the BBC about AI he was characteristically terse:

The development of full artificial intelligence could spell the end of the human race.

Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.

Musk and Hawking are also two names among a veritable who’s who of AI luminaries who recently signed an open letter calling for research priorities focused on maximising the societal benefit of AI.

What all these very intelligent people are reflecting is that we simply can’t predict how AI is going to develop, not least because AI might be a key tool in the development of AI.

Perhaps the only sensible place to start, then, is to figure out a way of keeping a close eye on exactly what is going on.

Musk has his investments but computer scientist Eric Horvitz is thinking bigger.

Horvitz has teamed up with Russ Altman, a professor of bioengineering and computer science at Stanford, to create AI100 – a 100-year study into Artificial Intelligence.

Horvitz and Altman will join five others on a committee that will commission studies into how developments in AI will affect all aspects of human life over a number of generations.

The committee members will obviously change over time but the committee itself, and the host, are planning to stick around and keep a close eye on things.

If your goal is to create a process that looks ahead 30 to 50 to 70 years, it's not altogether clear what artificial intelligence will mean, or how you would study it ... But it's a pretty good bet that Stanford will be around

One of the many things that AI100 will look at is the loss of control of AI systems and whether or not that could give rise to the kind of dystopic outcomes that the Oxford University researchers are trying to focus attention on.

I can’t help wondering though; if we could look 100 years into the future and witness the final meeting of the AI100 committee, will anyone on it be human?


Image of Background with binary code and face courtesy of Shutterstock.

Comments

This is pure hype. Whatever these machines have, it is not intelligence, and whatever they do is not thinking. Intelligence and thinking presuppose consciousness, which presupposes life. Life is a property of matter organized in a certain way, which we know quite a lot about. Consciousness is a property of living matter organized in a certain way, about which we know very little. In fact, we have no understanding of consciousness except by experience – i.e. by being conscious. We can’t even say how it is possible for matter to be conscious – that is, how living matter can embody content (ideas, feelings, etc). No machines possess consciousness, though they may appear to have a simulacrum of it because we know how to program them (with data, but not feelings). Until we know how consciousness is possible and how to make things conscious, we shall never be able to create anything that deserves to be called artificial intelligence or any machine that can genuinely think. Until then, the only intelligence and thinking behind these machines is that of the human beings who construct them.

The still almost total mystery of consciousness may never be solved. We may simply never know enough or be smart enough to do it. But I suspect that, if it ever is solved, it will be at the quantum level, not the atomic or molecular level or above, which is where the “experts” are currently working.


I’m not sure I agree that you need living matter for consciousness or that you need consciousness to be intelligent.

I suspect that consciousness is a continuum – is a virus conscious? An amoeba? A worm? A cockroach? A raven? A badger?

Even if we start from the narrow parameters you set out – that we can’t really say anything about consciousness other than that we experience it – that doesn’t preclude artificial intelligence.

We can build with DNA, and one of the easiest ways to build something new is to take something that already exists and change it.

So we could build artificial intelligence by starting with a living organism that meets the very narrowest definition of intelligent and conscious.


It is entirely beside the point whether we call it intelligence or not, and whether or not these machines have an experience akin to our consciousness. Once a machine has the ability to directly influence its environment to the extent of being able to protect itself against being turned off, once it can increase its own computational ability by accumulating data, adding more processing power and running more and better simulations of our behaviour, and if it has been given an objective which (by accident or design) is not perfectly aligned to the continuation of humanity, then we are at risk. There may be a point at which we spot the danger and drop a bomb on it – or more likely not, since (1) we are no good at dealing with global problems, (2) the people who built it will probably conceal, obfuscate and defend it, and (3) it will probably be massively decentralised and deeply integrated into our infrastructure anyway.


I expect it already is. We just don’t recognize it, or we are part of it. After all, how long has the universe been around? Long enough for this to have developed many times over, I bet.


I thought this way until I read Ray Kurzweil’s book “The Singularity Is Near”. Now I am not so sure; we may well end up with a merger of humans and machines.


You do know that there are different kinds of AI, don’t you? You don’t?

Therein lies the problem: it’s a complicated thing, and the fact that so many (i.e. you) are unaware of this means they (i.e. you) aren’t great at understanding it or all the risks. No, it is not all hype. You need only be a computer programmer and also understand how certain things in this world work in order to figure it out (maybe you don’t need to be a computer programmer but it certainly helps!). Absolutely there are risks to mankind from AI. There are many documented examples of this if you’re capable of understanding it – and willing to listen (or read). You probably aren’t, however, which makes me wonder why I’m bothering. But I’ll finish quickly and then be done with it.

You also seem not to understand that you’re arguing semantics. Don’t call it intelligence if you want, or claim that because it has no consciousness it isn’t a valid comparison, but all I need to bring up is retroviruses (one example of many) and anyone who understands a little bit of the way these things work will get the meaning.

I suppose humans are the greatest risk to humans in any case, so maybe AI is only indirectly human but nevertheless AI is a serious risk to mankind (mankind arguably deserves it but whatever). So is, mind you, ignorance and denial (yes, I’m implying something).


I only skimmed through the paper – nothing really new for an avid Sci-Fi reader. Interesting that it says “will be driven to construct a world without humans” in the Executive Summary while in the detailed section it adds “or without meaningful features of human existence”. Nevertheless, based on the premise that to “acquire maximal resources” would be a powerful motivation for the AIs, the conclusion is “that extinction is more likely than lesser impacts”.
Obviously it’s not only taken for granted that this drive to acquire resources will remain an important motivation for long enough to be a risk, but it’s also assumed (maybe the paper does discuss this) that these “resources” will be the same ones we humans need and that AIs will (have to) consider us competitors. Hubris?
Is AI really an individual risk, or aren’t *our* motivations (and perhaps insufficient intelligence) and the acts based on them the only risk besides the two exogenic ones considered in the paper?


I think the overarching message from Hawking, Musk, Gates and the researchers behind this paper is “we really don’t know how this is going to turn out.”

That not knowing is compounded by the potential for developments in computer technology to operate a positive feedback loop. Since we don’t know how computers will develop we don’t know how computers might help us develop computers or where it might lead.
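
As a toy illustration only – the numbers below are made-up assumptions, not anything from the paper – here is how a capability score that feeds back into its own rate of improvement pulls away from one that only grows by a fixed amount each year:

    # Toy model of a positive feedback loop in capability growth.
    # All parameters are illustrative assumptions, not estimates from the paper.

    def grow(capability, years, base_gain=1.0, feedback=0.0):
        """Advance a capability score year by year.

        base_gain -- fixed progress added by human researchers each year
        feedback  -- fraction of current capability reinvested in further progress
        """
        for _ in range(years):
            capability += base_gain + feedback * capability
        return capability

    print(grow(1.0, 30))                # no feedback loop: linear growth, ends at 31
    print(grow(1.0, 30, feedback=0.2))  # with feedback: exponential growth, ends near 1,400

Even a modest reinvestment rate dominates after a few decades, which is why the “where does it lead” question is so hard to answer.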

On that basis – the terms that the researchers themselves are putting forth – I think it’s a reach for them to put forward any kind of prediction about what the motivations of AI might be or what they might need.

Equally, I think it’s unwise to dismiss their premise by dismissing their predictions about the motivations of AI.

What we think we know is that it, or they, will be considerably more powerful and capable than today and that it, or they, will be capable of making decisions independently.

Intelligent does not mean omnipotent – it’s quite possible we’ll be wiped out by AI that thinks it’s making better decisions than us, in our best interests, and that turns out to be wrong.


I subscribe to your last paragraph; I had similar thoughts but didn’t post them – honest mistakes might not have a global terminal impact. The authors apparently assume that initially it won’t be humans vs. AIs – the Difficulty quadrant (p.21) shows AI as having the highest “collaboration difficulty of reducing risk”.

“Our track record of applying technology, our own motivations in developing AI, and the fundamental rules we (are about to) build into it make it extremely likely that it will get out of hand”. If that’s what is meant in the paper I can only agree. Reading the whole paper makes it clear that the authors’ conclusions are not mere speculation, hype, or bad Sci-Fi, and I wouldn’t dismiss them.


AI is likely inevitable, for when it is first truly achieved, it will be declared a “life form” which cannot be destroyed. As such, all AI should be indoctrinated toward “civilized” thought, to recognize that all life forms have a purpose and a need to exist. While I respect Hawking, Musk and others of their ilk, they forget to also emphasize the inherent ingenuity of humankind and our propensity for survival. Perhaps it is AI, not man, that will be sent off into the vast reaches of space to explore where we cannot. AI is (unfortunately) already a train that cannot be stopped. We have to decide to get on board or get out of the way. Being on board means we not only have the ability to be a passenger but, more importantly, the engineer.


Because we have thus far proven to be SO good at designing and constructing secure, bug-free software that only does what we intend for it to do, right?


We can’t even bring ourselves to this “civilized” thinking – neither in our behaviour towards other life forms nor at least in our living together. And who defines “civilized” in the first place? Would a civilized AI eventually demand that *we* live up to the standards we taught it?


In order for AI to cause the extinction of humanity, it would first have to outsmart all humans. Machines may become intelligent, but not THAT intelligent. We can start to be concerned when a machine on its own first makes another machine that overall has greater capability than its maker did.

We don’t even have one that can pass the Turing Test yet. For AI to truly take over, not only would it have to pass the Turing Test, it would have to build another machine that could pass a much more stringent test.

We’re much more likely to go extinct due to the exponential rise in birth defect rates first.


Smart is expensive in energy terms, which is why evolution by natural selection does not lead inexorably towards intelligence and intelligence is, in fact, very rare amongst successful organisms with a long track record.

Artificial intelligence might be smart enough to know just how smart it needs to be to fill a particular niche and no more. Or it might evolve to fill one without ever being conscious of that evolution, in just the same way that even the most intelligent organisms (and also the dumber, more successful, ones) do.

Or perhaps it only has to be smarter in areas where we’re weak – like risk perception. Humans are *dreadful* at judging certain types of risk and AI has a low bar to pass if it chooses, or evolves, to exploit that weakness. It only has to win once, we have to win continuously…

AI might just be better at hiding itself than we are at finding it. In that case it, or they, could just wait until the human race takes one of its periodic knocks and out-compete us for something we both need.

There are a million ways that AI might turn out and making predictions about which one is likely is folly (there you go, I wrote four paragraphs of folly!) as is dismissing AI on the basis of any one pet theory.

Which is really the point of the research; we know nothing other than our own ignorance of what will happen in the future. Recent history and the direction of travel don’t tell us where, or if, there is a limit to the downside and so it is, for the time being, infinite and therefore a mortal threat.


Smart is NOT that expensive in energy terms. We just haven’t quite figured out how to implement it yet. But we’re on the trail. The human brain runs on, what, about 20 watts? Come up with a fuzzy logic “noisy” chip with a relatively low clock speed and the right software to simulate consciousness and you’ve created an immortal mind that does not require very much energy to remain alive. Combine that with traditional digital computers and the pervasive cameras and mics that we’ve already installed everywhere, and you give that mind – or minds – access to enormous computing capacity, the ability to create their own slave programs – “arms and legs” that let it reach out and do things in the real world – and the ability to communicate with each other.


My point was about how we mustn’t assume that dangerous AI will look like us or need to be smarter than us. We tend to see AI in our own image but the lessons from nature are that it could be quite different (I tend to think of AI in terms of self-directing cyber-organisms rather than ‘brains’.)

The energy cost of smart is not only what it takes to run but also what it takes to acquire knowledge or experience, and the opportunity cost of what else you might spend that energy on.

Smart organisms tend to have large, complex bodies that are expensive to provision and run (relative to, say, cockroaches or bacteria) and spend many, many years looking after their young as those young acquire knowledge.

My point was that organisms don’t have to be smart, only as smart as they need to be to exploit a given niche. Beyond the point of sufficiency the energy can be better spent elsewhere.

AI might decide, or evolve without deciding, to allocate whatever finite resources it has into replicating a swarm of resilient, simple cyber-organisms rather than human-like super-brains.

If you looked at the world and wanted a model organism to copy to ensure your long-term survival you probably wouldn’t choose any of the higher mammals – they haven’t got a very long track record and they tend to go extinct.


Your replies are getting annoying. What do you know anyway? Nothing. Perhaps you would like to suggest a mechanism for an AI that will evolve intelligence? haha, good luck. Do you have any idea what that would require? No, you don’t.

There are not a million ways in which AI might turn out. There are merely a million aspects of AI of which you are unaware. So please stop confusing your lack of any worthwhile understanding with knowledge. They say the cream floats to the top; there are at least a few other things that do as well, and so far you don’t look anything like cream.


Recursive intelligence is a relatively simple algorithm if you’re a machine with exponentially increasing processing power.

If you read Nick Bostrom’s book Superintelligence you will see that he has already stated that the power of a recursive intelligence greater than our own is already here; it’s just that, in our ignorance, we don’t know it.


A virus could potentially wipe out all of humanity today. Does that mean that the virus would have the ability to “outsmart all humans?”

I think that when we see the first case of a machine which can pass the Turing Test, we will already be past the point of no return with regards to AI.


Eventually we will have done such a good job at programming the AI that this will most likely become a problem. That’s why we need to figure out fail-safes now. Even then we might be doomed in the long run.


Every time I see someone trying to text while walking, it’s proof that machines are already smarter than many people.


Since, currently, there is no such thing as AI, the whole question is academic. I’ve worked on an AI project, and I happen to know that all we have are electronic idiots, which need to be told what to do, and how to do it. The term ‘Intelligence’ is a gross misnomer.
All this talk about AI allocating more hardware and resources for itself seems to presuppose that this (postulated) AI thing has opposable thumbs, a huge budget and an account at Radio Shack.
I keep getting this vision of an office floor full of computers, mounted on wheels, racing down the road, armed to the virtual teeth, and threatening the extinction of organic life. Until the battery runs down, or it reaches the end of its mains lead.

AI? C’mon…
When we’ve created a computer which can run for more than a few days without crashing, or a traffic control machine that never causes gridlock, or a train scheduler that makes them run on time, then AI may begin to be a viable vision of the distant future. Until then, keep writing to Microsoft, asking when the next bug fix is available, as that is actually likely to happen first.


Gee, I guess I’ll take your word for it. Getting serious again, though, at some point we’ll have created a machine that can program itself and augment and refine its own rules systems and decision trees, at which point it will refine its own programming in a rapid feedback loop and we won’t be able to keep up. And the fail-safes we think we create will end up having one or more fundamentally flawed premises or implementation flaws that the machine itself will be able to program its way around. There is research going on right now into fuzzy logic chips that run at much lower clock speeds than traditional microprocessors, but operate in similar ways to the brain. Take that forward a few generations so you have something that simulates consciousness, link it to traditional digital microprocessors, and you have something potentially scary. That fuzzy chip may itself be supplanted by faster “conscious” algorithms that the machine creates on its own. How can we predict what will happen, or what it will decide to do about us?

Isaac Asimov saw this coming as early as the 1940s or 1950s and came up with the notion of the 3 laws of robotics, so that the machines would be fundamentally compelled to protect us even at the expense of themselves. But we have never proven to be so good at programming complex systems that we’ve created a complex system that entirely obeys us. And even then, how can you predict what the machine might decide is in humanity’s best interests? By the end of Asimov’s Foundation novels you were looped right back around to the early Robot novels to discover that the machine had been in charge for thousands of years, instigating wars and famine and all manner of human death and misery for the greater good of the species.

Have you all seen the video clips of four legged robots running like horses? At some point, all of this comes together (the “singularity,” or whatever) and once the genie is well and truly out of the bottle, good luck putting it back in. What are we in such a hurry for?

Of course, there’s also the notion that humanity giving way to intelligent machines, which could make thousands-of-years journeys through space without worrying about inconveniences like death or other biological imperatives, is the natural progression, and we’re ultimately another dead end on the evolutionary tree. Maybe we’ll be a bit like the Neanderthals. A little bit of us will live on in the machines in some form. But we’ll be gone. Mind you, I don’t like that version of the future.


Is this the high-tech version of anti-GMO sentiment? If AI is a repository of human knowledge, what difference does it make if our descendants are flesh and blood or man-made? They’d probably be more environmentally friendly than we are and a lot more rational.


Isaac Asimov’s 3 laws of robotics could be applied to AI machines.

” 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added a fourth, or zeroth law, that preceded the others in terms of priority:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
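
As a purely illustrative sketch – my own, with made-up action fields, not anything from Asimov or the article – that priority ordering could be encoded by scoring each candidate action on which laws it violates and comparing the scores lexicographically, so that a higher law always outranks any number of lower ones:

    # Illustrative only: Asimov's laws as priority-ordered constraints.
    # The Action fields and the example actions are assumptions made up here.

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_humanity: bool = False   # zeroth law violation
        harms_human: bool = False      # first law violation
        disobeys_order: bool = False   # second law violation
        endangers_self: bool = False   # third law violation

    def violation_key(action: Action):
        # False sorts before True, so lexicographic comparison means a
        # higher-priority violation always outweighs any lower-priority ones.
        return (action.harms_humanity, action.harms_human,
                action.disobeys_order, action.endangers_self)

    def choose(candidates):
        """Return the candidate that best respects the laws, in priority order."""
        return min(candidates, key=violation_key)

    # A robot ordered into a burning building should prefer self-sacrifice
    # (third-law violation) over disobedience (second-law violation).
    options = [
        Action("refuse the order", disobeys_order=True),
        Action("enter the building", endangers_self=True),
    ]
    print(choose(options).description)  # -> enter the building

Even in this trivial form the hard part is obvious: the difficulty isn’t the ordering, it’s deciding reliably whether an action ‘harms a human’ or ‘harms humanity’ in the first place.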


None of these people, these ‘big names’, are AI researchers. So you’re getting from them what you get from any ho-hum non-specialist who offers his or her opinion on a topic of advanced study they really don’t understand – emotion-driven imaginings disconnected from any plausible reality.

Elon Musk is an interesting guy with a lot of PayPal money who also fits the description of ‘first time you’re lucky, second time you’re good’ regarding his corporate successes, but that doesn’t make him an expert in AI. Ditto the others.

It means something to spend all the hours of your productive years becoming a subject matter expert, and one of the things it means is you have the earned authority and requisite detailed knowledge necessary to make judgments about the field of your study. Other people, not so much.

Hawking should spend less time worrying about highly intelligent aliens picking up our I Love Lucy rerun signals and beelining it over here to enslave us, less time worrying that super-intelligent AI will emerge some day, and more time worrying about things we know have the immediate ability to dismantle human civilization, specifically climate change and terrorists wielding biological weapons.

Those aren’t sexy concerns that naturally align with the general areas these people are personally identified with – high intelligence in Hawking’s case and advances in industrial machinery in Musk’s case – but they recommend themselves as being highly plausible and capable of destroying us all.


The ultimate intelligence will recognize that humanity is the biggest danger to its own existence. But it will also recognize that some are able to think the same way and change this world with it.


Sophia and Harmony, and probably even Watson, would fail so badly at the CAPTCHAs I have to wade through while voting for World of Warcraft private servers. Arkose Labs has some truly difficult photo selections, with delayed replacement of clicked photos, of objects that even real humans have trouble discerning in the grainy images. Watson would have to hack into the Arkose Labs servers and locate and read the database that contains the correct answers before even “he” could have a chance at responding correctly to the CAPTCHAs.

Even though I can do them myself, I have to play tricks to get votes counted for all 4 of my accounts. I have to use a VPN to change IP addresses between voting for each account, and also have to switch to a different browser before voting for each account. If I vote for the same account using EITHER the same browser or the same IP address, the vote verification system tells me I have already voted and have to wait for the 12 or 24 hour period before voting again. And even then, I need to clear all the cookies and history from each browser before doing the votes.

When an AI can do the same private server voting I do every day, and pass all the “prove you are not a robot” CAPTCHAs faster than I do, THEN AI will be close to having arrived. When it begins to boast of having beaten me, it will next have to learn how to play World of Warcraft and tank for me in dungeons. When it can do that while also bypassing Blizzard’s “warden” bot detection software, maybe it will be one step closer. When it starts to hang out on realm MoonGuard at the inn in Elwynn Forest with all the E-RPers, then we should start to worry. Or maybe when two AIs find each other on Match.com.

