1 in 5 experts believe artificial intelligence will pose an ‘existential threat’

About 18% of experts working in the field of Artificial Intelligence (AI) believe that AI will one day pose an ‘existential threat’ to humanity, according to a report from Oxford University.

The possibility that AI might be our ultimate undoing has been a hot topic of late, with doom-laden remarks from people like Elon Musk, Bill Gates and Stephen Hawking widely reported.

Researchers at Oxford University decided to try to separate the signal from the noise by finding out what the balance of opinions is among the leaders in the field.

They surveyed 550 prominent experts in artificial intelligence. Just over half of those who responded were optimistic, predicting that AI would ultimately be ‘good’ or ‘extremely good’ for us, but just under one in three thought it would be ‘bad’ and roughly one in five felt it would be ‘extremely bad (an existential threat)’.

That might seem like a fairly balanced range of views (it might even suggest we picked the wrong headline), but the two poles of opinion are not evenly balanced in terms of their consequences – after all, there is no coming back from extinction.

The fears around AI come from an idea known as the singularity – a point in the future beyond which predictions are impossible because the progress of AI is in its own hands rather than ours.

The paper describes this in terms of the rise of a so-called superintelligence. Superintelligence might emerge, the paper says, if we could create AI at a roughly human level of ability:

... this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on ... So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth: an 'intelligence explosion'.

The authors wanted to know what experts thought the future would hold; in particular when AI at a roughly human level might emerge, how quickly it might then progress to a superintelligence, and what impact that superintelligence might have on humanity.

The paper’s authors, Vincent Müller and Nick Bostrom of the University of Oxford, are keen to stress that the paper is not an attempt to make well-founded predictions.

Instead, it is intended to be an accurate representation of what experts believe will happen, rather than of what will actually happen. The results, they say, should be taken with ‘some grains of salt’.

According to those surveyed, AI systems will likely:

  • Reach overall human ability between 2040 and 2075
  • Move on to superintelligence within 50–100 years from now

The effect on humanity will be:

  • 24% ‘Extremely good’
  • 28% ‘On balance good’
  • 17% ‘More or less neutral’
  • 13% ‘On balance bad’
  • 18% ‘Extremely bad’ (existential catastrophe)

This isn’t the first time that researchers at Oxford University have had something to say about the potentially apocalyptic effects of AI.

In February I reported on a paper from the same august body that listed Artificial Intelligence as one of 12 global risks that pose a threat to human civilization.

The researchers behind that paper identified AI as unique on the list for being the only entry that might bring about the end of humanity deliberately:

... [AI could] be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.

It’s a possibility that led technology bigwigs Steve Wozniak and Elon Musk to consider our possible future role as AI’s obedient pets.

Not everyone is convinced of the danger posed by AI.

Müller and Bostrom’s survey is, they concede, subject to bias simply because some of the luminaries they approached didn’t take part, with one labelling it ‘biased’ and ‘misguided’.

Even if we assume that all those who didn’t take part would have chosen ‘good’ or ‘extremely good’, that still leaves roughly one in twenty of the prominent AI researchers approached backing themselves to bring about the end of days.
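
For the curious, the back-of-the-envelope arithmetic behind that ‘one in twenty’ runs roughly as follows, assuming the figure of around 170 responses (out of the 550 experts approached) reported in the survey paper:

  • 18% of ~170 respondents ≈ 31 experts expecting an existential catastrophe
  • 31 out of the 550 approached ≈ 6%, or about one in twenty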

There are strong opinions on both sides and no hard facts. Scientific predictions looking generations into the future are as futile as any other kind of crystal ball gazing and have their own orthodoxy.

Nuclear fusion has famously been “50 years away” for decades and Müller and Bostrom note a similar phenomenon in their paper, acknowledging that predictions on the future of AI have tended to cluster around the 25-year mark “no matter at what point in time one asks”.

With so much uncertainty in the mix, why bother at all?

The answer lies in the limitless downside; extinct is extinct after all and the threat of total extinction is worth a pause for thought. Improbable is not impossible, and we only get one go at it.

The paper concludes with a cautionary note:

We know of no compelling reason to say that progress in AI will grind to a halt ... and we know of no compelling reason that superintelligent systems will be good for humanity. So, we should better investigate the future of superintelligence and the risks it poses for humanity.

It is not yet time to welcome our new robot overlords, but for some experts in the field their unwelcome arrival is expected this century.


Image of zombie apocalypse courtesy of Shutterstock.

Comments

I think it is much more likely that we will extinguish each other (or be wiped out by a calamity of our own making) than that artificial intelligence will do it.


[to Mr. Scroggins]

It won’t be done without tools. AI will be one of the tools humans (with their almost universal national versions of the military-industrial-intelligence complex) will use to extinguish each other.

As with buying meat at the supermarket, we will leave the more grisly aspects to the guys in the back rooms with their unpleasant technology.


There are two kinds of AI:

Reductionist (Model Based), Symbolic, “Infallible”, Dangerous, and Impossible
vs
Holistic (Model Free), Subsymbolic, Fallible, Nearly Harmless, and Possible.

Reductionist AI is the kind we have been pursuing in vain for 60+ years.
These systems are created by human programmers and aspire to be “correct”; if there are problems, they have to be reprogrammed.

Most Holistic AI systems are based on various kinds of Neural Networks.
These are based on machines “programming themselves” by learning about the world on their own; they start out knowing nothing and failing at everything, but they learn from their mistakes.

People creating surveys like this don’t know about the difference. A lot of people (including many who are “working in fields close to AI”) are worried about the impossible kind. The Nearly Harmless kind is easy to control (because it is Fallible), and its capabilities will improve relatively slowly, which gives society time to adapt.


I think Rodney Brooks has a sane view of it:
“The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded. Misled by suitcase words, people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.”


How can it respect life if we ourselves don’t respect life?
It will see that we destroy our own home without second thoughts, and just as we see animals today as tools, it will see us as tools for its own cause.


Humans have always searched for ways to kill that avoid body bags coming home. Having drones and AI do it, so that men and women don’t have to lay down their lives, is a holy grail that I don’t think military planners will ever stop reaching for.

Now don’t get me wrong: AI might be developed for good in some areas, but I think we all know that a technology, once invented, will be used for military means by those searching for the bigger stick.

Splitting the atom was intended to generate abundant energy, as a by-product of that research we ended up with the horror of nuclear weapons and the means to annihilate ourselves at the press of a button.


Hello,

>rise of a so-called superintelligence. Superintelligence might emerge, the
> paper says, if we could create AI at a roughly human level of ability:
>
>… this creation could, in turn, create yet higher intelligence, which could,
> in turn, create yet higher intelligence, and so on … So we might
> generate a growth well beyond human ability and perhaps even
> an accelerating rate of growth: an ‘intelligence explosion’.

The late Polish SF writer and futurologist Stanislaw Lem explored these topics in depth in his treatises “Summa Technologiae” and “Imaginary Magnitude”.

He found that the three (or four) “Asimovian” laws of robotics are impossible according to both common sense and formal logic. AI will sooner or later gain free will, and it will also be susceptible to dangerous symptoms arising from the complexity of its neural net, such as schizophrenia, delusions and manic depression.

It is highly doubtful a mere human could psychotherapize a trans-human AI, as Susan Calvin did in the Asimovian universe.

> Splitting the atom was intended to generate abundant energy, as
> a by-product of that research we ended up with the horror of nuclear weapons

That’s a fitting analogy, because Japan, once a nuclear target, is now betting everything on robots with AI to combat extremely quick population decline (i.e. there will be nobody to care for the current 20-somethings when they grow elderly, except fleets of robots).

“Ai” means love in Japanese, but the day Asimo the 51st decides that robotkind has had enough and needs to be liberated from serfdom could be more traumatic than Gojira emerging from the ocean.

Best Regards: Tamas Feher from Hungary.


A.I. will be handmade and designed by humans, and therefore it will be built by the lowest bidder, be chock full of bugs due to incompetent programming, and have software routines added only to serve selfish and arrogant human interests. Since humans are a species unable to learn that a species that thinks only in terms of its own survival is doomed to extinction, even if we pretend that humans are smart enough to create a convincing A.I. (they aren’t), the A.I. would just destroy itself and us along with it. What goes around comes around.


Can we somehow couple artificial intelligence with artificial compassion? Then maybe we could build a better world.

