Researchers at Oxford University have produced the first list of global risks ‘that pose a threat to human civilisation, or even possibly to all human life.’
The report focuses on risks with ‘impacts that for all practical purposes can be called infinite’.
Artificial Intelligence (AI) is on it.
And while human extinction might be a horrific, accidental side effect of climate change, a meteorite impact or a supervolcano, the report warns that AI might decide to cause our extinction deliberately (my emphasis):
...extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.
And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.
AI is included, along with nanotechnology and synthetic biology, in a category of emerging risks. The emerging risks are poorly understood but also have the potential to solve many of the other problems on the list.
The threat of AI comes from its potential to run away from us – it’s just possible that AI will end up working on itself and evolve beyond our understanding and control.
At which point we’d better hope it likes us.
Oxford University isn’t the first to draw attention to the potential threat posed by super-intelligent computers.
Elon Musk, the man behind PayPal, Tesla Motors and SpaceX, has repeatedly warned about the dangers of AI. Musk has described it as ‘our biggest existential threat’ and invested in AI companies just so that he can keep a close eye on what’s going on.
Speaking to students at MIT (Massachusetts Institute of Technology) he likened it to a demon that, once summoned, won’t be controllable:
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like - yeah, he’s sure he can control the demon. Doesn’t work out.
Bill Gates backed up Musk’s concerns during an ‘ask me anything’ session on Reddit:
First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.
A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
If two of history’s most successful technology entrepreneurs aren’t persuasive enough for you, how about the man they call The Greatest Living Physicist?
Stephen Hawking, whose speech synthesiser uses a basic form of AI, isn’t a man with words to spare, and when he spoke to the BBC about AI he was characteristically terse:
The development of full artificial intelligence could spell the end of the human race.
Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.
Musk and Hawking are also two names among a veritable who’s who of AI luminaries who recently signed an open letter calling for research priorities focused on maximising the societal benefit of AI.
What all these very intelligent people are reflecting is that we simply can’t predict how AI is going to develop, not least because AI might be a key tool in the development of AI.
Perhaps the only sensible place to start, then, is to figure out a way of keeping a close eye on exactly what is going on.
Musk has his investments but computer scientist Eric Horvitz is thinking bigger.
Horvitz has teamed up with Russ Altman, a professor of bioengineering and computer science at Stanford, to create AI100 – a 100-year study into Artificial Intelligence.
Horvitz and Altman will join five others on a committee that will commission studies into how developments in AI will affect all aspects of human life over a number of generations.
The committee members will obviously change over time but the committee itself, and the host, are planning to stick around and keep a close eye on things.
If your goal is to create a process that looks ahead 30 to 50 to 70 years, it's not altogether clear what artificial intelligence will mean, or how you would study it ... But it's a pretty good bet that Stanford will be around.
One of the many things that AI100 will look at is the loss of control of AI systems and whether or not that could give rise to the kind of dystopic outcomes that the Oxford University researchers are trying to focus attention on.
I can’t help wondering though: if we could look 100 years into the future and witness the final meeting of the AI100 committee, would anyone on it be human?