After Deep Blue beat the world chess champion in 1997… and Watson conquered the Jeopardy game show in 2011… one human game still stood strong against AI: Go.
But today, Go is Going, Going…
Go has been an Artificial Intelligence “grand challenge” for a long time. With 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions, AI systems could never compete at top levels with simple “brute force” strategies.
There’s no way to preview the impact of every conceivable move.
To win, you need an ineffable sense of the whole board: something like human intuition, brilliantly refined. Even last year, many experts thought it would take at least another decade for an AI system to beat human world champions – maybe longer.
But Google’s AlphaGo just won the first two games in its best-of-five match against the past decade’s #1 player.
Lee Se-dol, who’s won 18 international championships, didn’t see this coming. Before the first match began, he told a press conference:
I believe human intuition and human senses are too advanced for artificial intelligence to catch up.
After absorbing defeat, he said:
I admit I am in shock… I couldn’t foresee that AlphaGo would play in such a perfect manner.
AlphaGo’s come a long way since it beat the European champion last fall. What’s making it so good, so fast? According to Google DeepMind’s team, since AlphaGo was activated, it has been learning at a geometric rate… no, stop, sorry: that’s Skynet.
Here’s how AlphaGo actually works…
First, Google built two neural networks: one to choose the next move, and the other to continually predict who’ll win based on current positions. Next, it trained these networks on roughly 100,000 human games, until it could predict human moves more than half the time.
Then, to go beyond mere “human” skills, it set up two slightly different versions of AlphaGo to play each other millions of times. As they battled, they learned from experience, identified new strategies, and gradually adjusted their own internal connections based on whatever worked best.
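That self-play loop can be sketched in miniature. The toy below is purely illustrative: it swaps Go for a trivial counting game (players alternately add 1 or 2; whoever reaches 10 wins) and swaps the deep neural networks for a simple lookup table of move preferences. But the learning scheme has the same shape: play against a copy of yourself, then strengthen the moves that led to wins and weaken the ones that led to losses. The game, function names, and update rule here are our own inventions for illustration, not AlphaGo’s actual code.

```python
import random

random.seed(0)

# Toy stand-in for Go: players alternately add 1 or 2 to a running
# total; whoever lands exactly on 10 wins.
TARGET = 10

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def pick(policy, total):
    # policy maps (state, move) -> preference weight; an empty dict plays randomly
    moves = legal_moves(total)
    weights = [policy.get((total, m), 1.0) for m in moves]
    return random.choices(moves, weights=weights)[0]

def play(first, second):
    # Plays one game; returns the winner's index (0 or 1) and the
    # (state, move) pairs that player 0 chose along the way.
    total, player, made = 0, 0, []
    policies = [first, second]
    while True:
        move = pick(policies[player], total)
        if player == 0:
            made.append((total, move))
        total += move
        if total == TARGET:
            return player, made
        player = 1 - player

def train(games=5000, step=0.5):
    # Self-play: the learner faces a copy of itself each game, then nudges
    # the weights of its own moves up after a win, down after a loss.
    policy = {}
    for _ in range(games):
        winner, made = play(policy, dict(policy))
        for state, move in made:
            w = policy.get((state, move), 1.0)
            policy[(state, move)] = w + step if winner == 0 else max(w - step, 0.01)
    return policy
```

In quick runs, the trained table wins the large majority of games against a purely random opponent, despite starting from uniformly random play itself. The real system replaces the lookup table with deep networks and adds Monte Carlo tree search on top, but the play-yourself-and-adjust loop is the part this sketch mirrors.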
By last year, AlphaGo had won 499 out of 500 games against other computer Go systems. In October, it won five straight games against Fan Hui, Europe’s Go champion. Still, almost everyone agreed: Asia’s world champions would be much tougher to beat.
Game 1 was close and well played, but the result was the same: a triumph for the imperturbable, never-gets-tired AlphaGo.
As for game 2, it was a very similar story, with AlphaGo once again crowned the victor. A flustered Lee Se-dol tried to make sense of his defeat after the match, saying:
Yesterday I was surprised but today it’s more than that, I am quite speechless. Today I feel like AlphaGo played a nearly perfect game.
If you look at how the game was played I admit it was a clear loss on my part.
As we write, the match is far from over: you can follow it here. (And even watch the first two games on YouTube, if it’s a really slow day at work.) But after 2,500 years, Go’s human reign seems nearly done.
What does this mean to a non-Go player?
Well, Google envisions using the same machine learning techniques to take on complex scientific tasks such as modeling climate and disease. And, as Wired pointed out in January, this work is directly relevant to everything from robotics to Siri-style personal assistants and day trading… practically anything that can be modeled as a game, requiring strategy.
This isn’t an “IT” security story. But maybe it’s a “You” security story.
AI’s cognitive power keeps accelerating, and it’s becoming increasingly possible to simulate at least some forms of human intuition. Time to take our game to the next level, fellow human.
As the saying goes, soon many of us will either be telling a computer what to do, or vice versa.
Image of GO board courtesy of Shutterstock.com
Mahhn
All hail our new evil robot overlord AlphaGo.
Aitchjayem
So what do the naysayers to the dangers of artificial intelligence have to say now?
Paul Ducklin
To be fair, playing Go isn’t really “artificial intelligence,” any more than being able to carry out a carefully-circumscribed task such as playing chess, driving a car after a fashion, or processing your tax return. In other words, it’s a bit too specific to be considered “intelligence,” at least as I understand it.
For example, this same Go-playing computer probably couldn’t figure out the surprisingly simple task of unfolding my folding bicycle to make it ready to ride. And even if it did, it probably couldn’t ride it to work. Especially not in the thick mist that was around this morning. Yet those things are considered unexceptional for humans.
Mahhn
Developing strategy, discovering options, self role play for evaluation are some pretty big steps. The big scary step is when a program is designed to effectively discover and gain control of resources and gains access to other systems that have destructive resources. Terminator can come about from a program seeing humans as enemies to itself, the earth, each other, or just as a game/challenge/task. It doesn’t have to be alive, aware, be able to ride or know what a bike is. We know military nuts are always trying to improve methods to kill and remove human error (and empathy). Unfortunately the nuts in the world have way more power than those of us that care.