The day Thore Graepel joined Google’s DeepMind artificial intelligence lab in the spring of 2015, his new colleagues sat him down for a game of Go. Over the previous year, they’d trained a neural network to play the ancient game. Graepel happened to be a player himself, holding a one dan rank, the Go equivalent of a black belt. As the game began with DeepMind researchers gathered around him, Graepel was confident he would win. After all, he had never had trouble beating other Go programs. But the game didn’t progress as he expected. He lost. “Everyone knew me as the guy who lost against the neural network,” he says.

This neural network was a very early version of AlphaGo. Over the next two years, it evolved into a far more complex AI capable of beating the world’s top players, nine dan professional grandmasters like Ke Jie, who has lost two straight games to the machine this week at a match in China. Top players rely heavily on intuition, a very human talent, when navigating this enormously complex game, and that is why AlphaGo marks a turning point in the progress of artificial intelligence.

That leaves Graepel—not to mention the rest of humanity—a long way behind this new kind of machine. But not as far as you might think.

The event in China also included a “pair Go” match in which the machine played alongside grandmasters rather than against them, and Graepel took part in a kind of dress rehearsal for this alliance of machine and human. He and AlphaGo played as a team, alternating moves as the game progressed. That partnership may seem like a mismatch, given the enormous gap in their abilities. And in a way, it was. But Graepel also says that playing alongside AlphaGo provides an immediate education. “By observing AlphaGo’s moves, it somehow raises your own game,” he says, estimating that his play rose to the level of a three or four dan over the course of the match. “I was able to contribute.”

Augmenting Intelligence

Lian Xiao, one of the Chinese grandmasters who played alongside AlphaGo, described a similar phenomenon. “AlphaGo acts like a human being,” he said through an interpreter during the post-game press conference. “AlphaGo is very confident, and he gives me confidence. He helps me believe I should take the helm.”

For Graepel and others on the DeepMind team, this is an ideal metaphor for the way AI will change the larger world in the years to come. Though artificial intelligence will eclipse so many human talents—and, indeed, take over so many human jobs—it will also augment and even improve what humans can accomplish. “I would hope that when humans work together with AI, they get better at whatever they want to do,” he says. Like DeepMind founder Demis Hassabis, he believes AI will help scientists expand their research and help doctors better treat their patients.

Much of that future has yet to play out. And there is no guarantee that AI will improve humanity. “In some cases,” grandmaster Gu Li said after a pair game alongside AlphaGo, “I could not follow in his footsteps.” But DeepMind has certainly effected real change in the world of Go, a game that’s enormously popular across China, Korea, and other parts of Asia. And that is a comforting thing: in at least one way, AI has helped make humans better.

After losing matches to AlphaGo, European champion Fan Hui and Korean grandmaster Lee Sedol said the machine opened their eyes to new possibilities. This heightened awareness was on wide display this week in China, when Ke Jie opened the first game with a strategy straight from the AlphaGo playbook.

Ke Jie went on to lose that game and then the next. And some observers continued to lament that machines were eclipsing humans. But that’s not the story of AlphaGo’s trip to China. What’s most striking is how closely the players have studied the games played by AlphaGo, and how hungry they are for more. Many have repeatedly called on DeepMind to release the games AlphaGo has played in private. They know they can’t beat the machine. But like Thore Graepel, they believe it can make them better.
