
Google A.I. Computer Takes Second Game From Human Champion

South Korean Go player Lee Sedol reviews the match after resigning, giving Google's artificial intelligence program AlphaGo a two-game lead in their five-game series in Seoul, South Korea, Thursday.
Lee Jin-man
For the second time in as many days, Go champion Lee Sedol fished out one of the playing stones he'd captured from his opponent and placed it back on the board, admitting defeat against the computer program AlphaGo, which now has a 2-0 lead in their best-of-five series.

Lee and AlphaGo will now take a one-day break before resuming the match Saturday. To win the series, Lee must take all three remaining games.

"Yesterday I was surprised, but today, more than that, I'm quite speechless," Lee said after the match, according to Go Game Guru. Lee, a multiple winner of world titles, then added, "... there was not a moment in time where I felt that I was leading the game."

When asked later about AlphaGo's weaknesses in the board game, in which players compete for territory, Lee answered that he had yet to find any.

This game lacked some of the hard-hitting exchanges of the first, with several analysts agreeing that in Round 2, Lee took a more cautious approach while AlphaGo showed more creativity than on Wednesday. Both players used some of their allotted overtime after exhausting their two-hour main time in the second game, which lasted 211 moves, compared with 186 in Game 1.

At one point, an unusual decision by AlphaGo, to place a stone near one of Lee's white pieces where little action seemed to be taking place, "made human champion Lee leave his seat to take a break, or likely to pull himself together after the unconventional move," the Korea Herald reports.

Michael Redmond, an elite Go player who's commenting on the games for the Google DeepMind channel on YouTube, said:

"I was impressed with AlphaGo's play. There was a great beauty to the opening. Based on what I had seen from its other games, AlphaGo was always strong in the end and middle game, but that was extended to the beginning game this time. It was a beautiful, innovative game."

The challenge between Lee and AlphaGo carries a prize of around $1 million, and it has spurred interest in Go, a game that was developed in China centuries ago and is now played by tens of millions of people.

Here's how the American Go Association describes the philosophy behind the game, which has fewer rules than chess but far more possible arrangements of pieces on the board:

"There is no simple procedure to turn a clear lead into a victory — only continued good play. The game rewards patience and balance over aggression and greed; the balance of influence and territory may shift many times in the course of a game, and a strong player must be prepared to be flexible but resolute. Go thinking seems more lateral than linear, less dependent on logical deduction, and more reliant on a 'feel' for the stones, a 'sense' of shape, a gestalt perception of the game."
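The claim that Go admits far more positions than chess despite its simpler rules is easy to check with a back-of-the-envelope bound. The sketch below assumes the standard 19x19 board; each of the 361 points can be empty, black, or white, so 3**361 is a crude upper bound on board configurations (the number of *legal* positions is smaller, roughly 2.1 x 10^170, but still astronomically larger than common chess state-space estimates of around 10^47).

```python
# Crude upper bound on 19x19 Go board configurations: each of the
# 361 points is empty, black, or white.
go_upper_bound = 3 ** (19 * 19)

# A commonly cited order-of-magnitude estimate for chess positions.
chess_estimate = 10 ** 47

# The Go bound has about 173 decimal digits...
print(len(str(go_upper_bound)))

# ...and dwarfs even the cube of the chess estimate.
print(go_upper_bound > chess_estimate ** 3)
```

Exhaustive search is therefore hopeless for Go, which is why AlphaGo relies on learned evaluation rather than brute force.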

Google DeepMind CEO Demis Hassabis, whose company created AlphaGo, called Thursday's game "excruciatingly nerve-wracking" — and he also said that more than 100 million people worldwide had watched the first match online.

As for AlphaGo, as NPR's Geoff Brumfiel reported earlier this year:

"AlphaGo is programmed using so-called deep-neural networks, which are inspired by biological brains. The networks have millions of neuron-like connections that AlphaGo can rearrange as it plays. In essence, the program reprograms itself in order to 'learn' the optimum strategy. Similar networks have proven remarkably effective in recent years at learning tasks such as recognizing objects in photos."
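The "rearranging connections" idea in that description can be illustrated with a toy example. The sketch below trains a single artificial neuron to fit y = 2x by gradient descent; AlphaGo's actual networks are vastly larger (millions of weights, trained on Go positions, not a line), so this shows only the underlying weight-update principle, not its architecture.

```python
import random

random.seed(0)
w = random.random()   # one "connection strength," initialized randomly
lr = 0.1              # learning rate: how far each correction moves w

# Toy training data for the target function y = 2x.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

for _ in range(100):                      # repeated play/feedback loop
    for x, target in data:
        pred = w * x                      # the neuron's output
        grad = 2 * (pred - target) * x    # gradient of squared error w.r.t. w
        w -= lr * grad                    # adjust the connection strength

print(round(w, 3))  # converges near 2.0
```

The network "reprograms itself" in exactly this sense: errors on training examples are fed back to adjust the connection weights until the outputs improve.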

Copyright 2021 NPR.

Bill Chappell is a writer and editor on the News Desk in the heart of NPR's newsroom in Washington, D.C.