South Korea's Lee Sedol is ranked fifth in the world at Go, an ancient board game that relies on a player's intuition to surround and capture an opponent's stones on a grid. Yet he's just been schooled by a relative newbie. Google's 2-year-old computer program AlphaGo, built at artificial intelligence lab DeepMind, defeated Sedol, 33, in the first of five scheduled matches in Seoul on Wednesday, leaving him "in shock," reports NBC News. Though Go has long been thought too complicated for a computer to master—"it is primarily a game about intuition rather than the brute-force calculation used in chess," DeepMind's CEO explains—AlphaGo uses reinforcement learning: it studied 100,000 human matches, then learned to seek out the best moves, reports NPR. It defeated the 633rd-ranked Go player in October and has since improved by playing millions of games against itself.
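The self-play idea mentioned above can be sketched in miniature. The toy below is an illustrative assumption, not AlphaGo's actual system (which combined deep neural networks with Monte Carlo tree search): it trains a tabular agent on the simple game of Nim (take 1–3 stones from a pile; whoever takes the last stone wins) purely by having it play against itself and averaging the outcomes of each move it tried.

```python
import random

def train(episodes=20000, pile=10, eps=0.2):
    """Self-play training on Nim: both sides share one value table Q,
    which estimates the expected outcome (+1 win, -1 loss) for the
    player about to move when `stones` remain and they take `move`."""
    Q = {}   # (stones, move) -> running average outcome
    N = {}   # (stones, move) -> visit count, for incremental averaging
    for _ in range(episodes):
        stones = pile
        history = []  # moves played this game, in order
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < eps:          # explore occasionally
                move = random.choice(moves)
            else:                              # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins; walking the game
        # backwards, the reward alternates sign between the two players.
        reward = 1.0
        for state, move in reversed(history):
            key = (state, move)
            N[key] = N.get(key, 0) + 1
            Q[key] = Q.get(key, 0.0) + (reward - Q.get(key, 0.0)) / N[key]
            reward = -reward
    return Q

def best_move(Q, stones):
    """Greedy move from the learned table."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))
```

After training, the agent rediscovers Nim's known winning strategy (leave your opponent a multiple of four stones) without ever being told it, which is the same principle at work in AlphaGo's self-play, scaled up enormously.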
Throughout the 3.5-hour match Wednesday, viewers were impressed to see AlphaGo playing much like a human, per Wired. The computer matched Sedol's pace and often went on the offensive. It also moved to reinforce weak groups of stones, just as a top player would, says AlphaGo's operator, who physically moved the game pieces. At the same time, AlphaGo made moves "that could not have been possible for a human being to choose," Sedol says. Despite the loss, humans shouldn't fear a future of robot overlords, Eric Schmidt of Alphabet (formerly Google) tells ZDNet. "As artificial intelligence and machine learning develop further, each and every person will become smarter and more talented." A second match is set for Thursday. (Read more artificial intelligence stories.)