
Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


Has AI Cracked Human Intuition? Google's AlphaGo vs Lee Sedol

Mirko Bernacchi

On the surface, it appears to be among the simplest of games. An austere board marked with a 19 x 19 grid, where two players take turns placing black or white pieces. The rules are brutally simple: capture enemy pieces by surrounding them with your own; capture the most territory to win. But the game of Go has a 2,500-year history, 1,000 years longer than chess, and is a far more complex challenge than that younger game. Until recently, its upper ranks were considered beyond the reach of computers. A machine that could challenge the best players was seen as the holy grail of game-playing artificial intelligence (AI).

 

Yet, last month, a computer program, Google's AlphaGo, trounced one of the world's strongest players, Lee Sedol of South Korea, winning four games in their five-game match.


 

It was an unexpected triumph. Back in 1997, when IBM's Deep Blue became the first computer to defeat a human world chess champion in a match, Go-playing computers were still a joke amongst serious players. A professional Go player could sit back and give the computer an enormous 20- or 30-move head start, and, when the human finally deigned to start playing, they would still crush their hapless electronic opponent. Even last year, by which time a mobile phone chess program could outclass the human chess world champion, pundits were still confidently forecasting that Go's top levels would remain human territory for another decade.

 

There was solid logic behind these predictions. The bastion of chess had fallen to brute force processing power. Computers had become so quick that they could calculate the outcome of almost every possible series of moves, many turns ahead. Go, however, could not be overcome so easily.

 

Despite its simpler rules, the ancient game is vastly more complicated than chess, partly because the five-times-bigger playing area means the number of moves available on each succeeding turn is at least an order of magnitude larger. Exhaustively checking every possibility several moves ahead soon results in a combinatorial explosion. There are more ways to play a game of Go than there are atoms in the universe.
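The scale of that explosion can be sketched with back-of-envelope arithmetic. The branching factors and game lengths below are commonly cited approximations, not figures from this article: chess offers roughly 35 legal moves per position over a game of about 80 plies, while Go offers roughly 250 over about 150 plies.

```python
import math

# Approximate, commonly cited figures (illustrative assumptions):
# average legal moves per position, and typical game length in plies.
CHESS_BRANCHING, CHESS_PLIES = 35, 80
GO_BRANCHING, GO_PLIES = 250, 150

# Game-tree size grows as branching ** plies; compare orders of magnitude
# via logarithms rather than materializing the enormous integers.
chess_exp = CHESS_PLIES * math.log10(CHESS_BRANCHING)
go_exp = GO_PLIES * math.log10(GO_BRANCHING)

print(f"chess game tree ~ 10^{chess_exp:.0f}")  # ~10^123
print(f"go game tree    ~ 10^{go_exp:.0f}")     # ~10^360
# For comparison, the observable universe holds roughly 10^80 atoms.
```

Even with generous rounding, the Go tree dwarfs the chess tree by hundreds of orders of magnitude, which is why depth-limited exhaustive search never had a chance.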

 

With so many combinations to explore, the brute force approach simply doesn’t work.  But there is a way forward. After all, a human chess or Go player certainly doesn’t laboriously consider every possible series of moves. Instead, we use intuition, built up through experience. A practiced Go player can almost instantly recognize broad patterns that are similar to those they've seen before, focus on areas and strategies that experience tells them are promising, and react with moves informed by tactics that have worked in the past. Computers needed to adopt a more human approach, to develop something like intuition.

 

What we're seeing with AlphaGo is not so much progress in hardware, as progress in software. This is where the most significant advances are being made. Modern game playing machines are not always faster than their predecessors, but they are much smarter.

 

To develop a more intuitive machine, AlphaGo's creators first trained a neural network, eventually leaving it to play millions of games of Go against itself. Essentially, AlphaGo was learning from experience, much as a human does. It wasn't quite as simple as that, of course. One of the key breakthroughs had been made by another researcher a few years earlier with the application of the Monte Carlo method to Go: the Monte Carlo tree search algorithm offers a way of discovering good moves without having to exhaustively search every single possibility. The AlphaGo team combined this technique with Google's powerful distributed hardware and their own neural network expertise.
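To give a flavor of the tree-search half of that combination, here is a minimal UCT-style Monte Carlo tree search sketch. It plays a toy Nim-like game (take 1 to 3 stones; whoever takes the last stone wins) rather than Go, and the node structure, exploration constant, and iteration count are illustrative choices, not AlphaGo's: AlphaGo additionally guides the selection and evaluation steps with its policy and value networks.

```python
import math
import random

class Node:
    """One node in the search tree for a toy Nim-like game."""
    def __init__(self, state, parent=None, move=None):
        self.state = state      # stones remaining; it is someone's turn here
        self.parent = parent
        self.move = move        # the move that led into this node
        self.children = []
        self.visits = 0
        self.wins = 0.0         # wins for the player who moved INTO this node

def legal_moves(state):
    return [m for m in (1, 2, 3) if m <= state]

def rollout(state):
    """Random playout; True if the player to move at `state` takes the last stone."""
    player = 0
    while state > 0:
        state -= random.choice(legal_moves(state))
        if state == 0:
            return player == 0
        player ^= 1
    return False  # no stones on entry: the player to move has already lost

def uct_child(node, c=1.4):
    """Pick the child balancing exploitation (win rate) and exploration."""
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iters=4000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: walk down while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_child(node)
        # 2. Expansion: add one untried move, if any remain.
        tried = {ch.move for ch in node.children}
        untried = [m for m in legal_moves(node.state) if m not in tried]
        if untried:
            node.children.append(Node(node.state - random.choice(untried),
                                      parent=node))
            node.children[-1].move = node.state - node.children[-1].state
            node = node.children[-1]
        # 3. Simulation: estimate the position with one random playout.
        result = 1.0 if rollout(node.state) else 0.0
        # 4. Backpropagation: flip the perspective at every level.
        while node:
            node.visits += 1
            node.wins += 1.0 - result
            result = 1.0 - result
            node = node.parent
    # The most-visited move is the most trusted one.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(5))
```

From 5 stones, taking 1 leaves the opponent a losing 4-stone position, and the search settles on that move; no position is ever evaluated exhaustively, only sampled.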

 

It's informative to see how AlphaGo's performance scales up as more hardware is added—or how it fails to scale up. AlphaGo can run on a single CPU and 8 GPUs. Doubling the number of CPUs to two increases playing strength by 10%. But increasing the number of CPU cores from one to 64, and the number of GPUs from eight to 280, yields an increase of less than 50% in playing strength. As more hardware is added, a pattern of diminishing returns is apparent, and the performance graph trends towards a plateau.

 

In fact, even modern chess programs have backed away from Deep Blue's brute force approach. They use heuristic rules to focus on the most promising possibilities, rather than trying everything, so they need to examine 'only' tens of millions of positions per second.
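That idea of pruning away unpromising branches is classically captured by alpha-beta search, a staple of modern chess engines. The sketch below runs it on a tiny hand-built tree whose leaf scores are purely hypothetical; real engines apply the same cutoff logic to positions scored by a heuristic evaluation function.

```python
from math import inf

def alphabeta(node, alpha, beta, maximizing, stats):
    """node is a nested list; int leaves are static evaluation scores."""
    if isinstance(node, int):
        stats["leaves"] += 1          # count how many leaves we actually score
        return node
    if maximizing:
        best = -inf
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False, stats))
            alpha = max(alpha, best)
            if alpha >= beta:         # opponent already has a better option
                break                 # prune the rest of this branch
        return best
    best = inf
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True, stats))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# A tiny hand-built game tree (hypothetical scores); root is the maximizer.
tree = [[3, 5], [6, [9, 8]], [1, 2]]
stats = {"leaves": 0}
best = alphabeta(tree, -inf, inf, True, stats)
print(best, stats["leaves"])  # best score 6; only 5 of the 7 leaves scored
```

Even on this seven-leaf toy, two leaves are never examined; at real search depths the same cutoffs discard the vast majority of the tree, which is how engines get away with 'only' tens of millions of positions per second.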

 

Does AlphaGo’s success mean computers could soon be replacing human intuition in many other areas, not only board games? Maybe not, because although a game of Go is complicated, the physical location of each piece at any moment is simple and unambiguous—an ideal input for a neural network. In AI terms, Go offers both ‘perfect information’ and a deterministic future. The real world is not so easy to read, and not so predictable.





Mirko Bernacchi joined the Italian branch of Mouser Electronics in Assago in 2012 as a Technical Support Specialist. With more than 25 years of experience in electronics, Mirko provides expert technical assistance and support as well as customer service for our Italian office. He previously worked as a test development engineer at Celestica and at Service for Electronic Manufacturing. At IBM he was a burn-in test engineer for memory modules and a test engineer for optical transceiver cards, responsible for installing new test equipment, managing production test problems, and interfacing with suppliers, as well as introducing new test routines.

