AlphaGo wins 4-1

AlphaGo won the final game of the series on Tuesday and finished the match 4-1. It was another exciting game, with AG recovering from a mistake early on.

All in all, a truly impressive performance! We are in the early days of the evolution of learning architectures that can be somewhat general purpose.

Here is a link to Demis Hassabis writing about the match on Blogspot.

Awesome Lee!

Lee Sedol came back strong today with a nice win over AG. The game started off similarly to Game 2, and it appears that Lee Sedol made a brilliant move on the 78th turn that complicated the position and to which AG did not come up with a good response. I expect this game will be studied quite a bit by the DeepMind team. It will be interesting to see what the program learns from this game, and how. Did Lee actually find a weakness in the program that cannot be fixed in short order? The next game is on Tuesday.

Summary at http://www.theverge.com/2016/3/13/11184328/alphago-deepmind-go-match-4-result

and here is the video with commentary on YouTube:

Congrats AG!!!

Wow! DeepMind's AlphaGo wins the third game as well, making it 3-0 against the legendary world champion Lee Sedol! Great job by David Silver and team. What's next, Demis? From the Nature paper, it appears that some simple yet key features were explicitly programmed in, alongside the more general deep nets and reinforcement learning. It is not clear how much domain knowledge went into the MCTS. Can they get rid of even that domain knowledge?

How about a robot playing golf with the top pros?

For more on Game 3 of AlphaGo vs Lee Sedol, see https://gogameguru.com/alphago-shows-true-strength-3rd-victory-lee-sedol/

Game 2 also goes to AlphaGo!!

Wow! AlphaGo beat the legendary champion Lee Sedol in Game 2, following up on the stunning win in Game 1 yesterday. This one went to overtime, with each player getting one minute per move after the stipulated 2.5 hours per player. Interestingly enough, Google said the program was confident of victory halfway through the game, while no expert was clear on who was winning!

See http://www.theverge.com/2016/3/10/11191184/lee-sedol-alphago-go-deepmind-google-match-2-result and the video at https://www.youtube.com/watch?v=l-GsfyVCBu0

One more win and the tournament victory goes to AlphaGo!

AlphaGo vs Lee Sedol – Round 1 to the machine

So there! AlphaGo wins the first game against Lee Sedol, the legendary 9-dan Go world champion. History has been made. The world champion said that an early mistake stayed with him till the end and may have cost him the game. He also said that he was surprised by some of the moves made by the program – moves he could not imagine a human ever making.

You can see the full three-and-a-half-hour game, with expert commentary, here: https://www.youtube.com/watch?v=vFr3K2DORc8

You can also play through the game and read commentary at https://gogameguru.com/alphago-defeats-lee-sedol-game-1/

 

Go and the man-machine match


Source: deepmind.com

A major step forward in game AI came a month or so ago with DeepMind's result published in Nature! The machine beat the European Go champion, Fan Hui, 5-0! Looking forward to the match with the legendary Lee Sedol starting tomorrow!

Go is a beautiful, complex board game. It has resisted the efforts of AI practitioners for quite a while now, but there have been some good advances made with clever tree-search techniques – notably MCTS, or Monte Carlo Tree Search. A simple way to talk about the complexity is the typical number of legal moves to explore at each step (the branching factor) – much bigger than in chess. But probably just as important, it seems harder to articulate why an expert's good move is good – seemingly more intuitive than analytical.
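To put rough numbers on that, here is a quick back-of-the-envelope comparison. The figures below (an average of roughly 35 legal moves over about 80 moves for chess, versus roughly 250 legal moves over about 150 moves for Go) are the commonly quoted approximations rather than numbers from this post, and the snippet is only meant to illustrate the scale.

```python
from math import log10

# Commonly quoted rough averages: (branching factor b, typical game length d).
games = {"chess": (35, 80), "Go": (250, 150)}

for name, (b, d) in games.items():
    # The naive game tree has about b**d positions; report its order of magnitude.
    print(f"{name}: ~10^{d * log10(b):.0f} positions in the naive game tree")
```

That works out to roughly 10^124 positions for chess versus roughly 10^360 for Go, which is why brute-force lookahead is hopeless here and earlier programs leaned on MCTS instead.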

From my reading of the Nature publication (Silver, D. et al. Nature 529, 484–489 (2016)), DeepMind's program uses a combination of at least four key techniques, plus elegant ways to combine them (a rough sketch of one such combination follows the list):

  1. Monte Carlo Tree Search (already used by the top Go programs)
  2. Deep Neural Nets (to learn from a database of human games)
  3. Deep Reinforcement Learning (to learn from games the program plays against itself)
  4. Domain Knowledge – some basic features were explicitly coded in, e.g. whether a move results in a ladder capture or escape; "sensibleness" – whether a move fills up one of the player's own eyes; the number of opponent stones a move would capture; the number of the player's own stones that would be captured; and so on. Nuances of the tree search may also be construed as domain knowledge, even if they are not as explicit as the basic features listed.
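To make the "elegant ways to combine them" a little more concrete, below is a minimal, self-contained sketch of one such combination: using the policy network's output as a prior to guide the tree search. Everything here (the class, the method names, the move labels, the exact PUCT-style formula) is my own illustration under that assumption, not DeepMind's code; it only mirrors the paper's general pattern of selecting moves by a value estimate Q plus an exploration bonus that is proportional to the prior and shrinks with visit count.

```python
import math
from dataclasses import dataclass, field


@dataclass
class SearchNode:
    """One board position in the search tree: per-move priors, visit counts, value sums."""
    prior: dict            # move -> probability suggested by the policy network
    visits: dict = field(default_factory=dict)
    value_sum: dict = field(default_factory=dict)

    def select_move(self, c_puct: float = 1.0) -> str:
        """Pick the move maximizing Q + U, a PUCT-style selection rule.

        U is largest for moves the policy net rates highly but that have few
        visits, so the prior focuses exploration; as visits accumulate, the
        averaged value Q dominates the choice.
        """
        total = sum(self.visits.get(m, 0) for m in self.prior)
        best_move, best_score = None, float("-inf")
        for move, p in self.prior.items():
            n = self.visits.get(move, 0)
            q = self.value_sum.get(move, 0.0) / n if n else 0.0
            u = c_puct * p * math.sqrt(total + 1) / (1 + n)  # +1 keeps the bonus nonzero at the start
            if q + u > best_score:
                best_move, best_score = move, q + u
        return best_move

    def backup(self, move: str, value: float) -> None:
        """Record one simulation result (e.g. a value-net estimate or a rollout outcome)."""
        self.visits[move] = self.visits.get(move, 0) + 1
        self.value_sum[move] = self.value_sum.get(move, 0.0) + value


# Toy usage: priors from a hypothetical policy net over three candidate moves.
root = SearchNode(prior={"D4": 0.6, "Q16": 0.3, "K10": 0.1})
for outcome in (1.0, 0.0, 1.0, 1.0, 0.0):  # stand-in simulation results
    move = root.select_move()
    root.backup(move, outcome)
print(root.visits)
```

The design point is that the prior concentrates the simulations on a handful of plausible moves, taming the branching factor discussed earlier, while the averaged value (from a value network and/or rollouts) gradually takes over as visits accumulate.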

Additionally, the best version of the program uses a considerable amount of hardware: nearly 300 GPUs and a couple of thousand CPUs. Also see http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html

There are many interesting blog posts discussing the AlphaGo vs Fan Hui series. Gary Marcus has one at https://backchannel.com/has-deepmind-really-passed-go-adc85e256bec. Another interesting piece, with some careful critical analysis, is at http://www.milesbrundage.com/blog-posts/alphago-and-ai-progress.

What is most interesting to me here is the extent of domain knowledge about Go that needs to be coded into the program. What concepts does the program implicitly learn by playing itself or from the vast database of human games? What might be a way to get the program to learn the concepts behind ladders or sensibleness on its own?

Who do we think will win the match – Lee Sedol or AlphaGo? I am not an expert at Go, so I won't hazard a guess. Reddit has a nice summary of what some expert Go players feel are the strengths and weaknesses of AlphaGo: https://www.reddit.com/r/MachineLearning/comments/43fl90/synopsis_of_top_go_professionals_analysis_of/. Either way, the program that beat Fan Hui has had several months to improve – both from further machine learning and from improvements to the code itself, in addition to potentially more powerful hardware.

I hope for a good fight. Who are you rooting for?