Here's a recent talk by DeepMind CEO Demis Hassabis, whose comments start @5:30 min. The content of this talk is suitable for a non-technical audience.
@39 min: AlphaGo value / policy nets, and tree search.
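The division of labor between the two networks and the tree search can be sketched in miniature. Everything below is a toy stand-in, not AlphaGo's actual method: the uniform policy, the random value function, and the one-ply tree are placeholders for the trained networks and the full MCTS, but the PUCT-style selection rule (prior-weighted exploration plus averaged value estimates) is the core idea:

```python
import math
import random

random.seed(0)

# Toy stand-ins for AlphaGo's networks (assumptions, not the real models):
# the policy net assigns prior probabilities to candidate moves; the value
# net scores a position from the current player's perspective in [-1, 1].
def policy_net(state, moves):
    return {m: 1.0 / len(moves) for m in moves}  # uniform prior

def value_net(state):
    return random.uniform(-1, 1)  # random placeholder evaluation

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy net
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)

    def q(self):                # mean value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def select(children, c_puct=1.0):
    # PUCT: maximize Q + U, where U favors high-prior, rarely visited moves.
    total = sum(ch.visits for ch in children.values())
    return max(
        children.items(),
        key=lambda kv: kv[1].q()
        + c_puct * kv[1].prior * math.sqrt(total + 1) / (1 + kv[1].visits),
    )

def search(state, legal_moves, simulations=100):
    children = {m: Node(prior=p)
                for m, p in policy_net(state, legal_moves).items()}
    for _ in range(simulations):
        move, child = select(children)
        # Evaluate the leaf with the value net instead of a full rollout;
        # the real system blends value-net output with rollout results.
        v = value_net((state, move))
        child.visits += 1
        child.value_sum += v
    # Play the most-visited move, as AlphaGo does.
    return max(children, key=lambda m: children[m].visits)
```

The key design point the talk highlights: the policy net prunes the search breadth (which moves to consider) while the value net prunes its depth (no need to simulate to the end of the game).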
@1h03: over the summer DeepMind will look at the internal representations used in the valuation engine to see how they correspond to expert human intuitions about Go. This is like peeking into the mind of an alien creature that evolved fighting for territory in a 2D world with discrete spacetime :-)
Here's a related comment that appeared on a Hacker News thread about the Lee Sedol match:
As someone who studied AI in college and is a reasonably good amateur Go player, I have been following the matches between Lee and AlphaGo. Note that AlphaGo almost certainly uses chunking of some sort ("feature identification" in neural net terminology), but perhaps not the kind familiar to our brains, which evolved in the physical/biological world.
AlphaGo plays some unusual moves that run clearly against the instincts of classically trained Go players -- moves that simply don't fit into current theories of Go playing, and the world's top players are struggling to explain what purpose or strategy lies behind them.
I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively I learned the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world; they serve as tools to hide the underlying complexity (chunking) and enable players to think at a higher level.
For example, we're taught to consider connected stones as one unit, and to give this unit attributes like dead, alive, strong, weak, or projecting influence into the surrounding areas -- in other words, much like a standalone army unit.
These abstractions all make a lot of sense, feel natural, and certainly help game play -- no player can consider the dozens (sometimes over 100) of stones all as individuals and come up with a coherent game plan. Chunking is such a natural and useful way of thinking.
But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't chunk at all, or maybe it chunks in its own way, not influenced by the physical world as we humans invariably are. AlphaGo's moves are sometimes strange, and can't be explained by the way humans chunk the game.
It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least in this very specific domain), and, much to our surprise, it's a way that's more powerful than ours.
@1h04: No surprise, Hassabis seems to believe in strong AI.
Moore's Law and AI
DeepMind and Demis Hassabis
Deep Neural Nets and Go: AlphaGo beats European champion.