Pessimism of the Intellect, Optimism of the Will
Wednesday, May 27, 2020
David Silver on AlphaGo and AlphaZero (AI podcast)
I particularly liked this interview with David Silver on AlphaGo and AlphaZero. I suggest starting around ~35m in if you have some familiarity with the subject. (I listened to this while running hill sprints and found at the end I had it set to 1.4x speed -- YMMV.)
At ~40m Silver discusses the misleading low-dimensional intuition that led many to fear (circa 1980s-1990s) that neural net optimization would be stymied by local minima. (See related discussion: Yann LeCun on Unsupervised Learning.)
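To see why the low-dimensional intuition fails, here is a toy model (my own illustration, not anything from the interview): treat the sign of each Hessian eigenvalue at a random critical point as an independent fair coin flip. Under that crude assumption, the chance that a critical point is a local minimum (all eigenvalues positive) falls off as 2^-d, so in high dimensions nearly every critical point is a saddle rather than a trapping minimum. The function name and the coin-flip model are illustrative assumptions.

```python
import random

def fraction_minima(dim, trials=100_000, seed=0):
    """Toy model: at a random critical point, model each Hessian
    eigenvalue's sign as an independent fair coin flip.  The point
    is a local minimum only if every eigenvalue is positive."""
    rng = random.Random(seed)
    minima = 0
    for _ in range(trials):
        if all(rng.random() < 0.5 for _ in range(dim)):
            minima += 1
    return minima / trials

# In 1 dimension roughly half of critical points are minima; by
# dimension 20 essentially none are -- almost all are saddles.
for d in (1, 2, 10, 20):
    print(d, fraction_minima(d))
```

The real loss landscape of a deep net is not this simple, but the toy calculation captures the point: intuition trained on 1-D or 2-D pictures of bumpy loss surfaces badly misleads in millions of dimensions.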
At one point Silver notes that the expressiveness of deep nets was never in question (i.e., whether they could encode sufficiently complex high-dimensional functions). The main empirical question was really about the efficiency of training -- once the local minima question is resolved, what remains is more of an engineering issue than a theoretical one.
Silver gives some details of the match with Lee Sedol. He describes the "holes" in AlphaGo's gameplay that would manifest in roughly 1 in 5 games. Before the match, Silver correctly predicted that AlphaGo might lose one game this way! AlphaZero was partially invented as a way to eliminate these holes, although it was also motivated by the principled goal of de novo learning, without expert examples.
I've commented many times that even with complete access to the internals of AlphaGo, we (humans) still don't know how it plays Go. There is an irreducible complexity to a deep neural net (and to our brain) that resists comprehension even when all the specific details are known. In this case, the computer program (neural net) which plays Go can be written out explicitly, but it has millions of parameters.
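A quick sense of scale for "millions of parameters": a sketch (my own, with illustrative layer sizes, not AlphaGo's actual architecture) counting the weights and biases of a small fully connected net over a flattened 19x19 board already lands in the millions.

```python
def count_params(layer_sizes):
    """Parameters of a fully connected net: for each consecutive
    pair of layers, a weight matrix (a * b) plus a bias vector (b)."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A 19x19 Go board flattened to 361 inputs, two modest hidden
# layers, and one output per board point.  Sizes are illustrative.
print(count_params([361, 1024, 1024, 361]))  # -> 1790313
```

Every one of those ~1.8 million numbers is inspectable, yet no human can read off from them how the net plays.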
Silver says he worked on AI Go for a decade before it finally reached superhuman performance. He notes that Go was of special interest to AI researchers because there was general agreement that a superhuman Go program would truly understand the game, would develop intuition for it. But now that the dust has settled we see that notions like "understanding" and "intuition" are still hidden in (spread throughout?) the high-dimensional space of the network... and perhaps always will be. (From a philosophical perspective this is related to Searle's Chinese Room and other confusions...)
As to whether AlphaGo has deep intuition for Go, whether it can play with creativity, Silver gives examples from the Lee Sedol match in which AlphaGo (1) upended textbook Go theory previously embraced by human experts (perhaps for centuries?), and (2) surprised the human champion by making an aggressive territorial incursion late in the game. In fact, human understanding of both Chess and Go strategy has been advanced considerably via AlphaZero (which performs at a superhuman level in both games).
See also this Manifold interview with John Schulman of OpenAI.