This is a recent, and fairly non-technical, introduction to Deep Learning by Geoff Hinton.
In the most interesting part of the talk (@25 min; see arxiv:1409.3215 and arxiv:1506.00019) he describes extracting "thought vectors" — semantic (meaning) representations — from plain text. The approach trains a deep net on human text, yielding vectors of weights that encode meaning.
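As a rough illustration of the idea (not Hinton's actual model), here is a toy sketch of how a sequence-to-sequence encoder compresses a sentence into a single fixed-size "thought vector": a plain RNN reads the words and its final hidden state serves as the vector. The vocabulary, dimensions, and random weights below are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of a "thought vector": a tanh RNN encoder whose final
# hidden state summarizes a whole word sequence in one fixed-size vector,
# as in sequence-to-sequence models (arxiv:1409.3215).
# Vocabulary, sizes, and random weights are illustrative assumptions.

rng = np.random.default_rng(0)
vocab = {"deep": 0, "learning": 1, "extracts": 2, "meaning": 3}
embed_dim, hidden_dim = 8, 16

E = rng.normal(0, 0.1, (len(vocab), embed_dim))      # word embeddings
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # hidden-to-hidden weights

def encode(words):
    """Run the RNN over the words; the final hidden state is the
    sentence's 'thought vector'."""
    h = np.zeros(hidden_dim)
    for w in words:
        x = E[vocab[w]]
        h = np.tanh(x @ W_xh + h @ W_hh)
    return h

vec = encode(["deep", "learning", "extracts", "meaning"])
print(vec.shape)  # one fixed-size vector, regardless of sentence length
```

In a real seq2seq system this vector would then condition a decoder network that generates the output sequence, and both networks would be trained end to end on large text corpora.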
The slide below summarizes some history. Most of the theoretical ideas behind Deep Learning have been around for a long time. Hinton sometimes attributes the advances to a factor of a million from hardware (greater compute power and data availability) and an order of magnitude from new tricks. See also Moore's Law and AI.