Ask and ye shall receive :-)
In an earlier post I recommended a talk by Ilya Sutskever of OpenAI (part of an MIT AGI lecture series). In the Q&A someone asks about the status of backpropagation (used to train artificial deep neural nets) in real neural nets, and Ilya answers that it's currently not known how, or whether, a real brain does it.
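For readers who haven't seen it spelled out, here is a minimal, purely illustrative numpy sketch (my own toy example, not from the talk) of what backprop does in the artificial case: output errors are sent backwards through the same weights used in the forward pass, which is precisely the step whose biological plausibility is debated.

```python
# Toy illustration: backpropagation on a tiny two-layer network,
# with gradients computed by hand via the chain rule.
import numpy as np

rng = np.random.default_rng(0)

# Fake regression data: 64 examples, 3 features, 1 target.
X = rng.normal(size=(64, 3))
y = rng.normal(size=(64, 1))

# Parameters of a 3 -> 4 -> 1 network with a tanh hidden layer.
W1 = rng.normal(scale=0.5, size=(3, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(200):
    # Forward pass.
    h_pre = X @ W1 + b1          # hidden pre-activations
    h = np.tanh(h_pre)           # hidden activations
    y_hat = h @ W2 + b2          # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: propagate the error signal layer by layer.
    d_yhat = 2 * (y_hat - y) / len(X)   # dLoss/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                 # error carried "backwards" through W2
    d_hpre = d_h * (1 - h ** 2)         # through the tanh derivative
    dW1 = X.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```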
Almost immediately, neuroscientist James Phillips of Janelia provides a link to a recent talk on this topic, which proposes a specific biological mechanism / model for backprop. I don't know enough neuroscience to really judge the idea, but it's nice to see cross-fertilization between in silico AI and real neuroscience.
See here for more from Blake Richards.