Thursday, April 21, 2016

Deep Learning tutorial: Yoshua Bengio, Yann LeCun, NIPS 2015



I think these are the slides.

One of the topics I've remarked on before is the scarcity of bad local minima in the high-dimensional optimization required to train these DNNs. In the limit of high dimensionality, a critical point of the loss surface is overwhelmingly likely to be a saddle point (i.e., its Hessian has at least one negative eigenvalue) rather than a local minimum. This means that even though the surface is not convex, the optimization is tractable: gradient descent can escape saddle points along the negative-curvature directions.
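A quick numerical sketch of this intuition, under the (simplifying, random-matrix) assumption that the Hessian at a critical point looks like a random symmetric Gaussian matrix: estimate the fraction of such matrices with at least one negative eigenvalue. The function name and parameters here are illustrative, not from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_saddle(n, trials=200):
    """Estimate the fraction of random n x n symmetric Gaussian matrices
    (a toy model for the Hessian at a random critical point) that have
    at least one negative eigenvalue, i.e., that would be saddle points."""
    saddles = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2  # symmetrize to get a valid "Hessian"
        if np.linalg.eigvalsh(h).min() < 0:
            saddles += 1
    return saddles / trials

for n in (2, 10, 100):
    print(n, fraction_saddle(n))
```

Even at modest dimension the fraction approaches 1: the probability that all eigenvalues of a random symmetric matrix are positive decays very rapidly with dimension, which is the random-matrix heuristic behind the claim above.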
