Monday, October 09, 2017

Blade Runner 2049: Demis Hassabis (DeepMind) interviews director Denis Villeneuve



Hassabis refers to AI in the original Blade Runner, but it is apparent from the sequel that replicants are merely genetically engineered humans. AI appears in Blade Runner 2049 in the form of Joi. There seems to be widespread confusion, including in the movie itself, about whether to think of replicants as robots (i.e., hardware) with "artificial" brains, or simply as superhumans engineered (by manipulation of DNA and memories) to serve as slaves. The latter, while potentially very alien psychologically (detectable by the Voight-Kampff machine, etc.), presumably have souls like ours. (Hassabis refers to Rutger Hauer's decision to have Roy Batty release the dove when he dies as symbolic of Batty's soul escaping from his body.)

Dick himself seems a bit imprecise in his use of the term "android" (hardware, or wet bioware?) in this context. Why "electric" sheep, dreamt of in a bioengineered android brain that is structurally almost identical to a normal human's?

The Q&A at ~27min is excellent, concerning the dispute between Ridley Scott and Harrison Ford as to whether Deckard is a replicant, and how Villeneuve handled it, taking inspiration from the original Dick novel.

Addendum: Blade Runner, meet Alien

The Tyrell-Weyland connection

Robots (David, of the Alien prequel Prometheus) vs. genetically engineered slaves (replicants) with false memories



Saturday, October 07, 2017

Information Theory of Deep Neural Nets: "Information Bottleneck"



This talk discusses, in information-theoretic terms, how the hidden layers of a deep neural net (viewed as a Markov chain) create a compressed (coarse-grained) representation of the input information. To date, the success of deep neural networks has been a mainly empirical phenomenon, lacking a theoretical framework that explains how and why they work so well.
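For reference, here is the standard information bottleneck objective that frames the talk (notation from the IB literature, not necessarily the lecture's slides): find a representation T of the input X that is as compressed as possible while retaining predictive information about the label Y, with beta setting the tradeoff:

    \min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

Because the layers form a Markov chain Y \to X \to T_1 \to \cdots \to T_k, the data processing inequality forces I(X;T_1) \ge I(X;T_2) \ge \cdots: each successive layer can only discard information about the input, which is the compression the talk quantifies.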

At ~44min someone asks how networks "know" to construct (local) feature detectors in the first few layers. I'm not sure I followed Tishby's answer, but it may be a consequence of the hierarchical structure of the data, not anything specific to the network or the optimization.
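For concreteness, the information-plane coordinates used in this line of work, I(X;T) and I(T;Y) for each hidden layer T, can be estimated on small problems by discretizing the activations. Below is a minimal Python sketch of a binning approach in the spirit of these analyses; the bin count, the assumed activation range (e.g. tanh outputs in [-1, 1]), and the helper names are my own choices, not Tishby's exact pipeline:

    import numpy as np

    def bin_ids(T, n_bins=30, lo=-1.0, hi=1.0):
        """Map each row of layer activations to one discrete id by
        binning each unit's value (assumes activations in [lo, hi])."""
        edges = np.linspace(lo, hi, n_bins + 1)[1:-1]
        codes = np.digitize(T, edges)      # (n_samples, n_units) bin indices
        # Collapse each binned activation pattern into a single id.
        _, ids = np.unique(codes, axis=0, return_inverse=True)
        return ids

    def discrete_mi(a, b):
        """I(A;B) in bits for paired integer label arrays a, b."""
        n = len(a)
        pab = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(pab, (a, b), 1.0)        # joint counts
        pab /= n
        pa = pab.sum(axis=1, keepdims=True)
        pb = pab.sum(axis=0, keepdims=True)
        nz = pab > 0
        ratio = pab[nz] / (pa * pb)[nz]
        return float(np.sum(pab[nz] * np.log2(ratio)))

    # Usage (hypothetical arrays): x_ids labels the distinct inputs, y_ids
    # the targets, and T holds one layer's activations for the same samples.
    # I_XT = discrete_mi(x_ids, bin_ids(T))   # "compression" coordinate
    # I_TY = discrete_mi(bin_ids(T), y_ids)   # "prediction" coordinate
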
Naftali (Tali) Tishby נפתלי תשבי

Physicist, professor of computer science, and computational neuroscientist
The Ruth and Stan Flinkman Professor of Brain Research
Benin School of Engineering and Computer Science
Edmond and Lilly Safra Center for Brain Sciences (ELSC)
Hebrew University of Jerusalem, 96906 Israel

I work at the interfaces between computer science, physics, and biology which provide some of the most challenging problems in today’s science and technology. We focus on organizing computational principles that govern information processing in biology, at all levels. To this end, we employ and develop methods that stem from statistical physics, information theory and computational learning theory, to analyze biological data and develop biologically inspired algorithms that can account for the observed performance of biological systems. We hope to find simple yet powerful computational mechanisms that may characterize evolved and adaptive systems, from the molecular level to the whole computational brain and interacting populations.
Another Tishby talk on this subject.

Tuesday, October 03, 2017

A Gentle Introduction to Neural Networks



"A gentle introduction to the principles behind neural networks, including backpropagation. Rated G for general audiences."

This is very well done. If you have a quantitative background, you can watch it at 1.5x or 2x speed, I think :-)

A bit more on the history of backpropagation and convexity: why is the error function convex, or nearly so?
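Since the video culminates in backpropagation, a minimal numpy sketch may be a useful companion. This is my own illustration, not taken from the video; the architecture, learning rate, and XOR toy data are arbitrary choices. It trains a one-hidden-layer sigmoid network by gradient descent, with the backward pass written out explicitly:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, the classic not-linearly-separable problem.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer of 4 units.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    lr = 1.0
    for step in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)      # hidden activations
        out = sigmoid(h @ W2 + b2)    # network output

        # Backward pass: the chain rule applied layer by layer.
        # Loss L = 0.5 * sum((out - y)**2); note sigmoid'(z) = s * (1 - s).
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)

        # Gradient-descent updates.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # should approach the XOR targets [0, 1, 1, 0]

Writing the backward pass by hand like this makes the chain-rule structure of backpropagation explicit; in practice, frameworks compute these gradients automatically.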
