If you have thought a lot about AI and deep learning, you may find much of this familiar. Nevertheless, I enjoyed the discussion. Apparently Chollet's views (below) are controversial in some AI/ML communities, though I do not understand why.
Chollet's Abstraction and Reasoning Corpus (ARC) = Raven's Matrices for AIs :-)
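For context, an ARC task is a small JSON file of input/output grid pairs: the solver sees the "train" pairs, infers the transformation, and must produce the output for each "test" input. Below is a hypothetical miniature task in the layout used by the public fchollet/ARC repository (cell values 0-9 encode colours), written as a Python literal; it is illustrative only, not an actual task from the corpus.

# Hypothetical ARC-style task (illustrative only). The hidden rule here
# is "swap colours 1 and 2"; a solver must infer it from the train pairs.
task = {
    "train": [
        {"input": [[1, 0], [0, 2]], "output": [[2, 0], [0, 1]]},
        {"input": [[2, 2], [1, 0]], "output": [[1, 1], [2, 0]]},
    ],
    "test": [
        {"input": [[0, 1], [2, 1]], "output": [[0, 2], [1, 2]]},
    ],
}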
This is Chollet's paper which is the focus of much of the discussion.
Show Notes:
...François has a clarity of thought that I've never seen in any other human being! He has extremely interesting views on intelligence as generalisation, abstraction, and an information conversion ratio. He wrote On the Measure of Intelligence at the end of 2019 and it had a huge impact on my thinking. He thinks that NNs can only model continuous problems, which have a smooth learnable manifold, and that many "type 2" problems which involve reasoning and/or planning are not suitable for NNs. He thinks that many problems have type 1 and type 2 enmeshed together. He thinks that the future of AI must include program synthesis, to allow us to generalise broadly from a few examples, but the search could be guided by neural networks because the search space is interpolative to some extent.
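To make the program-synthesis idea concrete, here is a toy sketch (my illustration under stated assumptions, not Chollet's system): enumerate compositions of a tiny hypothetical DSL of grid operations and let a scoring function order the search. A real neurally-guided synthesizer would replace the trivial score() stub with a trained model's predictions.

from itertools import product

# Toy DSL of grid-to-grid primitives (a stand-in for a real ARC-style DSL).
PRIMITIVES = {
    "identity": lambda g: g,
    "flip_h": lambda g: [row[::-1] for row in g],     # mirror left-right
    "flip_v": lambda g: g[::-1],                      # mirror top-bottom
    "transpose": lambda g: [list(r) for r in zip(*g)],
}

def run(program, grid):
    # A program is just a sequence of primitive names applied in order.
    for name in program:
        grid = PRIMITIVES[name](grid)
    return grid

def score(program):
    # Stub for a learned guide: here we merely prefer shorter programs.
    # A neural guide would instead score candidates against the examples.
    return -len(program)

def synthesize(examples, max_len=3):
    # Best-first search over all programs up to max_len, ordered by score().
    candidates = [p for n in range(1, max_len + 1)
                  for p in product(PRIMITIVES, repeat=n)]
    for program in sorted(candidates, key=score, reverse=True):
        if all(run(program, x) == y for x, y in examples):
            return program
    return None

examples = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]  # mirrored left-right
print(synthesize(examples))  # -> ('flip_h',)

The point of the neural guide is purely economic: discrete program search is combinatorial, but if the ranking over candidates is learnable from the examples, most of the space never has to be visited.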
Tim Intro [00:00:00]
Manifold hypothesis and interpolation [00:06:15]
Yann LeCun skit [00:07:58]
Discrete vs continuous [00:11:12]
NNs are not Turing machines [00:14:18]
Main show kick-off [00:16:19]
DNN models are locality-sensitive hash tables and only efficiently encode some kinds of data well [00:18:17]
Why do natural data have manifolds? [00:22:11]
Finite NNs are not "Turing complete" [00:25:44]
The dichotomy of continuous vs discrete problems, and abusing DL to perform the latter [00:27:07]
Reality really annoys a lot of people, and... GPT-3 [00:35:55]
There are type 1 problems and type 2 problems, but... they are enmeshed [00:39:14]
Chollet's definition of intelligence and how to construct analogy [00:41:45]
How are we going to combine type 1 and type 2 programs? [00:47:28]
Will topological analogies be robust and escape the curse of brittleness? [00:52:04]
Are type 1 and type 2 two different physical systems? Is there a continuum? [00:54:26]
Building blocks and the ARC Challenge [00:59:05]
Solve ARC == intelligent? [01:01:31]
Measure of intelligence formalism -- it's a white-box method [01:03:50]
Generalization difficulty [01:10:04]
Let's create a marketplace of generated intelligent ARC agents! [01:11:54]
Mapping ARC to psychometrics [01:16:01]
Keras [01:16:45]
New backends for Keras? JAX? [01:20:38]
Intelligence Explosion [01:25:07]
Bottlenecks in large organizations [01:34:29]
Summing up the intelligence explosion [01:36:11]
Post-show debrief [01:40:45]
On the Measure of Intelligence
François Chollet
https://arxiv.org/abs/1911.01547
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
Notes on the paper by Robert Lange (TU-Berlin), including illustrations.
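As a rough schematic of that definition (my paraphrase, not the paper's exact notation, which also weights tasks and averages over curricula), intelligence is the efficiency with which a system converts priors and experience into skill across the tasks in its scope:

\[
\text{Intelligence} \;\propto\; \underset{T \,\in\, \text{scope}}{\mathrm{Avg}} \left[ \frac{\text{generalization difficulty}(T)}{\text{priors}(T) + \text{experience}(T)} \right]
\]

Higher generalization difficulty handled with the same budget of priors and experience means a more intelligent system; unlimited priors or training data inflate the denominator and drag the score down, which is exactly the "buying skill" failure mode the abstract describes.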