Tuesday, June 10, 2008

More (morons?) on the Singularity

Very, very middlebrow discussion of the Singularity here at the NYTimes. Yes, I mean by Kurzweil as well as by the others:

Kurzweil: For example, I point out that the complexity of the design of the brain is at least 100 million times simpler than it appears because the design is in the genome. Even including the genetic machinery that implements the genome, the compressed genome is only about 50 million bytes (which I analyze in the book), and that is a level of complexity we can handle.

Yes, Ray, and how much computing power will it take to explore even a reasonable fraction of the 2^(8*50,000,000) possibilities? Perhaps by reverse-engineering what nature has already done, but how else? Previous discussion here.
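To put a rough number on that: here is the arithmetic, taking Kurzweil's 50-million-byte figure at face value and imagining (purely for illustration -- nobody proposes doing it this way) a naive brute-force search over bit strings.

```python
import math

genome_bytes = 50_000_000             # Kurzweil's compressed-genome estimate
bits = 8 * genome_bytes               # 4 x 10^8 bits
log10_configs = bits * math.log10(2)  # log10 of the number of distinct bit strings, 2^bits

print(f"search space ~ 10^{log10_configs:.2e}")  # roughly 10^(1.2 x 10^8)
# For scale: even a wildly generous 10^30 operations would not dent a space
# whose size has over a hundred million digits.
```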

Go read What is Thought? instead of wasting time with the Times on science... (Don't even think of looking at the comments there -- it will cost you 10 points of IQ.)

8 comments:

Anonymous said...

Why do you recommend WHAT IS THOUGHT? There are so many books on the topic, all widely recommended, that I am highly sceptical of any unreasoned recommendation.

Steve Hsu said...

I wrote a brief review some time ago (search on the right), or click the link in the post above for the previous discussion.

Or look at the Amazon review of the book by Ed Witten :-)

If you don't know who Ed Witten is, do a bit more research.

Anonymous said...

I'm sure Ray has a plausible explanation. Nanobotic quantum computers powered by cold fusion or something. Clearly you're not taking enough supplements.

Steve Hsu said...

:-)

Carson C. Chow said...

Hey Steve,

I find it amusing that Kurzweil confuses power law with exponential.

cc
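For anyone who wants to see the distinction concretely, here is a toy comparison (the exponent and rate are chosen purely for illustration): a power law t^k grows polynomially, an exponential e^(r*t) eventually dwarfs it, and on a log-log plot only the power law is a straight line.

```python
import math

for t in (10, 100, 1000):
    print(t, t**3, math.exp(0.1 * t))
# By t = 1000 the power law t^3 has reached 10^9, while the exponential
# e^(0.1 t) is around 2.7 x 10^43 -- qualitatively different behavior.
```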

Seth said...

Steve,

Thanks for the book recommendation. I've started reading "What is Thought" and find it quite good. I need to get past the early generalities and background to assess the substance.

Baum's emphasis on finding the "compact structure" of the world seems like a pretty direct extrapolation from the idea of OLS regression (or the neural net equivalent) in the "good cases" (those where the parametric fitting is actually appropriate rather than merely misleading). It isn't clear to me yet whether he'll get to a genuinely "better mousetrap" later on. At least he may have absorbed more of the Rodney Brooks/George Lakoff "embodied" intelligence viewpoint than most traditional AI (pure CS) people have.
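A toy version of the "good case" described here, purely for illustration (the data and model are made up, not taken from Baum's book): when the world really is generated by a few parameters, an ordinary least-squares fit compresses many observations into a handful of coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# "World": 10,000 observations secretly generated by 3 parameters plus noise.
X = rng.normal(size=(10_000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=10_000)

# OLS compresses 10,000 numbers into 3 fitted coefficients.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # approximately [ 2.0, -1.0, 0.5 ]
```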

Meanwhile, have you read "On Intelligence" by Jeff Hawkins? The HTM concept is a plausible attempt at a "better mousetrap" -- a pattern/exception detection and response system modeled directly on brain wiring. Hawkins is approaching the problem a little more head-on as an engineering challenge and may not devote as much attention to the philosophy of mind as some readers in this problem space have been conditioned to expect of authors.

Steve Hsu said...

I've heard a talk or two about Hawkins' HTM stuff but haven't thought that carefully about it. I do agree with the reverse-engineering style of attack -- I don't see a way to replicate the huge amount of computation (compression) that nature has done. I wonder whether HTM isn't just a particular sub-class of neural net, and whether they underestimate the amount of training (i.e., natural selection) required to produce something with an internal representation (a compressed version) of the underlying structure of the world.
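Not HTM specifically, but a minimal sketch of the generic "compressed internal representation" idea, using made-up data and plain PCA as a stand-in for whatever the brain actually does: if the observations are secretly driven by a few latent factors, a low-dimensional code learned from the data captures almost all of the structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Sensory data": 1000 observations in 100 dimensions, secretly driven by
# only 5 latent factors plus a little noise.
latent = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 100))
data = latent @ mixing + 0.01 * rng.normal(size=(1000, 100))

# PCA via SVD: the top 5 components explain essentially all the variance,
# so a 5-number code per observation is a faithful compressed representation.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s**2).cumsum() / (s**2).sum()
print(explained[:6])  # cumulative explained variance, close to 1.0 by the 5th component
```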

BTW, I think, in agreement with Baum and others, that compression is the key idea in AI. Any physicist knows that the world around us is very compressible. In principle, it can be modeled using a very short program which simply encodes the basic rules of physics -- the algorithmic complexity is not great, even if the computational complexity is.
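A crude illustration of that compressibility claim, with zlib standing in (very imperfectly) for the much stronger notion of algorithmic complexity: data produced by a short rule compresses enormously, while lawless random data barely compresses at all.

```python
import math
import random
import zlib

# Structured "world": 100,000 samples of a simple deterministic rule.
structured = bytes(int(128 + 100 * math.sin(0.01 * i)) for i in range(100_000))

# Lawless "world": 100,000 independent random bytes.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(100_000))

print(len(zlib.compress(structured, 9)))  # a small fraction of the original size
print(len(zlib.compress(noise, 9)))       # stays close to 100,000 bytes
```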

Seth said...

Compression is certainly a key element. As for the amount of training involved -- there are at least two different dimensions to the training/evolution problem: a) finding the right type of model, and b) tuning that model's parameters. It isn't clear to me that these both happen via the same mechanism. The first seems like straight genetic evolution; the second might be handled by an organism during the course of its development.
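A cartoon of that two-level split, with made-up data and polynomial fitting standing in for real model classes: an outer "evolutionary" search over model types wrapped around an inner "lifetime" fit of each type's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy world: y depends quadratically on x, plus a little noise.
x = rng.uniform(-1, 1, size=200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.normal(size=200)

def inner_fit(degree):
    """'Lifetime learning': tune the parameters of one fixed model class."""
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse, coeffs

# 'Evolution': outer search over model classes, with an arbitrary small
# complexity penalty so that needlessly flexible models don't win by luck.
best_degree = min(range(6), key=lambda d: inner_fit(d)[0] + 0.01 * d)
print("best model class: degree", best_degree)  # picks 2, the true structure
```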

Although it might be true that HTMs are "just" a subclass of neural networks (is that like saying a Calabi-Yau threefold is "just" an algebraic variety?), part of the evolutionary problem is precisely that of figuring out what wiring topology is best for capturing the "compact structure". The specific parameters/weights might be easy enough to work out within an organism's lifespan.

I do suspect that HTM on its own will fall a bit short -- but more because it lacks the evolved structures of the "old brain" underneath the neocortex. I have difficulty seeing how an HTM by itself can characterize a problem space. I think the old-brain structures provide important boundary conditions without which the problem is simply ill-posed.
