Saturday, October 24, 2009

Eric Baum: What is Thought?

Last week we had AI researcher and former physicist Eric Baum here as our colloquium speaker. (See here for an 11-minute video of a similar, but shorter, talk he gave at the 2008 Singularity Summit.)

Here's what I wrote about Baum and his book What is Thought back in 2008:

My favorite book on AI is Eric Baum's What is Thought? (Google books version). Baum (a former theoretical physicist retooled as a computer scientist) notes that evolution has compressed a huge amount of information into the structure of our brains (and genes), a process that AI would have to somehow replicate. A very crude estimate of the amount of computational power used by nature in this process leads to a pessimistic prognosis for AI even if one is willing to extrapolate Moore's Law well into the future. Most naive analyses of AI and computational power only ask what is required to simulate a human brain, but do not ask what is required to evolve one. I would guess that our best hope is to cheat by using what nature has already given us -- emulating the human brain as much as possible.

This perspective seems quite obvious now that I have kids -- their rate of learning about the world is clearly enhanced by pre-evolved capabilities. They're not generalized learning engines -- they're optimized to do things like recognize patterns (e.g., faces), use specific concepts (e.g., integers), communicate using language, etc.
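To put a number on the "very crude estimate" mentioned above, here's a toy back-of-envelope calculation in Python. Every parameter below is my own rough order-of-magnitude assumption, not a figure from Baum's book; the point is only the mismatch in scale.

# Toy estimate: compute spent by biological evolution vs. compute needed
# merely to *simulate* one human brain in real time. All values are rough
# order-of-magnitude assumptions for illustration, not data.

EVOLUTION_SECONDS = 1e17   # ~4 billion years of evolution
ORGANISMS_ALIVE   = 1e30   # assumed average standing population (mostly microbes)
OPS_PER_ORG_SEC   = 1e3    # assumed "search" operations per organism per second

BRAIN_SYNAPSES = 1e14      # rough synapse count in a human brain
SYNAPSE_HZ     = 1e2       # assumed effective updates per synapse per second

evolution_ops = EVOLUTION_SECONDS * ORGANISMS_ALIVE * OPS_PER_ORG_SEC
brain_ops_per_sec = BRAIN_SYNAPSES * SYNAPSE_HZ
years_to_match = evolution_ops / brain_ops_per_sec / 3.15e7  # seconds per year

print(f"Total ops used by evolution:     ~{evolution_ops:.0e}")
print(f"Ops/sec to simulate one brain:   ~{brain_ops_per_sec:.0e}")
print(f"Years of simulation to match it: ~{years_to_match:.0e}")

Under these (debatable) assumptions, simulating a brain takes ~1e16 ops/sec, but matching evolution's search would take on the order of 10^26 years of it -- which is why extrapolating Moore's Law doesn't obviously close the gap.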

What is Thought?

In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrödinger, in his classic 1944 work What Is Life?, argued a decade before the discovery of the structure of DNA that life must be explainable at a fundamental level by physics and chemistry, Baum contends that the present-day inability of computer science to explain thought and meaning is no reason to doubt that such an explanation exists. Baum argues that the complexity of mind is the outcome of evolution, which has built thought processes that act unlike the standard algorithms of computer science; to understand the mind, we need to understand these thought processes, and the evolutionary process that produced them, in computational terms.

Baum proposes that underlying mind is a complex but compact program that exploits the underlying structure of the world. He argues further that the mind is essentially programmed by DNA. We learn more rapidly than computer scientists have so far been able to explain because the DNA code has programmed the mind to deal only with meaningful possibilities. Thus the mind understands by exploiting semantics, or meaning, for the purposes of computation; constraints are built in so that although there are myriad possibilities, only a few make sense. Evolution discovered corresponding subroutines or shortcuts to speed up its processes and to construct creatures whose survival depends on making the right choice quickly. Baum argues that the structure and nature of thought, meaning, sensation, and consciousness therefore arise naturally from the evolution of programs that exploit the compact structure of the world.
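As a concrete (and entirely schematic) illustration of how built-in constraints speed learning -- my example, not Baum's -- consider the standard PAC-style observation that the number of training examples a learner needs scales roughly with the log of the size of its hypothesis space:

import math

# Sample-complexity intuition: examples needed scales roughly with
# log2(|H|), the log of the hypothesis-space size.
n = 20  # number of boolean input features

# Unconstrained learner: any boolean function of n bits is admissible,
# so |H| = 2^(2^n) and log2(|H|) = 2^n.
log2_H_unconstrained = 2 ** n

# Learner with a built-in bias: only conjunctions of literals are
# admissible (each feature appears positive, negated, or not at all),
# so |H| = 3^n and log2(|H|) = n * log2(3).
log2_H_biased = n * math.log2(3)

print(f"log2 |H|, unconstrained:      {log2_H_unconstrained:,}")  # 1,048,576
print(f"log2 |H|, with built-in bias: {log2_H_biased:.0f}")       # ~32

The biased learner needs on the order of tens of examples rather than a million; on Baum's view, DNA plays the role of the bias.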

When I first looked at What is Thought? I was under the impression that Baum's meaning, underlying structure, and compact program were defined in terms of algorithmic complexity. However, it's more complicated than that. While Nature is governed by an algorithmically simple program (the Standard Model Hamiltonian can, after all, be written down on a single sheet of paper), a useful evolved program has to run in a reasonable amount of time, under resource (memory, CPU) constraints that Nature itself does not face. Compressible does not imply tractable -- all of physics might reduce to a compact Theory of Everything, but it probably won't be very useful for designing jet airplanes.
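Here's a minimal illustration of the compressible-vs-tractable distinction (mine, not the book's): the program below has a tiny description, but its running time grows as 2^n, so its compactness buys you nothing computationally.

from itertools import combinations

def subset_sum(values, target):
    # Exhaustive search: a compact description (a few lines),
    # but O(2^n) running time -- compressible, not tractable.
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # (4, 5) -- instant for n = 6
# The same few lines are hopeless for n = 200, even though the
# *description length* of the algorithm hasn't grown at all.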

Useful programs have to be efficient in many ways -- algorithmically and computationally. So it does not follow automatically that, because Nature is very compressible, there must exist compact (useful) programs that exploit this compressibility. It's important that there are many intermediate levels of compression (i.e., of description -- as in quarks vs. molecules vs. bulk solids vs. people), and computationally effective programs to deal with those levels. I'm not sure what measure is used in computer science to encompass both algorithmic and computational complexity. Baum discusses something called minimum description length, but it's not clear to me exactly how the requirement of effective means of computation is formalized. In the language of physicists, Baum's compact (useful) programs are like effective field theories incorporating the relevant degrees of freedom for a certain problem -- they are not only a compressed model of the phenomena, but also allow simple computations.
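For readers who haven't seen minimum description length, here is the standard two-part-code version (a generic textbook sketch, not Baum's formalization): prefer the model that minimizes the bits needed to encode the model plus the bits needed to encode the data given the model.

import numpy as np

def two_part_mdl(x, y, degree, bits_per_param=32):
    # Two-part code: bits to describe the model, plus bits to describe
    # the residuals under a Gaussian code, (n/2)*log2(var), up to constants.
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = (degree + 1) * bits_per_param
    data_bits = 0.5 * len(x) * np.log2(np.var(residuals) + 1e-12)
    return model_bits + data_bits

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 2 * x**2 - x + rng.normal(scale=0.1, size=x.size)  # true model: degree 2

scores = {d: two_part_mdl(x, y, d) for d in range(1, 8)}
print(min(scores, key=scores.get))  # MDL picks a low degree (2), not the tightest fit

Note, though, that MDL as written here only scores compression; it says nothing about whether the selected model is cheap to run, which is exactly the gap flagged above.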

Evolution has, using a tremendous amount of computational power, found these programs, and our best hope for AI is to exploit their existence to speed our progress. If Baum is correct, the future may be machine learning guided by human Mechanical Turk workers.

Baum has recently relocated to Berkeley to pursue a startup based on his ideas. (Ah, the excitement! I did the same in 2000 ...) His first project is to develop a world-class Go program (no lack of ambition :-), with more practical applications down the road. Best of luck!
