Thursday, June 05, 2008

The Singularity, AI and IEEE

An entire special issue of IEEE Spectrum has been devoted to the Singularity, with contributions from people like Vernor Vinge, Rodney Brooks, Gordon Moore and Douglas Hofstadter. I'm confident it won't happen in my lifetime. I don't even think a machine will pass a strong version of the Turing test while I am around.

My favorite book on AI is Eric Baum's What is Thought? (Google books version). Baum (a former theoretical physicist retooled as a computer scientist) notes that evolution has compressed a huge amount of information into the structure of our brains (and genes), a process that AI would have to somehow replicate. A very crude estimate of the amount of computational power used by nature in this process leads to a pessimistic prognosis for AI, even if one is willing to extrapolate Moore's Law well into the future. Most naive analyses of AI and computational power only ask what is required to simulate a human brain, but not what is required to evolve one. I would guess that our best hope is to cheat by using what nature has already given us -- emulating the human brain as much as possible.
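To make the pessimism concrete, here is a toy back-of-envelope calculation. Every number in it (duration of nervous-system evolution, concurrent organism count, per-organism "ops", 2008-era cluster speed) is an invented order-of-magnitude assumption, not a measurement; the point is only that the totals dwarf any plausible Moore's Law extrapolation on a single machine:

```python
# Toy estimate: how many Moore's Law doublings would a single machine need
# to match the raw "computation" nature spent evolving brains?
# All constants below are rough illustrative guesses.

YEARS_OF_EVOLUTION = 1e9      # ~1 billion years of nervous-system evolution
ORGANISMS_ALIVE = 1e20        # crude guess at concurrent organisms with neurons
OPS_PER_ORG_PER_SEC = 1e6     # crude guess at "operations" per organism per second
SECONDS_PER_YEAR = 3.15e7

total_evolutionary_ops = (YEARS_OF_EVOLUTION * SECONDS_PER_YEAR
                          * ORGANISMS_ALIVE * OPS_PER_ORG_PER_SEC)

# A large 2008-era cluster, order of magnitude, running for one year:
ops_per_machine_year = 1e15 * SECONDS_PER_YEAR

doublings = 0
ops = ops_per_machine_year
while ops < total_evolutionary_ops:
    ops *= 2
    doublings += 1

print(f"total evolutionary ops ~ {total_evolutionary_ops:.1e}")
print(f"Moore's Law doublings needed: {doublings}, "
      f"i.e. ~{doublings * 1.5:.0f} years at one doubling per 18 months")
```

Even with these deliberately conservative guesses, closing the gap takes on the order of a century of uninterrupted doubling -- and the guesses could easily be off by many orders of magnitude in the pessimistic direction.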

This perspective seems quite obvious now that I have kids -- their rate of learning about the world is clearly enhanced by pre-evolved capabilities. They're not generalized learning engines -- they're optimized to do things like recognize patterns (e.g., faces), use specific concepts (e.g., integers), communicate using language, etc.

What is Thought?

In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrödinger in his classic 1944 work What Is Life? argued, ten years before the discovery of DNA, that life must be explainable at a fundamental level by physics and chemistry, Baum contends that the present-day inability of computer science to explain thought and meaning is no reason to doubt that such an explanation exists. Baum argues that the complexity of mind is the outcome of evolution, which has built thought processes that act unlike the standard algorithms of computer science, and that to understand the mind we need to understand these thought processes, and the evolutionary process that produced them, in computational terms.

Baum proposes that underlying mind is a complex but compact program that exploits the underlying structure of the world. He argues further that the mind is essentially programmed by DNA. We learn more rapidly than computer scientists have so far been able to explain because the DNA code has programmed the mind to deal only with meaningful possibilities. Thus the mind understands by exploiting semantics, or meaning, for the purposes of computation; constraints are built in so that although there are myriad possibilities, only a few make sense. Evolution discovered corresponding subroutines or shortcuts to speed up its processes and to construct creatures whose survival depends on making the right choice quickly. Baum argues that the structure and nature of thought, meaning, sensation, and consciousness therefore arise naturally from the evolution of programs that exploit the compact structure of the world.


Anonymous said...

What I find astonishing is the fact that the functionality of the brain we find most difficult to understand (human consciousness and all that...) took only about 2 million years to evolve.
In other words, once you have the brain of Australopithecus, it is only a small step to the brain of Kant, Einstein, Mozart, etc.

Steve Hsu said...


Great point. I think it is consistent with Baum's (and my) point of view: the hard work is producing a system that "knows" about the underlying structure of the world around us, encoding that information in a fairly short program (apparently only a few GB, in the case of DNA). Once this is done, the leap from (e.g.) a monkey to a human isn't so hard.
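The "few GB" figure is easy to sanity-check. A minimal sketch, assuming the standard ~3 billion base-pair estimate for the human genome and 2 bits per base (four letters, no compression):

```python
# Raw information content of the human genome, treated as a program.
base_pairs = 3e9          # ~3 billion base pairs in the human genome
bits_per_base = 2         # four letters (A, C, G, T) = 2 bits each
total_bits = base_pairs * bits_per_base
gigabytes = total_bits / 8 / 1e9
print(f"~{gigabytes:.2f} GB uncompressed")
```

So the uncompressed genome is under one gigabyte -- a strikingly short program compared to the behavior it specifies.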

A monkey (or even a frog or insect) already has algorithms for, e.g., image processing that are much, much better than anything computer scientists have now. Their brains can ascribe "meaning" to the image data -- that is a tree, that is a predator, etc.

At the hardware level, animal eyes are optimized almost to the theoretical limit -- they can almost detect single photons in a dark room.

Carson C. Chow said...

Hi Steve,

Thanks for pointing out the issue. I gave my two cents on the Singularity over at my blog.


Anonymous said...

What mystifies me is consciousness (controlled thought) and self-consciousness/self-awareness. These are unique to the human mind and very difficult to implement in a computer. These self-referential capabilities make the human mind purposeful and introspective, and they gave rise to notions such as the soul and the illusion of a dichotomy between mind and matter.

The mind models the world. I think these self-references partly evolved from simple empathy: the ability to model what others feel and think in order to predict how they'll react, which is useful for survival in a troop.

When empathy is used to model what others think of oneself and one's own actions -- the sensation of "eyes on me" -- it becomes self-referential. When the "others" are taken out of the loop, it becomes consciousness.

I think the rise of consciousness is the real breakthrough of the past 2 million years. In fact, the breakthrough might have taken place even more recently. The Bible and Homer appear only half self-conscious; gods and voices figure prominently. Literary descriptions of full awareness of how one's own mind thinks and makes deliberate decisions -- inner deliberations -- and of a level of control over such processes appeared much later in the written record.

So, I don't think the limiting factor is computing power. Computers can already model many -- but not all -- natural and social processes better than humans. I think the limiting factor is the lack of an effective paradigm for programming consciousness. Self-referential logic is higher-order logic, in which Gödel's incompleteness is proven. Computers use lower-order logic systems, in which self-reference often leads to infinite loops and stack overflows. There is no incremental way to pass from simple logic to consciousness; a singularity/critical point is involved.
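A toy illustration of the commenter's "stack overflow" point (a deliberately naive sketch, not a claim about how minds actually work): a model that must contain a model of itself, implemented as plain recursion with no mechanism to cut the regress short, exhausts the stack.

```python
import sys

def model_of(depth):
    # "A mind modeling a mind modeling a mind ..." with no base case:
    # each level of self-reference spawns another level.
    return model_of(depth + 1)

sys.setrecursionlimit(1000)   # keep the failure quick and bounded

try:
    model_of(0)
    result = "terminated"
except RecursionError:
    result = "stack overflow"  # the naive regress never bottoms out

print(result)
```

Anything that escapes this trap (memoization, fixed points, a bounded depth of self-modeling) has to be designed in explicitly -- which is one way of restating the commenter's claim that there is no incremental path from simple logic to self-reference.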

Anonymous said...

Visual transduction is at the theoretical limit, in some sense. If one molecule of visual pigment gets flipped into a different shape by a photon, it changes the shape of the associated protein, which can stay flipped long enough to move enough ions around to set off an action potential in the membrane. The mechanism is that sensitive only in rod cells, and only if they're dark-adapted. And there are multiple neurons in series before the one projecting to the optic nerve, so the photon may not trigger a message back to the brain. But this does say that there's no evolutionary pressure to develop more sensitivity at this first step of the process. Truly amazing.

Seth said...


I think I agree with your take on AI -- and Wolfgang's point about "only 2 million years" from Australopithecus to Kant is essentially the reason why. The stuff that happened earlier must carry more of the weight. (Anyone who has ever made a joke at the expense of an intellectual should appreciate the irony in that).


Your comment reminds me of a wildly unscientific but thoroughly entertaining book from the '70s, I think by Julian Jaynes. His premise was that the Homeric era was a pre-conscious human culture.

I just noticed there is a website devoted to Jaynes.

Steve Hsu said...

Jaynes' proposal is fascinating. I wish I had enough time to dig deeper and convince myself one way or another whether he could be correct.

Note that Cochran and company have proposed that selection over the last 5-10k years means that modern humans are potentially very, very different cognitively than their ancestors. There is a lot of genomic evidence for strong selection on a large number of genes during that period.
