Sunday, June 26, 2005

AI update

Two tidbits from the inexorable advance of artificial intelligence.

1) Two amateurs from New Hampshire (a database administrator and a soccer coach) won a recent international "freestyle" chess tournament, which included several grandmasters. Freestyle chess is a team competition in which teams may include both humans and computers. The winning team (ZackS, anonymous throughout the tournament) used ordinary PCs and commercial chess software. Nevertheless, their play was so spectacular that many suspected the presence of Garry Kasparov!

From the tournament coverage: "The other untitled team, ZackS, is a dark horse. The identity of the people behind this team, and the method they are using, will be revealed after the tournament is over. Everybody assumes that there are one or more GMs working together with the team captain. The rumour was that Garry Kasparov was producing the extraordinary chess displayed by ZackS, but we can confirm that on the weekend of the quarter-finals Kasparov was most certainly otherwise engaged.

The standard of play is very high, possibly the highest ever seen in chess at these time controls. One would scarcely expect a human player, even the best in the world, to be able to face the precision and the strategic depth of some of the participants in this event."

2) An AI program has matched average human performance on the verbal analogies portion of the SAT. (You know, "fish is to sea as monkey is to ...?") This is far short of passing the Turing test, but still an impressive feat of extracting relations from computer analysis of a terabyte of text. The program was developed by Peter D. Turney's Interactive Information Group at the Institute for Information Technology, National Research Council Canada.
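To give a flavor of the corpus-statistics idea, here is a minimal toy sketch of relational similarity: represent the relation between two words by the words that connect them in text, then pick the answer pair whose relation vector best matches the stem pair's. This is not Turney's actual system; the six-sentence "corpus," the joining-word features, and the candidate answers below are all invented for illustration.

```python
import math
from collections import Counter

# Toy "corpus" -- invented sentences standing in for a terabyte of text.
CORPUS = [
    "the fish swims in the sea",
    "a fish lives in the sea",
    "the monkey swings in the jungle",
    "a monkey lives in the jungle",
    "the bird flies in the sky",
    "a bird lives in the sky",
]

def relation_vector(a, b):
    """Represent the relation between words a and b as a bag of the
    words that appear between them in corpus sentences."""
    features = Counter()
    for sentence in CORPUS:
        words = sentence.split()
        if a in words and b in words:
            i, j = words.index(a), words.index(b)
            if i < j:
                for w in words[i + 1 : j]:
                    features[w] += 1
    return features

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def solve_analogy(stem, choices):
    """Pick the choice pair whose relation vector is most similar
    to the stem pair's relation vector."""
    target = relation_vector(*stem)
    return max(choices, key=lambda pair: cosine(target, relation_vector(*pair)))

# "fish is to sea as monkey is to ...?"
print(solve_analogy(("fish", "sea"),
                    [("monkey", "jungle"), ("monkey", "sky")]))
# -> ('monkey', 'jungle')
```

At the scale Turney's group describes, the corpus is a terabyte of text and the relational features come from large-scale pattern counts rather than a six-sentence list, but the matching principle is roughly the same.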


david bennett said...

The Turing test strikes me as an unlikely approach to gauging machine "intelligence." If we were to encounter an alien race would we gauge their mental abilities by their capacity to imitate us?

I suspect that if machines arrive at something that is agreed upon as intelligence they will "think" in unique ways.

Of course the standard is always receding. It's hard to remember that well into the sixties, chess was considered a good test. Once that standard is met, new requirements are imposed. This isn't necessarily as hypocritical as it seems: often we have succeeded in formalizing a task, in breaking it down into logical components, and while these rarely mirror human processing, they do provide a growing map of the elements of thought.

The human/machine chess symbiosis is something I think is neglected. It was quite a theme in the sixties, and I think languages and tools designed to "augment" (Engelbart's key phrase) human intellectual processes are the way to go. I personally believe one reason "expert" and "knowledge" systems (the difference seeming to be primarily the number of facts and rules) didn't succeed is that rather than giving skilled individuals a useful notation that drove some useful logical engines, and extending these as tools, the AI departments decided they would create "knowledge experts" who would codify the work of various domains. The goal seemed to be to "automate" (what Engelbart regards as the big strategic mistake) rather than to extend disciplines with new self-processing notations of organization.

Indeed, even today few people leave college, even with graduate degrees, having taken so much as a semester course on the issues of classification, or having a working knowledge of the various logical tools. These tools exist, and have existed for decades; even very rudimentary forms such as linking in HTML allow increases in organizational complexity. Yet there is no pressure to establish extended notations as a basic scholarly skill.

steve said...

I suspect that truly advanced aliens, if sufficiently motivated, could imitate humans to some degree. Even more important, they could convince us that they have the ability to learn new concepts and use them in reasoning, which I think is the real content of the Turing test. (No intelligence which is unable to learn from a conversation, and demonstrate an improved understanding, could ever pass the Turing test.)

You might be interested in Eric Baum's perspective on AI, as described in his book "What is Thought?" Baum believes that evolution has compressed a huge amount of information into the structure of our brains (and genes), a process that AI would have to somehow replicate. If Baum is right, we would have a lot more in common with aliens (whose brains are the result of an analogous evolutionary process) than we do with the current generation of "thinking" machines.

david bennett said...

I think the real feature of the Turing test is that Turing threw out a plausible definition that appeared far more reachable at the time, and people grabbed onto it.

Overall it strikes me as relevant only indirectly, just as the search for "intelligence" is more important than its definition. We've seen the philosophical movement of standards among the naysayers; we see specific tasks formalized; we are forced to confront difficult problems.

For example, if I can establish (and I think I can) that under certain circumstances markets are better decision makers than even the most brilliant thinker, are markets "intelligent"?

The problem, as far as I can see, is not that we haven't gone far in developing answers to these questions. The bigger reality is that people come out of related disciplines (certainly computer "science" and forms of management are among them) without knowing as much about these issues, about "systems," or about the nature of complexity as people like Boulding were laying out in popular books in the fifties.

What "intelligence" will be we can state empirically that whatever test is devised plausible arguments will be raised to prove it's not. However the structure of bits and pieces that mirror intelleigence, including functions that replace what were considered jobs for the highly intelligent are all around.

I still believe that if "intelligence" in some holistic, human-equivalent way is created, it will strike us as shockingly alien, like peering into the mind of a von Neumann; it will be the differences, not the similarities, that we note.

Now suppose this "mind" is linked together from likely bits and pieces: some language-parsing capacities, various logics, things we associate with humans. It will have different capacities. It won't be limited by the "magic numbers 7 and 100" rules, which supposedly bound the number of pieces a construct can hold and the number of pieces that can be linked. Most likely (contrary to Star Trek) it will be better programmed with logics that can deal with ambiguity, less either/or, and it will sort with independence from social pressures... it will appear alien. Indeed, one could suspect its impressions of us will appear like parodies, because I doubt it is possible to map or model the actuality of who we are, and no model (at least as models are being developed) is going to accurately match "typical" behaviors.

Simulations take a while to develop, and essentially that's what the Turing test asks for: it says a being is intelligent not when it has the capacity to start modeling a system immensely more complex than Newtonian physics, but when it can successfully model that system.

A similar test would be asking you to successfully model an Indonesian woman.

I am willing to agree that if you can do that you have achieved a certain intelligence, but is it the only sort? And does focus on specific types blind us to more interesting questions?

For example, computers started off doing tasks (various forms of formal logic) that we find exceedingly difficult. On the other hand, information processing that was considered trivial in Turing's time, such as finding the lines and shapes in an image, still pushes our capacities. And to the extent that computer-based systems successfully engage in recognition, they do so using algorithms completely different (we think) from those in the brain.

As these develop, can they even converge on a form similar to our own? Varying theories of the evolution of systems come into play here.
