Friday, July 11, 2014

Minds and Machines


HLMI = ‘high–level machine intelligence’ = one that can carry out most human professions at least as well as a typical human. I'm more pessimistic than the average researcher in the poll: my 95 percent confidence interval puts the earliest HLMI about 50 years from now, placing me at roughly the 80-90th percentile of pessimism in this group. I think human genetic engineering will be around for at least a generation or so before machines pass a "strong" Turing test. Perhaps a genetically enhanced team of researchers will be the ones who finally reach the milestone, ~100 years after Turing proposed it :-)
These are the days of miracle and wonder
This is the long-distance call
The way the camera follows us in slo-mo
The way we look to us all
The way we look to a distant constellation
That’s dying in a corner of the sky
These are the days of miracle and wonder
And don’t cry baby don’t cry
Don’t cry -- Paul Simon

Future Progress in Artificial Intelligence: A Poll Among Experts

Vincent C. Müller & Nick Bostrom

Abstract: In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050, and move on to super-intelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

19 comments:

Pat Boyle said...

My personal benchmark is - when instead of checking the calendar in your smart phone for schedule conflicts to see if you can attend an event, you ask your phone for permission.
I won't live long enough to see that day, but then I have a lot of fatal diseases. You certainly will.

BobSykes said...

Count me among the most extreme skeptics of artificial intelligence and the Singularity. If you are a materialist (a hypothesis), then the mere existence of human and animal minds proves that artificial intelligence is possible. However, Hubert Dreyfus' critique of artificial intelligence ("What Computers Still Can't Do"), while dated, is still relevant because it comes from a philosophical viewpoint rather than a machine viewpoint. Raymond Tallis ("Aping Mankind") is a neuroscientist who has strongly criticized neuroscience itself, especially the pseudo-science of brain imaging.


The fact is we do not have any but the most primitive kind of artificial intelligence (If, Then, Else, Go To), Siri and Watson notwithstanding. We have absolutely no idea how any brain works, and the bogus "brain function images" that pollute our scientific journals do not help.


A truly intelligent machine would have to operate at least at the level of a frog. It would have to be mobile, interact with its environment, be self-actuating, "feed" itself, etc. I will not require it to be self-reproducing. If a frog is too high a bar, then I will settle for an ant or a bee.


The fact that we don't even know how an ant works shows how far away we are. Artificial intelligence and the Singularity are generations away, assuming our civilization lasts that long.


If you want to worry about displacing American workers, especially men, and more especially black men, then look to (1) women in the workforce, (2) immigration, legal and illegal, (3) globalization, off-shoring jobs, imports, etc, and (4) automation. Those are the four items that are reducing real median incomes and the size of the middle class. The dreams and ambitions of Silicon Valley, industrialists and the Democrat and Republican Parties are rapidly converting the US into a Latin American failed state, a state with a small, very white, super-rich, self-sustaining, self-appointed and hereditary Ruling Class (and Deep State) lording it over a small middle class and an immiserated, dependent, subordinated peasantry.


The current drive for amnesty and open borders is the most vicious anti-black movement since the Civil War, much worse than the Jim Crow laws of the late Nineteenth Century, which sought merely to isolate blacks not actually destroy them.

5371 said...

This poll carries as much conviction as a survey of theologians to find out when they think the apocalypse will happen - or less.

redddi said...

Steve, tell us your list of most HLMI resistant professions.

DK said...

Count me among the most extreme skeptics of artificial intelligence and the Singularity

Add me too. The fact is, life/organic matter can organize and function in ways that are fundamentally different from anything we know as electronics/machines. As a result, the simplest single cell is behaviorally infinitely more complex than the biggest supercomputer anyone can even envision right now. So no, forget about strong AI in the lifetime of anyone living today. AI seems a perfect example of the thing that is always a mere 25-50 years away.

steve hsu said...

I don't have an exhaustive list, but MMA fighter, Gigolo, and Call Girl must be near the top :-)

redddi said...

haha, can't help but notice that all three are unintellectual, more dependent on "skill" than smarts, and very people-oriented.

Bibibibibib Blubb said...

Sorry for unrelated comment but are you using GCTA or some new method for your genius genome project?

dxie48 said...

Is this counted as AI? Negroponte predicts knowledge acquisition without human effort by simply popping a pill.

http://www.huffingtonpost.com/2014/07/10/learn-language-pill-drugs-video_n_5574748.html?utm_hp_ref=world&ir=World

"We have been doing a lot of consuming of information through our eyes. That may be a very inefficient channel. My prediction is that we are going to ingest information," he says in the video. "You're going to swallow a pill and know English. You're going to swallow a pill and know Shakespeare."

How exactly would we absorb this new knowledge from the pill? While he's short on the details, Negroponte said that the pill he has envisioned would follow the bloodstream to the brain, where it would deposit pieces of information in the right places.

BobSykes said...

Good point. Right now we can't even simulate an E. coli cell.

James Hedman said...

With experts like Negroponte making silly statements like this, at the level of a badly written pulp sci-fi story of the 1930s, it really calls into question the validity of a poll consisting primarily of his colleagues in his field (Ray Kurzweil being one of them: a wet analogue parallel data processing entity that thinks he will live forever by taking enough vitamin pills). In fact, computer programs still can't come close to beating human masters at the completely deterministic game of Go. That this will eventually occur has a 100% probability, and I wouldn't worry about HLMI until that step and others like it are achieved.

Also, the high fear levels of "bad" AI among the polled respondents probably reflect more a fear of some merely semi-intelligent, insect-level form of artificial life that could self-replicate geometrically and cover the earth with a thick seething dust of nanobots, a la the Mechanical Mice of Maurice A. Hugi's (Eric Frank Russell) 1941 short story of the same name (good pulp) or Michael Crichton's novel Prey, than of some HLMI Saberhagen-esque berserkers, Skynet-controlled terminators, a HAL 9000 computer, or the cybernetic Borg.

I would rank a more virulent strain of a mutated virus like bird flu or Ebola as presently a greater risk to humanity than nanobots, although a successful virus, by its very nature, would not cause a 100% death rate for humans, which nanobots could. Given recent finds of bacteria at extremely deep levels within the earth, bacteria would be able to survive even a nasty solar flare that kills most everything else, leaving open the possibility that biological intelligence would evolve and re-appear even if higher forms of life were wiped out by a solar flare or a nearby galactic x-ray event, at least until the Sun expands to incinerate the entire globe in its red-giant stage.

A third threat to ordinary humanity that might be more likely would be genetically engineered intelligent super-humans who might decide to sterilize the rest of us redundant mouths to feed, although they might keep some naturally bred humans in the zoo or closely monitored game preserves to prevent us from inventing too much technology again.


BobSykes said...

I think this is very promising. I believe C. elegans has one of the simplest neural networks, maybe the simplest, so its analysis is doable. But note that it hasn't actually been done. They're starting but they don't know how the network works, except that they can break parts of it and stop some behavior.

In higher organisms (and perhaps C. elegans) we don't know how memories are created and stored--not a clue. We also don't know how the logic networks work; we don't even know what they are, which is why the work on C. elegans is so important. So there's this wiring diagram, but none of the parts are color-coded, so we don't know what's a capacitor, what's a resistor, etc. We don't know how the circuits add, subtract, etc. We can't even identify the circuits doing each task, although I think they eventually will with C. elegans.

Our supercomputers are many orders of magnitude less complicated than the human brain, and it will take a supercomputer of the same order of complexity to simulate one. But we don't recognize what the brain's parts are or where they're located, except very roughly (e.g., the visual cortex).

Raymond Tallis' "Aping Mankind" is a critique of neuroscience by a neuroscientist. He believes that the results of neuroscience are very scanty and very oversold. E.g., most (all?) published brain scans are actually the average over several scans, and the individual scans often differ from the average in many particulars.

MUltan said...

These people may know computers, but they don't know about quantifying natural intelligence. Measured on a ratio scale (equal-interval with a true zero) using Rasch measures, human intelligence varies by only about 15% between an IQ 100 and an IQ 160 adult. The IQ 160 adult is also only about 36% smarter than the average 2.25-year-old.

Getting to the intelligence of a small child is the big challenge. After that things should move quite quickly, with just a 50% improvement in absolute intelligence being super-human. Getting to human level is going to take longer than they think, but getting to super-human level will follow quite quickly.
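Taking the comment's figures at face value (they are the commenter's estimates, not measured values), the ratio-scale arithmetic can be checked in a few lines of Python:

```python
# Ratios implied by the comment's figures (illustrative only).
adult160_vs_adult100 = 1.15  # IQ 160 adult ~15% above IQ 100 adult on a ratio scale
adult160_vs_toddler = 1.36   # IQ 160 adult ~36% above the average 2.25-year-old

# Implied ratio of an average (IQ 100) adult to the toddler:
adult100_vs_toddler = adult160_vs_toddler / adult160_vs_adult100
print(round(adult100_vs_toddler, 2))  # -> 1.18

# A 50% gain over child-level absolute ability would already exceed
# the IQ 160 adult on these figures, consistent with the comment's claim:
print(1.50 > adult160_vs_toddler)  # -> True
```

So on these numbers an average adult is only ~18% above a 2.25-year-old in absolute terms, which is why the comment treats child-level AI as the hard part and the remaining gap to super-human as small.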

Hacienda said...

I hope this happens. No more schools, no more tests. No more fathers, mothers, children, friends. No more nuthin.
Just pure energy.

Richard Seiter said...

Agreed about C. elegans. It seems about the right size to me for a proof of concept. Large enough to be interesting, small enough to be possible to simulate. It will be interesting to see what is accomplished in this area. It's just that there is so much complexity to model. Supposedly C. elegans shows some of the same responses to nicotine as mammals do. How to model that?!

One thing to remember before concluding "it will take a super computer of the same order of complexity to simulate one" is that the computer has a significant advantage in raw speed (orders of magnitude). Some of that can likely be traded for complexity. Agreed that much of the research/results in this area is oversold (pretty much a given once the mass media is involved).

DK said...

It does not count as AI. It counts as bullshitting.

Pat Boyle said...

Yesterday I found Burroughs's "A Princess of Mars" on my Kindle. I read it. It was very like the movie except for how Carter learns to speak Martian. In the novella the Martian language is described as very simple; in the movie he takes a potion and wakes up in the morning completely fluent.
The 'knowledge pill' plot device seems passé. It was based on the now-discredited RNA memory theory. Anyway, why bother with a pill? If I need to know something of Shakespeare I just get up, go to my computer, and fire up Google. Presumably there will soon be a direct-to-brain interface and I won't have to get up anymore. When your skull is wired for Wi-Fi, who wants a pill?
