Monday, November 23, 2015

Contemplating the Future


A great profile of Nick Bostrom in the New Yorker. I often run into Nick at SciFoo and other similar meetings. When Nick is around, I know there's a much better chance the discussion will stay on a highbrow, constructive track. It's surprising how often, even at these heavily screened elitist meetings, precious time gets wasted on digressions from the main points.

The article is long, but very well done. The New Yorker still has it ... sometimes :-(

I was a bit surprised to learn Nick does not like Science Fiction. To take a particular example, Dune explores (very well, I think) a future history in which mankind has a close brush with AI takeover and ends up banning machines that can think. At the same time, a long-term genetic engineering program is taken up in secret to produce a truly superior human intellect. See also Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve.
New Yorker: ... Bostrom dislikes science fiction. “I’ve never been keen on stories that just try to present ‘wow’ ideas—the equivalent of movie productions that rely on stunts and explosions to hold the attention,” he told me. “The question is not whether we can think of something radical or extreme but whether we can discover some sufficient reason for updating our credence function.”

He believes that the future can be studied with the same meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.

Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes. ...
I've never consumed Futurism as other than entertainment. (In fact, I view most Futurism as on the same continuum as Science Fiction.) I think hard scientists tend to be among the most skeptical of medium- to long-term predictive power; they can easily see the mistakes that Futurists (and pundits and journalists) regularly make about science and technology. Bostrom is not in the same category as these others: he's very smart and tries to be careful, but he remains willing to consider speculative possibilities.
... When he was a graduate student in London, thinking about how to maximize his ability to communicate, he pursued stand-up comedy; he has a deadpan sense of humor, which can be found lightly buried among the book’s self-serious passages. “Many of the points made in this book are probably wrong,” he writes, with an endnote that leads to the line “I don’t know which ones.”

Bostrom prefers to act as a cartographer rather than a polemicist, but beneath his exhaustive mapping of scenarios one can sense an argument being built and perhaps a fear of being forthright about it. “Traditionally, this topic domain has been occupied by cranks,” he told me. “By popular media, by science fiction—or maybe by a retired physicist no longer able to do serious work, so he will write a popular book and pontificate. That is kind of the level of rigor that is the baseline. I think that a lot of reasons why there has not been more serious work in this area is that academics don’t want to be conflated with flaky, crackpot type of things. Futurists are a certain type.”

The book begins with an “unfinished” fable about a flock of sparrows that decide to raise an owl to protect and advise them. They go looking for an owl egg to steal and bring back to their tree, but, because they believe their search will be so difficult, they postpone studying how to domesticate owls until they succeed. Bostrom concludes, “It is not known how the story ends.”

The parable is his way of introducing the book’s core question: Will an A.I., if realized, use its vast capability in a way that is beyond human control?
