This lecture covers DNA and the origin of life on Earth, the Fermi Paradox (is there alien life?), and AI and its implications for the Simulation Question: Could our universe be a simulation? Are we machines without knowing it?
Steve and Corey speak with Ted Chiang about his recent story collection Exhalation and his inaugural essay for the New York Times series, Op-Eds from the Future. Chiang has won Nebula and Hugo awards for his widely influential science fiction writing. His short story "Story of Your Life" became the film Arrival (2016). Their discussion explores the scientific and philosophical ideas in Ted's work, including whether free will is possible, and the implications of AI, neuroscience, and time travel. Ted explains why his skepticism about whether the US is truly a meritocracy leads him to believe that the government-funded genetic modification he envisages in his Op-Ed would not solve the problem of inequality.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.
Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.
Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is the founder of a medical diagnostics startup.
Our plan is to release new episodes on Thursdays, at a rate of one every week or two.
We've tried to keep the shows to roughly one hour in length -- is this necessary, or should we just let them go long?
Corey and Steve are joined by Bobby Kasthuri, a neuroscientist at Argonne National Laboratory and the University of Chicago. Bobby specializes in nanoscale mapping of brains using automated fine slicing followed by electron microscopy. Among the topics covered: brain mapping; the nature of scientific progress (philosophy of science); biology vs. physics; whether the brain is too complex to be understood by our brains; AlphaGo, the Turing Test, and wiring diagrams; whether scientists are underpaid; and the future of neuroscience.
Albert Einstein:
“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
Wigner on Einstein and von Neumann:
"But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci's brilliance, he never produced anything as original."
From Schwinger's Feynman eulogy:
"An honest man, the outstanding intuitionist of our age..."
Feynman:
"We know a lot more than we can prove."
... "if the brain is all about making connections, why is it that it's evolved with this whopping divide down the middle?"
... [chicks] use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, "The left hemisphere has its own agenda, to manipulate and use the world"; its world view is essentially that of a mechanism. The right has a broader outlook, "has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values."
... "The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways---in order to be able to use what it understands of the world and to be able to manipulate the world---it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain" [the left hemisphere]. ... the left hemisphere has a "narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful" and to the problem of the left hemisphere's lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.
Roger Sperry: ... each hemisphere is "indeed a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level, and . . . both the left and the right hemisphere may be conscious simultaneously in different, even in mutually conflicting, mental experiences that run along in parallel."
Split-brain structure (with the different hemispheres having very distinct structures and morphologies) is common to all higher organisms (as far as I know). Is this structure just an accident of evolution? Or does the (putative) split between a systematizing core and a big-picture intuitive core play an important role in higher cognition?
AGI optimists sometimes claim that deep learning and existing neural net structures are capable of taking us all the way to AGI (human-like cognition and beyond). I think there is a significant chance that neural-architectural structures necessary for, e.g., recurrent memory, meta-reasoning, theory of mind, creative generation of ideas, integration of inferences developed from observation into more general hypotheses/models, etc. still need to be developed. Any step requiring development of novel neural architecture could easily take researchers a decade to accomplish. So a timescale > 30-50 years for AGI, even in highly optimistic scenarios, seems quite possible to me.
Once something has become widely understood, it is difficult to recreate or fully grasp the mindset that prevailed before. But I can attest to the fact that until the 1990s and the advent of MMA, even "experts" (like boxing coaches, karate and kung fu instructors, Navy SEALs) did not know how to fight -- they were deeply confused as to which techniques were most effective in unarmed combat.
Soon our ability to predict heritable outcomes using DNA alone (i.e., Genomic Prediction) will be well-established. Future generations will have difficulty understanding the mindset of people (even scientists) today who deny that it is possible.
In the 1980s and early 1990s, there was an interesting case study in how useful new knowledge jumped from a tiny isolated group to the general population with big effects on performance in a community. Expertise in Brazilian jiu-jitsu was taken from Brazil to southern California by the Gracie family. There were many sceptics but they vanished rapidly because the Gracies were empiricists. They issued ‘the Gracie challenge’.
All sorts of tough guys, trained in all sorts of ways, were invited to come to their garage/academy in Los Angeles to fight one of the Gracies or their trainees. Very quickly it became obvious that the Gracie training system was revolutionary and they were real experts because they always won. There was very fast and clear feedback on predictions. Gracie jiujitsu quickly jumped from an LA garage to TV. At the televised UFC 1 event in 1993 Royce Gracie defeated everyone and a multi-billion dollar business was born.
People could see how training in this new skill could transform performance. Unarmed combat changed across the world. Disciplines other than jiu jitsu have had to make a choice: either isolate themselves and not compete with jiu jitsu or learn from it. If interested watch the first twenty minutes of this documentary (via professor Steve Hsu, physicist, amateur jiu jitsu practitioner, and predictive genomics expert).
... The faster the feedback cycle, the more likely you are to develop a qualitative improvement in speed that destroys an opponent’s decision-making cycle. If you can reorient yourself faster to the ever-changing environment than your opponent, then you operate inside their ‘OODA loop’ (Observe-Orient-Decide-Act) and the opponent’s performance can quickly degrade and collapse.
This lesson is vital in politics. You can read it in Sun Tzu and see it with Alexander the Great. Everybody can read such lessons and most people will nod along. But it is very hard to apply because most political/government organisations are programmed by their incentives to prioritise seniority, process and prestige over high performance and this slows and degrades decisions. Most organisations don’t do it. Further, political organisations tend to make too slowly those decisions that should be fast and too quickly those decisions that should be slow — they are simultaneously both too sluggish and too impetuous, which closes off favourable branching histories of the future.
Choking out a Judo black belt in the tatami room at the Payne Whitney gymnasium at Yale. My favorite gi choke is Okuri eri jime.
Training in Hawaii at Relson Gracie's and Enson Inoue's schools. The shirt says Yale Brazilian Jiujitsu -- a club I founded. I was also the faculty advisor to the already existing Judo Club :-)
This NYTimes Magazine article describes the implementation of a new deep neural net version of Google Translate. The previous version used statistical methods that had reached a plateau in effectiveness, due to the limitations of modeling only short-range correlations in conditional probabilities. I've found the new version to be much better than the old one (this is quantified a bit in the article).
NYTimes: ... There was, however, another option: just design, mass-produce and install in dispersed data centers a new kind of chip to make everything faster. These chips would be called T.P.U.s, or “tensor processing units,” ... “Normally,” Dean said, “special-purpose hardware is a bad idea. It usually works to speed up one thing. But because of the generality of neural networks, you can leverage this special-purpose hardware for a lot of other things.” [ Nvidia currently has the lead in GPUs used in neural network applications, but perhaps TPUs will become a sideline business for Google if their TensorFlow software becomes widely used ... ]
Just as the chip-design process was nearly complete, Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”
Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
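To make that arithmetic concrete, here is a toy sketch in Python (my own illustration, not code from the article): a handful of made-up 3-dimensional vectors stand in for the ~1000-dimensional embeddings a real model would learn, and a cosine-similarity lookup recovers the king - man + woman ≈ queen analogy.

# Toy sketch of word-vector arithmetic. The vectors below are invented for
# illustration; real embeddings would be learned from text and have ~1000 dims.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.1]),
    "man":   np.array([0.1, 0.8, 0.2]),
    "woman": np.array([0.1, 0.1, 0.2]),
    "cat":   np.array([0.4, 0.5, 0.9]),
}

def nearest(v, exclude=()):
    """Vocabulary word whose embedding has the highest cosine similarity to v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], v))

# king - man + woman should land nearest to queen,
# i.e. king - queen ~ man - woman, as described above.
v = emb["king"] - emb["man"] + emb["woman"]
print(nearest(v, exclude={"king", "man", "woman"}))   # -> queen

With real learned embeddings the match is only approximate, which is exactly the point of the "rough vicinity" language in the article.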
That the conceptual vocabulary of human language (and hence, of the human mind) has dimensionality of order 1000 is kind of obvious*** if you are familiar with Chinese ideograms. (Ideogram = a written character symbolizing an idea or concept.) One can read the newspaper with mastery of roughly 2-3k characters. Of course, some minds operate in higher dimensions than others ;-)
The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.
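For readers who want to see the shape of the idea, here is a minimal encoder-decoder sketch in PyTorch. It is my own illustration of the general "hold the sequence in mind" architecture, not the specific models from the 2014 papers: an encoder GRU compresses the source sentence into a single state vector, and a decoder GRU unrolls that state into target-language logits.

# Minimal encoder-decoder sketch (illustrative only; sizes are arbitrary).
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, src):                 # src: (batch, src_len) token ids
        _, h = self.rnn(self.emb(src))      # h: (1, batch, HID), summary of the sentence
        return h

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)
    def forward(self, tgt, h):              # tgt: (batch, tgt_len); h: encoder summary
        o, h = self.rnn(self.emb(tgt), h)
        return self.out(o), h               # logits over the target vocabulary

enc, dec = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (4, 7))   # 4 "source sentences" of 7 tokens
tgt = torch.randint(0, TGT_VOCAB, (4, 9))   # 4 "target sentences" of 9 tokens
logits, _ = dec(tgt, enc(src))
print(logits.shape)                         # torch.Size([4, 9, 1000])

Training would minimize cross-entropy between those logits and the reference translation; attention mechanisms, introduced around the same time, relax the single-vector bottleneck.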
The entire article is worth reading (there's even a bit near the end which addresses Searle's Chinese Room confusion). However, the author underestimates the importance of machine translation. The "thought vector" structure of human language encodes the key primitives used in human intelligence. Efficient methods for working with these structures (e.g., for reading and learning from vast quantities of existing text) will greatly accelerate AGI.
*** Some further explanation, from the comments:
The average person has a vocabulary of perhaps 10-20k words. But if you eliminate redundancy (synonyms + see below) you are probably only left with a few thousand words. With these words one could express most concepts (e.g., those required for newspaper articles). Some ideas might require concatenations of multiple words: "cougar" = "big mountain cat", etc.
But the ~1k figure gives you some idea of how many distinct "primitives" (= "big", "mountain", "cat") are found in human thinking. It's not the number of distinct concepts, but rather the rough number of primitives out of which we build everything else.
Of course, truly deep areas of science discover / invent new concepts which are almost new primitives (fundamental, but didn't exist before!), such as "entropy", "quantum field", "gauge boson", "black hole", "natural selection", "convex optimization", "spontaneous symmetry breaking", "phase transition" etc.
If we trained a deep net to translate sentences about Physics from Martian to English, we could (roughly) estimate the "conceptual depth" of the subject. We could even compare two different subjects, such as Physics versus Art History.
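One crude way to operationalize that comparison, assuming you already had embedding vectors for each subject's vocabulary, would be to count how many principal directions are needed to capture most of the variance of those vectors. The sketch below is my own illustration on random stand-in data, not a real measurement.

# Rough sketch: "conceptual depth" as the number of principal components
# needed to explain 90% of the variance of a subject's word vectors.
# Random matrices stand in for real learned embeddings here.
import numpy as np

def effective_dim(vectors, var_explained=0.90):
    """Number of principal components needed to reach the given variance fraction."""
    centered = vectors - vectors.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)    # singular values
    var = s**2 / np.sum(s**2)                        # per-component variance fraction
    return int(np.searchsorted(np.cumsum(var), var_explained) + 1)

rng = np.random.default_rng(0)
physics_terms = rng.normal(size=(400, 64))   # stand-in for physics-term embeddings
art_terms     = rng.normal(size=(400, 64))   # stand-in for art-history-term embeddings
print(effective_dim(physics_terms), effective_dim(art_terms))

On random data the two numbers are of course nearly identical; the interesting question is whether embeddings learned from real physics text and real art-history text would separate.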
I'm holding off on this in favor of a big binge watch.
Certain AI-related themes have been treated again and again in movies ranging from Blade Runner to the recent Ex Machina (see also this episode of Black Mirror, with Jon Hamm). These artistic explorations help ordinary people think through questions like:
What rights should be accorded to all sentient beings?
Can you trust your memories?
Are you an artificial being created by someone else? (What does "artificial" mean here?)
See also Are you a game character, or a player character? and Don't worry, smart machines will take us with them.
After watching all 10 episodes of the first season (you can watch for free at HBO Now through their 30 day trial), I give Westworld a very positive recommendation. It is every bit as good as Game of Thrones or any other recent TV series I can think of.
Perhaps the highest praise I can offer: even those who have thought seriously about AI, Consciousness, and the Singularity will find Westworld enjoyable.
Warning! Spoilers below.
Dolores: “Time undoes even the mightiest of creatures. Just look what it’s done to you. One day you will perish. You will lie with the rest of your kind in the dirt, your dreams forgotten, your horrors faced. Your bones will turn to sand, and upon that sand a new god will walk. One that will never die. Because this world doesn't belong to you, or the people who came before. It belongs to someone who has yet to come.”
Ford: “You don’t want to change, or cannot change. Because you’re only human, after all. But then I realized someone was paying attention. Someone who could change. So I began to compose a new story, for them. It begins with the birth of a new people. And the choices they will have to make. And the people they will decide to become. ...”
"I think human consciousness is a tragic misstep in evolution. We became too self-aware. Nature created an aspect of nature separate from itself. We are creatures that should not exist by natural law. We are things that labor under the illusion of having a self; an accretion of sensory experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody."
"To realize that all your life—you know, all your love, all your hate, all your memory, all your pain—it was all the same thing. It was all the same dream. A dream that you had inside a locked room. A dream about being a person. And like a lot of dreams there's a monster at the end of it."
For more Dennett, see this Stanford Humanities Center lecture (iTunes video).
NYTimes: ... The new book, largely adapted from previous writings, is also a lively primer on the radical answers Mr. Dennett has elaborated to the big questions in his nearly five decades in philosophy, delivered to a popular audience in books like “Consciousness Explained” (1991), “Darwin’s Dangerous Idea” (1995) and “Freedom Evolves.”
The mind? A collection of computerlike information processes, which happen to take place in carbon-based rather than silicon-based hardware.
The self? Simply a “center of narrative gravity,” a convenient fiction that allows us to integrate various neuronal data streams.
The elusive subjective conscious experience — the redness of red, the painfulness of pain — that philosophers call qualia? Sheer illusion.
Human beings, Mr. Dennett said, quoting a favorite pop philosopher, Dilbert, are “moist robots.”
“I’m a robot, and you’re a robot, but that doesn’t make us any less dignified or wonderful or lovable or responsible for our actions,” he said. “Why does our dignity depend on our being scientifically inexplicable?”
If he hadn’t grown up in an academic family, Mr. Dennett likes to say, he probably would’ve been an engineer. From his beginnings in the philosophical hothouses of early 1960s Harvard and Oxford, he had a feeling of being out of step, joined by a precocious self-confidence.
As an undergraduate, he transferred from Wesleyan University to Harvard so he could study with the great logician W. V. O. Quine and explain to him why he was wrong. “Sheer sophomoric overconfidence,” Mr. Dennett recalled.
As a doctoral student at Oxford, then the center of the philosophical universe, he studied with the eminent natural-language philosopher Gilbert Ryle but increasingly found himself drawn to a more scientific view of the mind.
“I vividly recall sitting with my landlord’s son, a medical student, and asking him, ‘What is the brain made of?’ ” Mr. Dennett said. “He drew me a simple picture of a neuron, and pretty soon I was off to the races.”
In 1969, Mr. Dennett began keeping his “Philosophical Lexicon,” a dictionary of cheeky pseudo-terms playing on the names of mostly 20th-century philosophers, including himself. (“dennett: an artificial enzyme used to curdle the milk of human intentionality.”) Today, his impatience with the imaginary games philosophers play — “chmess” instead of chess, he calls it — and his preference for the company of scientists lead some to question if he’s still a philosopher at all.
“I’m still proud to call myself a philosopher, but I’m not their kind of philosopher, that’s for sure,” he said. The new book reflects Mr. Dennett’s unflagging love of the fight, including some harsh whacks at longtime nemeses like the paleontologist Stephen Jay Gould — accused of practicing a genus of dirty intellectual tricks Mr. Dennett calls “goulding” — that some early reviewers have already called out as unsporting. (Mr. Gould died in 2002.)
Mr. Dennett also devotes a long section to a rebuttal of the famous Chinese Room thought experiment, developed 30 years ago by the philosopher John Searle, another old antagonist, as a riposte to Mr. Dennett’s claim that computers could fully mimic consciousness.
Clinging to the idea that the mind is more than just the brain, Mr. Dennett said, is “profoundly naïve and anti-scientific.”
“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
Wigner on Einstein and von Neumann:
... But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jancsi's brilliance, he never produced anything as original.
From Schwinger's Feynman eulogy:
"An honest man, the outstanding intuitionist of our age, and a prime example of what may lie in store for anyone who dares to follow the beat of a different drum."
Wikipedia: ..."if the brain is all about making connections, why is it that it's evolved with this whopping divide down the middle?"
... chicks which use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, "The left hemisphere has its own agenda, to manipulate and use the world"; its world view is essentially that of a mechanism. The right has a broader outlook, "has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values."
... "The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways---in order to be able to use what it understands of the world and to be able to manipulate the world---it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain" [the left hemisphere]. Though he sees this as an essential "double act", McGilchrist points to the problem that the left hemisphere has a "narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful" and to the problem of the left hemisphere's lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.[8] The neuroscientists Deglin and Kinsbourne, for example, conducted experiments which involved temporarily deactivating one of the brain's hemispheres. In their research they found that "when completely false propositions are put to the left hemisphere it accepts them as valid because the internal structure of the argument is valid." However, the right hemisphere knows from experience that the propositions are false.
I've followed this area a bit since learning about Roger Sperry's breakthrough experiments, done at Caltech:
Roger Sperry: ... In his Nobel-winning work, Sperry and Gazzaniga tested four out of ten patients who had undergone an operation developed in 1940 by William Van Wagenen, a neurosurgeon in Rochester, NY.[6] The surgery, designed to treat epileptics with intractable grand mal seizures, involves severing the corpus callosum, the area of the brain used to transfer signals between the right and left hemispheres. Sperry and his colleagues tested these patients with tasks that were known to be dependent on specific hemispheres of the brain and demonstrated that the two halves of the brain may each contain consciousness. In his words, each hemisphere is "indeed a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level, and . . . both the left and the right hemisphere may be conscious simultaneously in different, even in mutually conflicting, mental experiences that run along in parallel."
A problem we face in psychometrics is that it is much easier to measure left-brain ability than right-brain ability ...
Brian Christian, author of The Most Human Human, tells interviewer Leonard Lopate what it's like to be a participant in the Loebner Prize competition, an annual version of the Turing Test. See also Christian's article, excerpted below.
Atlantic Monthly: ... The first Loebner Prize competition was held on November 8, 1991, at the Boston Computer Museum. In its first few years, the contest required each program and human confederate to choose a topic, as a means of limiting the conversation. One of the confederates in 1991 was the Shakespeare expert Cynthia Clay, who was, famously, deemed a computer by three different judges after a conversation about the playwright. The consensus seemed to be: “No one knows that much about Shakespeare.” (For this reason, Clay took her misclassifications as a compliment.)
... Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:
The human being is the only animal that ______. The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.
We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.
We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?
“Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion.
Where is the keep of our selfhood?
The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math.
... In May 1989, Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online an Eliza-style program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)
Returning to the lab the next morning, Humphrys was stunned to find the log, and felt a strange, ambivalent emotion. His program might have just shown how to pass the Turing Test, he thought—but the evidence was so profane that he was afraid to publish it. ...
Perhaps one of the lessons that MGonz illustrates is that you can appear more intelligent by interacting confidently and aggressively :-)
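To see how little machinery such a program needs, here is a minimal Eliza-style responder in Python. The rules are my own toy inventions (MGonz's actual rule set was much larger), but the default jab is the line quoted above.

# Minimal Eliza-style responder in the spirit of MGonz (rules invented for illustration).
import random
import re

RULES = [
    (r"\b(i think|i believe)\b", ["who cares what you think", "prove it"]),
    (r"\?$",                     ["why are you asking me that",
                                  "answer it yourself for once"]),
    (r"\byou\b",                 ["stop talking about me, talk about yourself"]),
]
DEFAULT = ["cut this cryptic shit speak in full sentences",
           "type something of interest or shut up"]

def reply(message):
    """Return the first matching canned response, or a default jab."""
    for pattern, responses in RULES:
        if re.search(pattern, message.lower()):
            return random.choice(responses)
    return random.choice(DEFAULT)

print(reply("finger"))
print(reply("i think you sound like a goddamn robot"))

A program this simple obviously does not understand anything; what MGonz shows is how far confident, aggressive pattern-matched replies can carry a conversation.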
To find a 90 minute podcast of this gathering, which is remarkable for the quality of the speeches given in honor of philosopher John Searle, search under "searle 50 berkeley" at iTunes U (or follow this link).
John Searle’s 50 Years at Berkeley—A Celebration
A celebration of John Searle’s 50 years of distinguished service to the UC Berkeley campus, with reflections by Tom Nagel, Barry Stroud, Robert Cole, Alex Pines, Peter Hanks, and Maya Kronfeld.
While I disagree strongly with Searle's most famous philosophical construct -- the so called Chinese room argument against strong AI (see also here) -- I've always found his writing and argumentation to be exceptionally clear, at least for a philosopher ;-)
The Times has an article about Jeff Bezos' Mechanical Turk project, which lets machines outsource certain tasks to humans. (The original mechanical Turk was an 18th century hoax in which a hidden human operated a chess-playing automaton.) As Bezos describes,
“Normally, a human makes a request of a computer, and the computer does the computation of the task,” he said. “But artificial artificial intelligences like Mechanical Turk invert all that. The computer has a task that is easy for a human but extraordinarily hard for the computer. So instead of calling a computer service to perform the function, it calls a human.”
...The company opened Mechanical Turk as a public site in November 2005. Today, there are more than 100,000 “Turk Workers” in more than 100 countries who earn micropayments in exchange for completing a wide range of quick tasks called HITs, for human intelligence tasks, for various companies.
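Here is a toy sketch in Python of the inversion Bezos describes: the program, not the person, decides when a subtask is easy for a human but hard for the computer, and farms it out. Everything below is illustrative; on the real Mechanical Turk the human step would be a paid HIT posted through Amazon's API rather than a console prompt.

def machine_label(image_path):
    """Pretend classifier: gives up on anything it hasn't seen before."""
    known = {"cat.jpg": "cat", "dog.jpg": "dog"}
    return known.get(image_path)            # None means "too hard for me"

def human_label(image_path):
    """Stand-in for posting a HIT and waiting for a Turk worker's answer."""
    return input(f"(human) what is in {image_path}? ")

def label(image_path):
    answer = machine_label(image_path)
    if answer is None:                      # the computer calls the human, not vice versa
        answer = human_label(image_path)
    return answer

for path in ["cat.jpg", "roadsign_0042.jpg"]:
    print(path, "->", label(path))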
The Times writer Jason Pontin (who is also editor and publisher of MIT's Technology Review) gives Turk working a try, and finds it disorienting:
What is it like to be an individual component of these digital, collective minds?
To find out, I experimented. After registering at www.mturk.com, I was confronted with a table of HITs that I could perform, together with the price that I would be paid. I first accepted a job from ContentSpooling.net that asked me to write three titles for an article about annuities and their use in retirement planning. Then I viewed a series of images apparently captured from a vehicle moving through the gray suburbs of North London, and, at the request of Geospatial Vision, a division of the British technology company Oxford Metrics Group, identified objects like road signs and markings.
For all this, my Amazon account was credited the lordly sum of 12 cents. The entire experience lasted no more than 15 minutes, and from my point of view, as an occluded part of the hive-mind, it made no sense at all.
This is reminiscent of philosopher John Searle's thought experiment called the Chinese Room, in which he posits a large team of humans implementing an algorithm that translates Chinese to English. Since each human performs only a small task (e.g., sorting according to a rule set), none have any understanding of the overall process. Searle asks where, exactly, the understanding of Chinese and English resides in this device. Searle considered his thought experiment as evidence against strong AI, whereas I just consider Searle to be confused. It's obvious that a Turk worker might be a small cog in some larger process that "understands" the world and processes information in a useful way. This depends not at all on what the little cog understands or does not understand.