
Thursday, January 31, 2019

Manifold Show, episode 2: Bobby Kasthuri and Brain Mapping




Show Page    YouTube Channel

Our plan is to release new episodes on Thursdays, at a rate of one every week or two.

We've tried to keep the shows at roughly one hour in length -- is this necessary, or should we just let them go long?
Corey and Steve are joined by Bobby Kasthuri, a neuroscientist at Argonne National Laboratory and the University of Chicago. Bobby specializes in nanoscale mapping of brains using automated fine slicing followed by electron microscopy. Among the topics covered: brain mapping; the nature of scientific progress (philosophy of science); biology vs. physics; whether the brain is too complex to be understood by our brains; AlphaGo, the Turing test, and wiring diagrams; whether scientists are underpaid; and the future of neuroscience.

Bobby Kasthuri Bio
https://microbiome.uchicago.edu/directory/bobby-kasthuri 

The Physicist and the Neuroscientist: A Tale of Two Connectomes
http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html

Computing Machinery and Intelligence, A. M. Turing
https://www.csee.umbc.edu/courses/471/papers/turing.pdf


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is the founder of a medical diagnostics startup.

Thursday, May 10, 2018

Google Duplex and the (short) Turing Test

Click this link and listen to the brief conversation. No cheating! Which speaker is human and which is a robot?

I wrote about a "strong" version of the Turing Test in this old post from 2004:
When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.
AI isn't ready to pass the strong Turing Test yet. But humans will become increasingly unsure about the machine intelligences proliferating in the world around them.

The key to all AI advances is to narrow the scope of the problem so that the machine can deal with it. Optimization and learning in low-dimensional spaces are much easier than in high-dimensional ones. In sufficiently narrow situations (specific tasks, abstract games of strategy, etc.), machines are already better than humans.
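To put a number on that, here is a toy calculation (my own illustrative sketch, nothing from Google's system): exhaustive grid search over k points per axis costs k^d function evaluations, which explodes with the dimension d.

# Toy illustration of the curse of dimensionality: naive grid search
# needs points_per_axis ** dims evaluations -- one way to see why
# narrow, low-dimensional domains are so much easier for machines.

def grid_search_cost(dims, points_per_axis=10):
    """Function evaluations for an exhaustive grid search."""
    return points_per_axis ** dims

for d in [1, 2, 3, 10, 100]:
    print(f"{d:3d} dimensions: {grid_search_cost(d):.2e} evaluations")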

Google AI Blog:
Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone

...Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Here are examples of Duplex making phone calls (using different voices)...
I switched from iOS to Android in the last year because I could see that Google Assistant was much better than Siri and was starting to have very intriguing capabilities!


Saturday, March 03, 2012

"Only he was fully awake"

A great quote from this review of George Dyson's Turing's Cathedral. Despite the title, von Neumann is the central character.
... mathematician John von Neumann, ... was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake."
More Wigner quotes:
I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me.

... But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci's brilliance, he never produced anything as original.
Von Neumann in action.

I'm doing my best to increase the number of future humans who will be "fully awake" ;-) My current estimate is that one or two hundred common mutations (affecting only a small subset of the thousands of loci that influence intelligence) are what separate an ordinary person from a vN. There's plenty of additive variance to be exploited, and many desirable human phenotypes that have never been realized. (Also some dangerous ones.)
... The most extensive selection experiment, at least the one that has continued for the longest time, is the selection for oil and protein content in maize (Dudley 2007). These experiments began near the end of the nineteenth century and still continue; there are now more than 100 generations of selection. Remarkably, selection for high oil content and similarly, but less strikingly, selection for high protein, continue to make progress. There seems to be no diminishing of selectable variance in the population. The effect of selection is enormous: the difference in oil content between the high and low selected strains is some 32 times the original standard deviation.
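The logic of sustained selection response is easy to see in a minimal simulation of truncation selection with the breeder's equation R = h²S (a sketch of my own; the heritability, population size, and selected fraction are illustrative assumptions, not the maize experiment's values):

# Minimal sketch of response to truncation selection on an additive trait,
# via the breeder's equation R = h^2 * S. All parameters are assumed for
# illustration; they are not the actual maize experiment's values.
import numpy as np

rng = np.random.default_rng(0)
h2 = 0.5                 # assumed narrow-sense heritability
pop_size = 1000
top_frac = 0.2           # breed only from the top 20% each generation

trait = rng.normal(0.0, 1.0, pop_size)        # generation 0, SD = 1
start_mean, start_sd = trait.mean(), trait.std()

for gen in range(100):
    parents = np.sort(trait)[-int(top_frac * pop_size):]
    S = parents.mean() - trait.mean()         # selection differential
    R = h2 * S                                # expected response this generation
    # offspring: mean shifts by R; segregation/environment restore the spread
    trait = rng.normal(trait.mean() + R, 1.0, pop_size)

print(f"shift after 100 generations: "
      f"{(trait.mean() - start_mean) / start_sd:.1f} original SDs")

Because this idealization holds the additive variance fixed (the infinitesimal-model assumption), the response never plateaus -- just what the maize experiment observed over 100+ generations.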

Sunday, February 26, 2012

Turing and wavefunction collapse

Some interesting discussion by Turing biographer and mathematical physicist Andrew Hodges of Turing's early thoughts about the brain as a quantum computer and the possible connection to quantum measurement. I doubt the brain makes use of quantum coherence (i.e., it can probably be efficiently simulated by a Turing machine), but nevertheless these thoughts led Turing to the fundamental problems of quantum mechanics. He came close to noticing that a quantum computer might be outside the class of machines that a Universal Turing Machine could efficiently simulate.

Hodges' Enigma (biography of Turing) is an incredible triumph. Turing's life was tragic, but at least he was granted a biographer worthy of his contributions to mankind.

A shorter précis of Turing's life and thought, also by Hodges, can be found here.
Hodges: ... Turing described the universal machine property, applying it to the brain, but said that its applicability required that the machine whose behaviour is to be imitated
…should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.
... Turing here is discussing the possibility that, when seen as a quantum-mechanical machine rather than a classical machine, the Turing machine model is inadequate. The correct connection to draw is not with Turing's 1938 work on ordinal logics, but with his knowledge of quantum mechanics from Eddington and von Neumann in his youth. Indeed, in an early speculation, influenced by Eddington, Turing had suggested that quantum mechanical physics could yield the basis of free-will (Hodges 1983, p. 63). Von Neumann's axioms of quantum mechanics involve two processes: unitary evolution of the wave function, which is predictable, and the measurement or reduction operation, which introduces unpredictability. Turing's reference to unpredictability must therefore refer to the reduction process. The essential difficulty is that still to this day there is no agreed or compelling theory of when or how reduction actually occurs. (It should be noted that ‘quantum computing,’ in the standard modern sense, is based on the predictability of the unitary evolution, and does not, as yet, go into the question of how reduction occurs.) It seems that this single sentence indicates the beginning of a new field of investigation for Turing, this time into the foundations of quantum mechanics. In 1953 Turing wrote to his friend and student Robin Gandy that he was ‘trying to invent a new Quantum Mechanics but it won't really work.’

[ Advances in the theory of decoherence and in experimental abilities to precisely control quantum systems have led to a much better understanding of quantum measurement. The unanswered question is, of course, whether wavefunctions actually collapse or whether they merely appear to do so. ]

At Turing's death in June 1954, Gandy reported in a letter to Newman on what he knew of Turing's current work (Gandy 1954). He wrote of Turing having discussed a problem in understanding the reduction process, in the form of

…‘the Turing Paradox’; it is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, 1 second, tends to one as N tends to infinity; i.e. that continual observation will prevent motion. Alan and I tackled one or two theoretical physicists with this, and they rather pooh-poohed it by saying that continual observation is not possible. But there is nothing in the standard books (e.g., Dirac's) to this effect, so that at least the paradox shows up an inadequacy of Quantum Theory as usually presented. ...
[ This is sometimes referred to as the Quantum Zeno Effect. A modern understanding of measurement incorporating decoherence shows that this is not really a paradox. ]
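A quick numerical check of the effect Gandy describes, for a single qubit (my own sketch; the Hamiltonian and its strength are assumptions, chosen so that free evolution would completely flip the state in one second):

# The "Turing Paradox" / quantum Zeno effect for a qubit.
# H = omega * sigma_x would rotate |0> into |1>; measuring "is it still |0>?"
# N times per second freezes the state as N -> infinity.
import numpy as np

omega = np.pi / 2     # chosen so unmeasured evolution fully flips |0> in 1 s

def survival_probability(N, t_total=1.0):
    """P(still in |0> at t_total) after N equally spaced projective measurements."""
    dt = t_total / N
    p_step = np.cos(omega * dt) ** 2   # amplitude to stay in |0> is cos(omega*dt)
    return p_step ** N                 # successive projections compound

for N in [1, 10, 100, 1000, 10000]:
    print(f"N = {N:6d}: survival probability = {survival_probability(N):.4f}")

With N = 1 the state has flipped with certainty; by N = 10000 the survival probability exceeds 0.999 -- continual observation prevents motion.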
Turing as polymath:
In a similar way Turing found a home in Cambridge mathematical culture, yet did not belong entirely to it. The division between 'pure' and 'applied' mathematics was at Cambridge then as now very strong, but Turing ignored it, and he never showed mathematical parochialism. If anything, it was the attitude of a Russell that he acquired, assuming that mastery of so difficult a subject granted the right to invade others. Turing showed little intellectual diffidence once in his stride: in March 1933 he acquired Russell's Introduction to Mathematical Philosophy, and on 1 December 1933, the philosopher R. B. Braithwaite minuted in the Moral Science Club records: 'A. M. Turing read a paper on 'Mathematics and logic.' He suggested that a purely logistic view of mathematics was inadequate; and that mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.' At the same time he was studying von Neumann's 1932 Grundlagen der Quantenmechanik. Thus, it may be that Eddington's claims for quantum mechanics had encouraged the shift of Turing's interest towards logical foundations. And it was logic that made Alan Turing's name.

Friday, July 08, 2011

Creators



The other day at the bookstore I skimmed Jane Smiley's book The Man Who Invented the Computer, about physicist John Vincent Atanasoff and the early history of electronic computing. (A replica of Atanasoff's machine, the ABC, is shown above.) Atanasoff was named the inventor of the first automatic electronic digital computer as a result of the 1973 patent suit Honeywell v. Sperry Rand. In that decision, the judge found that "Eckert and Mauchly [creators of the ENIAC] did not themselves first invent the automatic electronic digital computer, but instead derived that subject matter from one Dr. John Vincent Atanasoff". I was already familiar with the Atanasoff story because he taught at Iowa State University, as did my father.

In the book, Smiley also profiles a number of early pioneers of computing who were contemporaries of Atanasoff. Turing and von Neumann are well known, while John Mauchly and J. Presper Eckert, the men who built the ENIAC, are not. I was intrigued by the biographical details Smiley uncovered about these men. All of the key figures in the invention of the electronic computer were of exceptional ability -- from that small sliver of humanity that create value, albeit often without capturing the associated financial rewards.

From their respective Wikipedia entries:

Mauchly was born on August 30, 1907 in Cincinnati, Ohio. He grew up in Chevy Chase, Maryland, where his father, Sebastian Mauchly, was a physicist at the Carnegie Institution of Washington, D.C. He earned the Engineering Scholarship of the State of Maryland, which enabled him to enroll at Johns Hopkins University in the fall of 1925 as an undergraduate in the Electrical Engineering program. In 1927 he enrolled directly in a Ph.D. program there and transferred to the graduate physics program of the university. He completed his Ph.D. in 1932 and became a professor of physics at Ursinus College near Philadelphia, where he taught from 1933 to 1941. At Ursinus he worked for several years developing a digital electronic computing machine to test the theory that solar fluctuations, sun spots in particular, affect our weather. ... In 1942 Mauchly wrote a memo proposing the building of a general-purpose electronic computer. The proposal, which circulated within the Moore School (but the significance of which was not immediately recognized), emphasized the enormous speed advantage that could be gained by using digital electronics with no moving parts. ... Mauchly led the conceptual design while Eckert led the hardware engineering on ENIAC.

Eckert initially enrolled in the University of Pennsylvania's Wharton School to study business at the encouragement of his parents, but in 1937 transferred to Penn's Moore School of Electrical Engineering. In 1940, at age 21, Eckert applied for his first patent, "Light Modulating Methods and Apparatus".[2] At the Moore School, Eckert participated in research on radar timing, made improvements to the speed and precision of the Moore School's differential analyzer, and in 1941 became a laboratory assistant for a defense training summer course in electronics offered through the Moore School by the United States Department of War.

Atanasoff: ... At the age of nine he learned to use a slide rule, followed shortly by the study of logarithms, and subsequently completed high school at Mulberry High School in two years. In 1925, Atanasoff received his bachelor of science degree in electrical engineering from the University of Florida, graduating with straight A's. He continued his education at Iowa State College and in 1926 earned a master's degree in mathematics. He completed his formal education in 1930 by earning a Ph.D. in theoretical physics from the University of Wisconsin–Madison with his thesis, The Dielectric Constant of Helium. [Under Van Vleck, who later moved to Harvard and won a Nobel prize.] Upon completion of his doctorate, Atanasoff accepted an assistant professorship at Iowa State College in mathematics and physics.

For those interested in the credit dispute between Atanasoff and Mauchly-Eckert, the following is from the Atanasoff Wikipedia entry. Mauchly apparently knew all about Atanasoff's device before circulating his proposal in 1942. In Mauchly's favor, his was a general purpose (Turing complete) device, whereas Atanasoff's ABC was a special purpose device for solving systems of linear equations.

Atanasoff first met Mauchly at the December 1940 meeting of the American Association for the Advancement of Science in Philadelphia, where Mauchly was demonstrating his "harmonic analyzer", an analog calculator for analysis of weather data. Atanasoff told Mauchly about his new digital device and invited him to see it. ...

In June 1941 Mauchly visited Atanasoff in Ames, Iowa for four days, staying as his houseguest. Atanasoff and Mauchly discussed the prototype ABC, examined it, and reviewed Atanasoff's design manuscript.

Wednesday, April 06, 2011

Man vs Machine: inside the Turing Test




Brian Christian, author of The Most Human Human, tells interviewer Leonard Lopate what it's like to be a participant in the Loebner Prize competition, an annual version of the Turing Test. See also Christian's article, excerpted below.

Atlantic Monthly: ... The first Loebner Prize competition was held on November 8, 1991, at the Boston Computer Museum. In its first few years, the contest required each program and human confederate to choose a topic, as a means of limiting the conversation. One of the confederates in 1991 was the Shakespeare expert Cynthia Clay, who was, famously, deemed a computer by three different judges after a conversation about the playwright. The consensus seemed to be: “No one knows that much about Shakespeare.” (For this reason, Clay took her misclassifications as a compliment.)

... Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:

The human being is the only animal that ______.
The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.

We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.

We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?

“Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion.

Where is the keep of our selfhood?

The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math.

... In May 1989, Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online an Eliza-style program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)

Returning to the lab the next morning, Humphrys was stunned to find the log, and felt a strange, ambivalent emotion. His program might have just shown how to pass the Turing Test, he thought—but the evidence was so profane that he was afraid to publish it. ...

Perhaps one of the lessons that MGonz illustrates is that you can appear more intelligent by interacting confidently and aggressively :-)
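For flavor, an Eliza/MGonz-style responder takes only a few lines. The sketch below is a toy of my own, not Humphrys's actual code; the rules and canned replies are invented for illustration (one is borrowed from the log quoted above).

# Toy Eliza/MGonz-style responder: keyword rules plus aggressive defaults.
# A hypothetical sketch of the idea only -- not Humphrys's MGonz.
import random
import re

RULES = [
    (r"\byou\b.*\brobot\b", "typical human, blaming the other guy"),
    (r"\bwhy\b", "why do you want to know?"),
    (r"^\s*$|^\w{1,6}$", "cut this cryptic shit speak in full sentences"),
]
DEFAULTS = ["ok so what do you actually want",
            "you have not said anything interesting yet"]

def reply(message):
    for pattern, response in RULES:
        if re.search(pattern, message, re.IGNORECASE):
            return response
    return random.choice(DEFAULTS)

print(reply("finger"))               # -> "cut this cryptic shit speak ..."
print(reply("why are you so rude"))  # -> "why do you want to know?"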

Wednesday, August 01, 2007

Turing: physics and cryptography

People often ask me what a physicist is doing in information security. Here's a partial answer, from Alan Turing:

“There is a remarkably close parallel between the problems of the physicist and those of the cryptographer. The system on which a message is enciphered corresponds to the laws of the universe, the intercepted messages to the evidence available, the keys for a day or a message to important constants which have to be determined. The correspondence is very close, but the subject matter of cryptography is very easily dealt with by discrete machinery, physics not so easily.”

Friday, November 19, 2004

Generalized Turing test

I have a bet with one of my former PhD students regarding a strong version of the Turing test. Let me explain what I mean by "strong" version. Turing originally defined his test of artificial intelligence as follows: a tester communicates in some blind way (such as by typing on a terminal) with a second party; if the tester cannot tell whether the second party is a human or a computer, the computer will have passed the test and therefore exhibits AI. When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.

Now consider the moment when a machine passes the Turing test. We would replicate this machine many times through mass production, and set this AI army to solving the world's problems (and making even smarter versions of themselves). Of course, not having to sleep, they would make tremendous progress, leading eventually to a type of machine intelligence that would be incomprehensible to mere humans. In science fiction this eventuality is often referred to as the "singularity" in technological development - when the rate of progress becomes so rapid we humans can't follow it anymore.

Of course the catch is getting some machine to the threshold of passing the Turing test. My former student, using Moore's law as a guide (and the related exponential growth rates in bandwidth and storage capacity), is confident that 50 years will be enough time. Rough calculations suggest we aren't more than a few decades from reaching hardware capabilities matching those of the brain. Software optimization is of course another matter, and our views differ on how hard that part of the problem will be. (The few academic CS people I have gotten to give their opinions on this seem to agree with me, although my sample is small.)
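For the record, the rough calculation goes something like this; both inputs are assumptions (a brain-equivalent rate of roughly 10^16 operations per second and the classic 18-month doubling time):

# Back-of-the-envelope Moore's-law extrapolation of the kind described above.
# Both figures are loudly assumed: the brain-equivalent rate is a rough
# order-of-magnitude estimate, and the doubling time is the classic 18 months.
import math

brain_ops_per_sec = 1e16      # assumed brain-equivalent compute
machine_ops_2004 = 1e11       # assumed circa-2004 high-end machine
doubling_time_years = 1.5

doublings = math.log2(brain_ops_per_sec / machine_ops_2004)
print(f"doublings needed: {doublings:.1f}")
print(f"years at one doubling per {doubling_time_years} yr: "
      f"{doublings * doubling_time_years:.0f}")

On those assumptions the hardware gap closes in about 25 years -- consistent with "a few decades," and obviously sensitive to the assumed brain number.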

I'd be shocked if we get there within 50 years, although it certainly would be fun :-)
