Showing posts with label turing test. Show all posts

Thursday, January 31, 2019

Manifold Show, episode 2: Bobby Kasthuri and Brain Mapping




Show Page    YouTube Channel

Our plan is to release new episodes on Thursdays, at a rate of one every week or two.

We've tried to keep the shows at roughly one hour length -- is this necessary, or should we just let them go long?

Corey and Steve are joined by Bobby Kasthuri, a Neuroscientist at Argonne National Laboratory and the University of Chicago. Bobby specializes in nanoscale mapping of brains using automated fine slicing followed by electron microscopy. Among the topics covered: brain mapping; the nature of scientific progress (philosophy of science); biology vs. physics; whether the brain is too complex to be understood by our brains; AlphaGo, the Turing Test, and wiring diagrams; whether scientists are underpaid; and the future of neuroscience.

Bobby Kasthuri Bio
https://microbiome.uchicago.edu/directory/bobby-kasthuri 

The Physicist and the Neuroscientist: A Tale of Two Connectomes
http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html

Computing Machinery and Intelligence, A. M. Turing
https://www.csee.umbc.edu/courses/471/papers/turing.pdf


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally
resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and founded a medical diagnostics startup.

Thursday, May 10, 2018

Google Duplex and the (short) Turing Test

Click this link and listen to the brief conversation. No cheating! Which speaker is human and which is a robot?

I wrote about a "strong" version of the Turing Test in this old post from 2004:
When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.
AI isn't ready to pass the strong Turing Test yet. But humans will become increasingly unsure about the machine intelligences proliferating in the world around them.

The key to all AI advances is to narrow the scope of the problem so that the machine can deal with it. Optimization/Learning in lower dimensional spaces is much easier than in high dimensional spaces. In sufficiently narrow situations (specific tasks, abstract games of strategy, etc.), machines are already better than humans.
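A quick numerical illustration of the point above, as a minimal sketch (the specific experiment -- blind random search for the origin inside a cube -- is my own example, not one from the post): the best solution found by a fixed budget of random samples degrades sharply as the dimension of the search space grows.

```python
import random

def random_search(dim, n_samples=10000, seed=0):
    """Best (smallest) distance to the origin found by uniformly
    sampling n_samples points in the cube [-1, 1]^dim -- a crude
    stand-in for blind optimization in dim dimensions."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(n_samples):
        point = [rng.uniform(-1, 1) for _ in range(dim)]
        dist = sum(x * x for x in point) ** 0.5
        best = min(best, dist)
    return best

for d in (2, 10, 50):
    print(d, round(random_search(d), 3))
```

With the same 10,000 samples, the search lands very close to the target in 2 dimensions but stays far away in 50 -- which is why narrowing a problem to a low-dimensional (or tightly constrained) space makes it tractable for a machine.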

Google AI Blog:
Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone

...Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Here are examples of Duplex making phone calls (using different voices)...
I switched from iOS to Android in the last year because I could see that Google Assistant was much better than Siri and was starting to have very intriguing capabilities!


Friday, July 03, 2015

Humans on AMC



This is a new AMC series, done in collaboration with Channel 4 in the UK. I just watched the first episode and it is really good.

Saturday, June 22, 2013

Android Dreams

These videos will be very interesting to Blade Runner fans. In the first, Dick talks about
"... the problem of differentiating an authentic human being from the reflex-machine which I call an android... The word android is a metaphor for someone who is physiologically human but psychologically ... non-human. I got interested in this when I was doing research for Man in the High Castle [excellent alternative history novel in which the Japanese and Germans won WWII] and I was studying the Nazi mentality. I discovered that although these people were highly intelligent they were definitely deficient in some kind of ... appropriate affect or appropriate emotions."
Hence the "affect Turing test" used on androids in Blade Runner / Do Androids Dream of Electric Sheep.





A lovely Sean Young, at the beginning of her movie career, talks about the challenges of playing an android.




The love scene between Rachael and Deckard, uncut.




Rutger Hauer: "Harrison is the villain." I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I've watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain.

Monday, April 18, 2011

Gopnik on machine intelligence

Adam Gopnik on machine intelligence, including a review of Brian Christian's book on the Turing Test, previously discussed here.

New Yorker: ... We have been outsourcing our intelligence, and our humanity, to machines for centuries. They have long been faster, bigger, tougher, more deadly. Now they are quicker at calculation and infinitely more adept at memory than we have ever been. And so now we decide that memory and calculation are not really part of mind. It's not just that we move the goalposts; we mock the machines' touchdowns as they spike the ball. We place the communicative element of language above the propositional and argumentative element, not because it matters more but because it’s all that’s left to us. ... Doubtless, even as the bots strap us down to the pods and insert the tubes in our backs, we'll still be chuckling, condescendingly, "They look like they're thinking, sure, very impressive -- but they don't have the affect, the style, you know, the vibe of real intelligence ..." What do we really mean by "smart"? The ability to continually diminish the area of what we mean by it.

Wednesday, April 06, 2011

Man vs Machine: inside the Turing Test




Brian Christian, author of The Most Human Human, tells interviewer Leonard Lopate what it's like to be a participant in the Loebner Prize competition, an annual version of the Turing Test. See also Christian's article, excerpted below.

Atlantic Monthly: ... The first Loebner Prize competition was held on November 8, 1991, at the Boston Computer Museum. In its first few years, the contest required each program and human confederate to choose a topic, as a means of limiting the conversation. One of the confederates in 1991 was the Shakespeare expert Cynthia Clay, who was, famously, deemed a computer by three different judges after a conversation about the playwright. The consensus seemed to be: “No one knows that much about Shakespeare.” (For this reason, Clay took her misclassifications as a compliment.)

... Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:

The human being is the only animal that ______.
The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.

We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.

We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?

“Sometimes it seems,” says Douglas Hofstadter, a Pulitzer Prize–winning cognitive scientist, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, like a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything that we thought hinged on thinking turns out to not involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion.

Where is the keep of our selfhood?

The story of the 21st century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked by beast and machine, pinned between meat and math.

... In May 1989, Mark Humphrys, a 21-year-old University College Dublin undergraduate, put online an Eliza-style program he’d written, called “MGonz,” and left the building for the day. A user (screen name “Someone”) at Drake University in Iowa tentatively sent the message “finger” to Humphrys’s account—an early-Internet command that acted as a request for basic information about a user. To Someone’s surprise, a response came back immediately: “cut this cryptic shit speak in full sentences.” This began an argument between Someone and MGonz that lasted almost an hour and a half. (The best part was undoubtedly when Someone said, “you sound like a goddamn robot that repeats everything.”)

Returning to the lab the next morning, Humphrys was stunned to find the log, and felt a strange, ambivalent emotion. His program might have just shown how to pass the Turing Test, he thought—but the evidence was so profane that he was afraid to publish it. ...

Perhaps one of the lessons that MGonz illustrates is that you can appear more intelligent by interacting confidently and aggressively :-)
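The Eliza/MGonz recipe is simple enough to sketch in a few lines: scan the input for keyword patterns and fire a canned (here, deliberately brusque) reply, falling back to an aggressive non sequitur when nothing matches. The rules and replies below are illustrative stand-ins, not MGonz's actual ones.

```python
import random
import re

# Keyword-pattern rules, checked in order; first match wins.
RULES = [
    (re.compile(r"\byou\b", re.I),
     ["we were talking about you, not me",
      "why are you so interested in me?"]),
    (re.compile(r"\b(robot|computer|machine)\b", re.I),
     ["i am not a machine, you are the one repeating yourself"]),
    (re.compile(r"\?$"),
     ["you ask too many questions", "answer my question first"]),
]
# When nothing matches, go on the offensive (the MGonz trick).
FALLBACK = ["cut this cryptic shit speak in full sentences",
            "that is not an argument"]

def respond(text, rng=random.Random(0)):
    for pattern, replies in RULES:
        if pattern.search(text):
            return rng.choice(replies)
    return rng.choice(FALLBACK)

print(respond("finger"))
print(respond("you sound like a goddamn robot"))
```

There is no model of the conversation at all; the aggression does the work, because an insulted human keeps defending themselves instead of probing the bot.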

Sunday, December 09, 2007

Meet the bots: love and the Turing test

Nowadays you never know when you are interfacing with an alien machine intelligence :-)

Online chat, poker and chess are all infested by bots or other machine intelligences. Even those seeking romance can't be sure who or what is on the other end...

CNET: A program that can mimic online flirtation and then extract personal information from its unsuspecting conversation partners is making the rounds in Russian chat forums, according to security software firm PC Tools.

The artificial intelligence of CyberLover's automated chats is good enough that victims have a tough time distinguishing the "bot" from a real potential suitor, PC Tools said. The software can work quickly too, establishing up to 10 relationships in 30 minutes, PC Tools said. It compiles a report on every person it meets complete with name, contact information, and photos.

"As a tool that can be used by hackers to conduct identity fraud, CyberLover demonstrates an unprecedented level of social engineering," PC Tools senior malware analyst Sergei Shevchenko said in a statement.

Among CyberLover's creepy features is its ability to offer a range of different profiles from "romantic lover" to "sexual predator." It can also lead victims to a "personal" Web site, which could be used to deliver malware, PC Tools said.

Wednesday, July 25, 2007

Man vs machine: live poker!

This blog has live updates from the competition. See also here for a video clip introduction. It appears the machine Polaris is ahead of the human team at the moment.

The history of AI tells us that capabilities initially regarded as sure signs of intelligence ("machines will never play chess like a human!") are discounted soon after machines master them. Personally I favor a strong version of the Turing test: interaction which takes place over a sufficiently long time that the tester can introduce new ideas and watch to see if learning occurs. Can you teach the machine quantum mechanics? At the end will it be able to solve some novel problems? Many humans would fail this Turing test :-)

Earlier post on bots invading online poker.

2007

World-Class Poker Professionals Phil Laak and Ali Eslami
versus
Computer Poker Champion Polaris (University of Alberta)

Can a computer program bluff? Yes -- probably better than any human. Bluff, trap, check-raise bluff, big lay-down -- name your poison. The patience of a monk or the fierce aggression of a tiger, changing gears in a single heartbeat. Polaris can make a pro's head spin.

Psychology? That's just a human weakness.

Odds and calculation? Computers can do a bit of that.

Intimidation factor and mental toughness? Who would you choose?

Does the computer really stand a chance? Yes, this one does. It learns, adapts, and exploits the weaknesses of any opponent. Win or lose, it will put up one hell of a fight.

Many of the top pros, like Chris "Jesus" Ferguson, Paul Phillips, Andy Bloch and others, already understand what the future holds. Now the rest of the poker world will find out.

Friday, November 19, 2004

Generalized Turing test

I have a bet with one of my former PhD students regarding a strong version of the Turing test. Let me explain what I mean by "strong" version. Turing originally defined his test of artificial intelligence as follows: a tester communicates in some blind way (such as by typing on a terminal) with a second party; if the tester cannot tell whether the second party is a human or a computer, the computer will have passed the test and therefore exhibits AI. When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.

Now consider the moment when a machine passes the Turing test. We would replicate this machine many times through mass production, and set this AI army to solving the world's problems (and making even smarter versions of themselves). Of course, not having to sleep, they would make tremendous progress, leading eventually to a type of machine intelligence that would be incomprehensible to mere humans. In science fiction this eventuality is often referred to as the "singularity" in technological development - when the rate of progress becomes so rapid we humans can't follow it anymore.

Of course the catch is getting some machine to the threshold of passing the Turing test. My former student, using Moore's law as a guide (and the related exponential growth rates in bandwidth and storage capacity), is confident that 50 years will be enough time. Rough calculations suggest we aren't more than a few decades from reaching hardware capabilities matching those of the brain. Software optimization is of course another matter, and our views differ on how hard that part of the problem will be. (The few academic CS people who I have gotten to give their opinions on this seem to agree with me, although I have no substantial sampling.)
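The "rough calculations" above can be made explicit. To be clear, the numbers below are my assumed placeholders (common rough estimates of that era), not figures given in the post: a brain throughput of ~1e16 synaptic operations per second, a circa-2004 top supercomputer at ~70 TFLOPS, and a Moore's-law doubling time of 18 months.

```python
import math

brain_ops = 1e16          # assumed: ~1e14 synapses x ~100 Hz
machine_2004 = 7e13       # assumed: top supercomputer, ~70 TFLOPS
doubling_years = 1.5      # assumed Moore's-law doubling time

# How many doublings until raw machine throughput matches the brain,
# and how long that takes at the assumed doubling rate.
doublings = math.log2(brain_ops / machine_2004)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
```

Under these assumptions the hardware gap closes in well under the "few decades" the post allows; the point of disagreement in the bet is the software side, which this arithmetic says nothing about.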

I'd be shocked if we get there within 50 years, although it certainly would be fun :-)
