Pessimism of the Intellect, Optimism of the Will
Showing posts with label neuroscience.
Thursday, March 12, 2020
A.J. Robison on the Neural Basis of Sex Differences in Depression - Manifold #37
Corey and Steve talk with MSU neuroscientist A.J. Robison about why females may be more likely to suffer from depression than males. A.J. reviews past findings that low testosterone and having a smaller hippocampus may predict depression risk. He explains how a serendipitous observation opened up his current line of research and describes tools he uses to study neural circuits. Steve asks about the politics of studying sex differences and tells of a startup using CRISPR to attack heart disease. The three end with a discussion of the psychological effects of ketamine, testosterone, and deep brain stimulation.
01:18 - Link between antidepressants, neurogenesis and reducing risk of depression
13:54 - Nature of mouse models
23:19 - Depressive symptoms in mice
32:36 - Liz Williams' serendipitous finding and the issue of biological sex
45:47 - AJ's research plans for circuit-specific gene editing in the mouse brain and a startup's plan to use it to tackle human cardiovascular disease
59:07 - Psychological and neurological effects of ketamine, testosterone, and deep brain stimulation
Transcript
Robison Lab at MSU
Androgen-dependent excitability of mouse ventral hippocampal afferents to nucleus accumbens underlies sex-specific susceptibility to stress
Emerging role of viral vectors for circuit-specific gene interrogation and manipulation in rodent brain
man·i·fold /ˈmanəˌfōld/ many and various.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.
Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide-ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.
Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.
Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is the founder of a medical diagnostics startup.
Thursday, January 30, 2020
Steven Broglio on Concussions, Football and Informed Choice - Manifold Podcast #31
Steve and Corey talk with Steven Broglio, Director of the Michigan Concussion Center, about concussion risk, prevention and treatment. Broglio describes how the NCAA emerged from the deaths that almost led Theodore Roosevelt to outlaw college football. He also explains recent findings on CTE, why females may be at greater concussion risk, and why sleep is critical to avoiding long-term brain injury. They discuss how new rules probably make football safer and debate why New England is so down on kids playing football. Steve wonders whether skills are in decline now that some schools have eliminated “contact” in practices.
Steven Broglio (Faculty Profile)
Michigan Concussion Center
NeuroTrauma Research Laboratory
NCAA-DoD Grand Alliance: Concussion Assessment, Research, and Education (CARE)
Wednesday, November 27, 2019
Manifold #24: Jason Snyder on Neurogenesis
Happy Thanksgiving! :-)
Steve and Corey talk to Jason Snyder (University of British Columbia) about a fundamental question of neuroscience: Do humans grow new neurons as adults? The dogma that humans do not gave way to the dogma that they do, which is now being questioned. Adult neurogenesis has been associated with learning, better cognitive function, and resistance to depression. Jason suggests that a simple error of treating young mice as models for adult humans led to excessive optimism regarding the potential for later neuronal growth. Recent findings suggest that adults grow few, if any, new neurons, but that what little neurogenesis occurs can probably be enhanced by exercise.
Transcript
The Snyder Lab
Warren Sturgis McCulloch Interview (1969)
Thursday, September 19, 2019
Manifold Podcast #19: Ted Chiang on Free Will, Time Travel, Many Worlds, Genetic Engineering, and Hard Science Fiction
Steve and Corey speak with Ted Chiang about his recent story collection Exhalation and his inaugural essay for the New York Times series Op-Eds from the Future. Chiang has won Nebula and Hugo awards for his widely influential science fiction writing. His short story "Story of Your Life" became the film Arrival (2016). Their discussion explores the scientific and philosophical ideas in Ted's work, including whether free will is possible, and the implications of AI, neuroscience, and time travel. Ted explains why his skepticism about whether the US is truly a meritocracy leads him to believe that the government-funded genetic modification he envisages in his Op-Ed would not solve the problem of inequality.
Transcript
Ted Chiang's New York Times Op-Ed From the Future
Exhalation by Ted Chiang
Stories of Your Life and Others by Ted Chiang
Thursday, July 11, 2019
Manifold Episode #14: Stuart Firestein on Why Ignorance and Failure Lead to Scientific Progress
Steve and Corey speak with Stuart Firestein (Professor of Neuroscience at Columbia University, specializing in the olfactory system) about his two books, Ignorance: How It Drives Science and Failure: Why Science Is So Successful. Stuart explains why he thinks it is a mistake to believe that scientists make discoveries by following the “scientific method”, and what he sees as the real relationship between science and art. We discuss Stuart’s recent research showing that current models of olfactory processing are wrong, while Steve delves into the puzzling infinities in calculations that led to the development of quantum electrodynamics. Stuart also makes the case that the theory of intelligent design is more intelligent than most scientists give it credit for, and that it would be wise to teach it in science classes.
Stuart Firestein
Failure: Why Science Is So Successful
Ignorance: How It Drives Science
Transcript
Tuesday, April 23, 2019
Backpropagation in the Brain? Part 2
If I understand correctly, the issue is how to realize something like backprop when most of the information flow is feed-forward (as in real neurons). How do you transport weights "non-locally"? The L2 optimization studied here doesn't actually transport weights. Rather, the optimized solution realizes the same set of weights in two places...
See the earlier post Backpropagation in the Brain? Thanks to STS for the reference.
Center for Brains, Minds and Machines (CBMM)
Published on Apr 3, 2019
Speaker: Dr. Jon Bloom, Broad Institute
Abstract: When trained to minimize reconstruction error, a linear autoencoder (LAE) learns the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this talk, I'll explain how this observation became the focus of a project on representation learning of neurons using single-cell RNA data. I'll then share how this focus led us to a satisfying conversation between numerical analysis, algebraic topology, random matrix theory, deep learning, and computational neuroscience. We'll see that an L2-regularized LAE learns the principal directions as the left singular vectors of the decoder, providing a simple and scalable PCA algorithm related to Oja's rule. We'll use the lens of Morse theory to smoothly parameterize all LAE critical manifolds and the gradient trajectories between them; and see how algebra and probability theory provide principled foundations for ensemble learning in deep networks, while suggesting new algorithms. Finally, we'll come full circle to neuroscience via the "weight transport problem" (Grossberg 1987), proving that L2-regularized LAEs are symmetric at all critical points. This theorem provides local learning rules by which maximizing information flow and minimizing energy expenditure give rise to less-biologically-implausible analogues of backpropagation, which we are excited to explore in vivo and in silico. Joint learning with Daniel Kunin, Aleksandrina Goeva, and Cotton Seed.
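The central claim is easy to check numerically. Here is a toy sketch of my own (plain gradient descent on the regularized LAE loss in numpy; not the authors' code): the left singular vectors of the trained decoder line up with the individual principal directions of the data, and the encoder and decoder come out symmetric, as the theorem says.

```python
# Toy check: for an L2-regularized linear autoencoder, the left singular
# vectors of the trained decoder align with the principal directions of the
# data (not just the principal subspace), and W1 ~ W2^T at convergence.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 3                        # samples, ambient dim, latent dim
scales = np.array([6.0, 3.0, 1.5] + [0.5] * (d - 3))
X = rng.standard_normal((n, d)) * scales     # data with a clear top-3 subspace
C = X.T @ X / n                              # sample covariance

W1 = 0.1 * rng.standard_normal((k, d))       # encoder
W2 = 0.1 * rng.standard_normal((d, k))       # decoder
lam, lr = 1.0, 1e-3

for _ in range(20000):
    # loss = E||x - W2 W1 x||^2 + lam (||W1||_F^2 + ||W2||_F^2)
    G = 2.0 * (W2 @ W1 - np.eye(d)) @ C      # gradient w.r.t. A = W2 W1
    gW1 = W2.T @ G + 2 * lam * W1
    gW2 = G @ W1.T + 2 * lam * W2
    W1 -= lr * gW1
    W2 -= lr * gW2

U_dec = np.linalg.svd(W2)[0][:, :k]            # left singular vectors of decoder
U_pca = np.linalg.eigh(C)[1][:, ::-1][:, :k]   # top principal directions
print(np.abs(U_dec.T @ U_pca).round(2))        # ~ identity matrix, up to sign
print(np.abs(W1 - W2.T).max())                 # ~ 0: symmetric, as claimed
```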
Thursday, January 31, 2019
Manifold Show, episode 2: Bobby Kasthuri and Brain Mapping
Show Page
YouTube Channel
Our plan is to release new episodes on Thursdays, at a rate of one every week or two.
We've tried to keep the shows at roughly one hour length -- is this necessary, or should we just let them go long?
Corey and Steve are joined by Bobby Kasthuri, a neuroscientist at Argonne National Laboratory and the University of Chicago. Bobby specializes in nanoscale mapping of brains using automated fine slicing followed by electron microscopy. Among the topics covered: brain mapping; the nature of scientific progress (philosophy of science); biology vs. physics; whether the brain is too complex to be understood by our brains; AlphaGo, the Turing Test, and wiring diagrams; whether scientists are underpaid; and the future of neuroscience.
Bobby Kasthuri Bio
https://microbiome.uchicago.edu/directory/bobby-kasthuri
The Physicist and the Neuroscientist: A Tale of Two Connectomes
http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html
COMPUTING MACHINERY AND INTELLIGENCE, A. M. Turing https://www.csee.umbc.edu/courses/471/papers/turing.pdf
Thursday, October 25, 2018
Backpropagation in the Brain?
Ask and ye shall receive :-)
In an earlier post I recommended a talk by Ilya Sutskever of OpenAI (part of an MIT AGI lecture series). In the Q&A someone asks about the status of backpropagation (used for training of artificial deep neural nets) in real neural nets, and Ilya answers that it's currently not known how or whether a real brain does it.
Almost immediately, neuroscientist James Phillips of Janelia provides a link to a recent talk on this topic, which proposes a specific biological mechanism / model for backprop. I don't know enough neuroscience to really judge the idea, but it's nice to see cross-fertilization between in silico AI and real neuroscience.
See here for more from Blake Richards.
Tuesday, October 23, 2018
MIT AGI: OpenAI Meta-Learning and Self-Play (Ilya Sutskever)
I recently noticed this lecture series at MIT, focusing on AGI. This talk by Ilya Sutskever (OpenAI) is very good. There are several more in this series: playlist.
In Q&A Sutskever notes that it is not known whether/how human brains do backpropagation, which seems central to training of deep networks. Any neuroscientists out there want to take up this question?
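For readers who want the one-screen version of what backprop actually is, here is a minimal numpy sketch of my own. Note that the backward pass reuses the transpose of the forward weights -- this "weight transport" is precisely the step with no obvious biological implementation.

```python
# Two-layer net trained by backprop. The backward pass reuses W2.T: the same
# weights used in the forward direction, "transported" to the feedback path.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))           # inputs
Y = rng.standard_normal((200, 2))           # regression targets
W1 = 0.5 * rng.standard_normal((8, 16))
W2 = 0.5 * rng.standard_normal((16, 2))
lr, n = 0.05, len(X)

for _ in range(3000):
    H = np.tanh(X @ W1)                     # forward pass
    E = H @ W2 - Y                          # output error
    dW2 = H.T @ E / n                       # gradient for the output weights
    dH = E @ W2.T                           # error carried back through W2.T
    dW1 = X.T @ (dH * (1 - H**2)) / n       # tanh'(z) = 1 - tanh(z)^2
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"MSE: {np.mean(E**2):.3f}")          # falls as training proceeds
```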
Saturday, September 29, 2018
Intuition and the two brains, revisited
Iain McGilchrist, author of The Master and His Emissary: The Divided Brain and the Making of the Western World, in conversation with Jordan Peterson.
I wrote about McGilchrist in 2012: Intuition and the two brains.
Albert Einstein:
“The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
Wigner on Einstein and von Neumann:
"But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci's brilliance, he never produced anything as original."
From Schwinger's Feynman eulogy:
"An honest man, the outstanding intuitionist of our age..."
Feynman:
"We know a lot more than we can prove."
... "if the brain is all about making connections, why is it that it's evolved with this whopping divide down the middle?"
... [chicks] use the eye connected to the left hemisphere to attend to the fine detail of picking seeds from amongst grit, whilst the other eye attends to the broader threat from predators. According to the author, "The left hemisphere has its own agenda, to manipulate and use the world"; its world view is essentially that of a mechanism. The right has a broader outlook, "has no preconceptions, and simply looks out to the world for whatever might be. In other words it does not have any allegiance to any particular set of values."
... "The right hemisphere sees a great deal, but in order to refine it, and to make sense of it in certain ways---in order to be able to use what it understands of the world and to be able to manipulate the world---it needs to delegate the job of simplifying it and turning it into a usable form to another part of the brain" [the left hemisphere]. ... the left hemisphere has a "narrow, decontextualised and theoretically based model of the world which is self consistent and is therefore quite powerful" and to the problem of the left hemisphere's lack of awareness of its own shortcomings; whilst in contrast, the right hemisphere is aware that it is in a symbiotic relationship.
Roger Sperry: ... each hemisphere is "indeed a conscious system in its own right, perceiving, thinking, remembering, reasoning, willing, and emoting, all at a characteristically human level, and . . . both the left and the right hemisphere may be conscious simultaneously in different, even in mutually conflicting, mental experiences that run along in parallel."
Split-brain structure (with the different hemispheres having very distinct structures and morphologies) is common to all higher organisms (as far as I know). Is this structure just an accident of evolution? Or does the (putative) split between a systematizing core and a big-picture intuitive core play an important role in higher cognition?
AGI optimists sometimes claim that deep learning and existing neural net structures are capable of taking us all the way to AGI (human-like cognition and beyond). I think there is a significant chance that neural-architectural structures necessary for, e.g., recurrent memory, meta-reasoning, theory of mind, creative generation of ideas, integration of inferences developed from observation into more general hypotheses/models, etc. still need to be developed. Any step requiring development of novel neural architecture could easily take researchers a decade to accomplish. So a timescale > 30-50 years for AGI, even in highly optimistic scenarios, seems quite possible to me.
Thursday, July 05, 2018
Cognitive ability predicted from fMRI (Caltech Neuroscience)
Caltech researchers used elastic net (L1 and L2 penalization) to train a predictor using cognitive scores and fMRI data from ~900 individuals. The predictor captures about 20% of the variance in intelligence; the score correlates about 0.45 with actual intelligence. This may validate earlier work by Korean researchers in 2015, although the Korean group claimed much higher predictive correlations.
Press release:
In a new study, researchers from Caltech, Cedars-Sinai Medical Center, and the University of Salerno show that their new computing tool can predict a person's intelligence from functional magnetic resonance imaging (fMRI) scans of their resting state brain activity. Functional MRI develops a map of brain activity by detecting changes in blood flow to specific brain regions. In other words, an individual's intelligence can be gleaned from patterns of activity in their brain when they're not doing or thinking anything in particular—no math problems, no vocabulary quizzes, no puzzles.
"We found if we just have people lie in the scanner and do nothing while we measure the pattern of activity in their brain, we can use the data to predict their intelligence," says Ralph Adolphs (PhD '92), Bren Professor of Psychology, Neuroscience, and Biology, and director and Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Caltech Brain Imaging Center.
To train their algorithm on the complex patterns of activity in the human brain, Adolphs and his team used data collected by the Human Connectome Project (HCP), a scientific endeavor funded by the National Institutes of Health (NIH) that seeks to improve understanding of the many connections in the human brain. Adolphs and his colleagues downloaded the brain scans and intelligence scores from almost 900 individuals who had participated in the HCP, fed these into their algorithm, and set it to work.
After processing the data, the team's algorithm was able to predict intelligence at statistically significant levels across these 900 subjects, says Julien Dubois (PhD '13), a postdoctoral fellow at Cedars-Sinai Medical Center. But there is a lot of room for improvement, he adds. The scans are coarse and noisy measures of what is actually happening in the brain, and a lot of potentially useful information is still being discarded.
"The information that we derive from the brain measurements can be used to account for about 20 percent of the variance in intelligence we observed in our subjects," Dubois says. "We are doing very well, but we are still quite far from being able to match the results of hour-long intelligence tests, like the Wechsler Adult Intelligence Scale,"
Dubois also points out a sort of philosophical conundrum inherent in the work. "Since the algorithm is trained on intelligence scores to begin with, how do we know that the intelligence scores are correct?" The researchers addressed this issue by extracting a more precise estimate of intelligence across 10 different cognitive tasks that the subjects had taken, not only from an IQ test. ...
Paper:
A distributed brain network predicts general intelligence from resting-state human neuroimaging data
Individual people differ in their ability to reason, solve problems, think abstractly, plan and learn. A reliable measure of this general ability, also known as intelligence, can be derived from scores across a diverse set of cognitive tasks. There is great interest in understanding the neural underpinnings of individual differences in intelligence, since it is the single best predictor of long-term life success, and since individual differences in a similar broad ability are found across animal species. The most replicated neural correlate of human intelligence to date is total brain volume. However, this coarse morphometric correlate gives no insights into mechanisms; it says little about function. Here we ask whether measurements of the activity of the resting brain (resting-state fMRI) might also carry information about intelligence. We used the final release of the Young Adult Human Connectome Project dataset (N=884 subjects after exclusions), providing a full hour of resting-state fMRI per subject; controlled for gender, age, and brain volume; and derived a reliable estimate of general intelligence from scores on multiple cognitive tasks. Using a cross-validated predictive framework, we predicted 20% of the variance in general intelligence in the sampled population from their resting-state fMRI data. Interestingly, no single anatomical structure or network was responsible or necessary for this prediction, which instead relied on redundant information distributed across the brain.
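For concreteness, here is a minimal sketch of the kind of cross-validated elastic net analysis described above, with synthetic data standing in for the HCP resting-state features and intelligence scores (my own illustration, not the authors' pipeline):

```python
# Elastic net (L1 + L2 penalties) with nested cross-validation: internal CV
# picks the penalty strength, outer CV yields out-of-sample predictions.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_features = 900, 1000

X = rng.standard_normal((n_subjects, n_features))  # stand-in fMRI features
beta = np.zeros(n_features)
beta[:50] = 0.1 * rng.standard_normal(50)          # a few informative features
g = X @ beta + rng.standard_normal(n_subjects)     # stand-in "g" scores

model = ElasticNetCV(l1_ratio=0.5, cv=5, n_alphas=20)
g_hat = cross_val_predict(model, X, g, cv=5)       # out-of-sample predictions

r = np.corrcoef(g, g_hat)[0, 1]
print(f"out-of-sample r = {r:.2f}, variance explained = {r**2:.1%}")
```

The out-of-sample r² here is the analogue of the 20% variance-explained figure quoted above; r itself corresponds to the ~0.45 correlation.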
Saturday, January 27, 2018
Mathematical Theory of Deep Neural Networks (Princeton workshop)
This looks interesting. Deep Learning would benefit from a stronger theoretical understanding of why it works so well. I hope they put the talks online!
Mathematical Theory of Deep Neural Networks
Tuesday March 20th, Princeton Neuroscience Institute.
PNI Psychology Lecture Hall 101
Recent advances in deep networks, combined with open, easily-accessible implementations, have moved empirical results far faster than formal understanding. The lack of rigorous analysis for these techniques limits their use in addressing scientific questions in the physical and biological sciences, and prevents systematic design of the next generation of networks. Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems.
This intensive one-day technical workshop will focus on state-of-the-art theoretical understanding of deep learning. We aim to bring together researchers from the Princeton Neuroscience Institute (PNI) and the theoretical machine learning group at the Institute for Advanced Study (IAS) interested in more rigorously understanding deep networks, to foster increased discussion and collaboration across these intrinsically related groups.
Friday, January 19, 2018
Allen Institute meeting on Genetics of Complex Traits
You can probably tell by all the photos below that I love their new building :-)
I was a participant in this event: What Makes Us Human? The Genetics of Complex Traits (Allen Frontiers Group), including a small second-day workshop with just the speakers and the Allen Institute leadership. This workshop will, I hope, result in some interesting new initiatives in complex trait genomics!
I'd like to thank the Allen Institute organizers for making this such a pleasant and productive 2 days. I learned some incredible things from the other speakers and I recommend all of their talks -- available here.
My talk:
Action photos:
Working hard on day 2 in the little conference room :-)
Tuesday, January 16, 2018
What Makes Us Human? The Genetics of Complex Traits (Allen Frontiers Group)
I'll be attending this meeting in Seattle the next few days.
Recent research has led to new insights on how genes shape brain structure and development, and their impact on individual variation. Although significant inroads have been made in understanding the genetics underlying disease risk, what about the complex traits of extraordinary variation - such as cognition, superior memory, etc.? Can current advances shed light on genetic components underpinning these variations?
Personal genomics, biobank resources, emerging statistical genetics methods and neuroimaging capabilities are opening new frontiers in the field of complex trait analysis. This symposium will highlight experts using diverse approaches to explore a spectrum of individual variation of the human mind.
We are at a unique moment in bioscience. New ideas, combined with emerging technologies, will create unprecedented and transformational insights into living systems. Accelerating the pace of this change requires a thoughtful and agile exploration of the entire landscape of bioscience, across disciplines and spheres of research. Launched in 2016 with a $100 million commitment toward a larger 10-year plan, The Paul G. Allen Frontiers Group will discover and support scientific ideas that change the world. We are committed to a continuous conversation with the scientific community that allows us to remain at the ever-changing frontiers of science and reimagine what is possible.
Paul Allen (MSFT co-founder) is a major supporter of scientific research, including the Allen Institute for Brain Science. Excerpts from his memoir, Idea Man.
My talk is scheduled for 3:55 PM Pacific Weds 1/17. All talks will be streamed on the Allen Institute Facebook page.
Friday, December 08, 2017
Recursive Cortical Networks: data efficient computer vision
Will knowledge from neuroscience inform the design of better AIs (neural nets)? These results from startup Vicarious AI suggest that the answer is yes! (See also this company blog post describing the research.)
It has often been remarked that evolved biological systems (e.g., a baby) can learn much faster and using much less data than existing artificial neural nets. Significant improvements in AI are almost certainly within reach...
Thanks to reader and former UO Physics colleague Raghuveer Parthasarathy for a pointer to this paper!
A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs
Science 08 Dec 2017: Vol. 358, Issue 6368, eaag2612
DOI: 10.1126/science.aag2612
INTRODUCTION
Compositionality, generalization, and learning from a few examples are among the hallmarks of human intelligence. CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), images used by websites to block automated interactions, are examples of problems that are easy for people but difficult for computers. CAPTCHAs add clutter and crowd letters together to create a chicken-and-egg problem for algorithmic classifiers—the classifiers work well for characters that have been segmented out, but segmenting requires an understanding of the characters, which may be rendered in a combinatorial number of ways. CAPTCHAs also demonstrate human data efficiency: A recent deep-learning approach for parsing one specific CAPTCHA style required millions of labeled examples, whereas humans solve new styles without explicit training.
By drawing inspiration from systems neuroscience, we introduce recursive cortical network (RCN), a probabilistic generative model for vision in which message-passing–based inference handles recognition, segmentation, and reasoning in a unified manner. RCN learns with very little training data and fundamentally breaks the defense of modern text-based CAPTCHAs by generatively segmenting characters. In addition, RCN outperforms deep neural networks on a variety of benchmarks while being orders of magnitude more data-efficient.
RATIONALE
Modern deep neural networks resemble the feed-forward hierarchy of simple and complex cells in the neocortex. Neuroscience has postulated computational roles for lateral and feedback connections, segregated contour and surface representations, and border-ownership coding observed in the visual cortex, yet these features are not commonly used by deep neural nets. We hypothesized that systematically incorporating these findings into a new model could lead to higher data efficiency and generalization. Structured probabilistic models provide a natural framework for incorporating prior knowledge, and belief propagation (BP) is an inference algorithm that can match the cortical computational speed. The representational choices in RCN were determined by investigating the computational underpinnings of neuroscience data under the constraint that accurate inference should be possible using BP.
RESULTS
RCN was effective in breaking a wide variety of CAPTCHAs with very little training data and without using CAPTCHA-specific heuristics. By comparison, a convolutional neural network required a 50,000-fold larger training set and was less robust to perturbations to the input. Similar results are shown on one- and few-shot MNIST (modified National Institute of Standards and Technology handwritten digit data set) classification, where RCN was significantly more robust to clutter introduced during testing. As a generative model, RCN outperformed neural network models when tested on noisy and cluttered examples and generated realistic samples from one-shot training of handwritten characters. RCN also proved to be effective at an occlusion reasoning task that required identifying the precise relationships between characters at multiple points of overlap. On a standard benchmark for parsing text in natural scenes, RCN outperformed state-of-the-art deep-learning methods while requiring 300-fold less training data.
CONCLUSION
Our work demonstrates that structured probabilistic models that incorporate inductive biases from neuroscience can lead to robust, generalizable machine learning models that learn with high data efficiency. In addition, our model’s effectiveness in breaking text-based CAPTCHAs with very little training data suggests that websites should seek more robust mechanisms for detecting automated interactions.
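The inference engine in RCN is belief propagation. As a reminder of what BP computes, here is a minimal sum-product example of my own on a three-node chain (where BP is exact; this is unrelated to the RCN codebase):

```python
# Sum-product belief propagation on a 3-node chain x1 - x2 - x3.
import itertools
import numpy as np

psi = np.array([[2.0, 1.0], [1.0, 2.0]])   # pairwise potential (favors agreement)
phi = [np.array([1.0, 1.0]),               # unary potentials for x1, x2, x3
       np.array([3.0, 1.0]),
       np.array([1.0, 1.0])]

# Forward and backward messages along the chain.
m12 = psi.T @ phi[0]                 # x1 -> x2
m23 = psi.T @ (phi[1] * m12)         # x2 -> x3
m32 = psi @ phi[2]                   # x3 -> x2
m21 = psi @ (phi[1] * m32)           # x2 -> x1

beliefs = [phi[0] * m21, phi[1] * m12 * m32, phi[2] * m23]
beliefs = [b / b.sum() for b in beliefs]

# Brute-force marginals for comparison; on a tree BP must match exactly.
joint = np.zeros((2, 2, 2))
for a, b, c in itertools.product(range(2), repeat=3):
    joint[a, b, c] = phi[0][a] * phi[1][b] * phi[2][c] * psi[a, b] * psi[b, c]
joint /= joint.sum()
print(np.allclose(beliefs[0], joint.sum(axis=(1, 2))))  # True
print(np.allclose(beliefs[2], joint.sum(axis=(0, 1))))  # True
```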
Thursday, October 26, 2017
The Physicist and the Neuroscientist: A Tale of Two Connectomes
This is video of an excellent talk on the human connectome by neuroscientist Bobby Kasthuri of Argonne National Lab and the University of Chicago. (You can see me sitting on the floor in the corner :-)
The story below is for entertainment purposes only. No triggering of biologists is intended.
The Physicist and the Neuroscientist: A Tale of Two Connectomes
More Bobby, with more hair.
Steve burst into Bobby's lab, a small metal box under one arm. Startled, Bobby nearly knocked over his Zeiss electron microscope.
I've got it! shouted Steve. My former student at DeepBrain sent me one of their first AGIs. It's hot out of their 3D neuromorphic chip printer.
This is the thing that talks and understands quantum mechanics? asked Bobby.
Yes, if I just plug it in. He tapped the box -- This deep net has 10^10 connections! Within spitting distance of our brains, but much more efficient. They trained it in their virtual simulator world. Some of the algos are based on my polytope paper from last year. It not only knows QM, it understands what you mean by "How much is that doggie in the window?" :-)
Has anyone mapped the connections?
Sort of, I mean the strengths and topology are determined by the training and algos... It was all done virtually. Printed into spaghetti in this box.
We've got to scan it right away! My new rig can measure 10^5 connections per second!
What for? It's silicon spaghetti. It works how it works, but we created it! Specific connections... that's like collecting postage stamps.
No, but we need to UNDERSTAND HOW IT WORKS!
...
Why don't you just ask IT? thought Steve, as he left Bobby's lab.
Sunday, August 20, 2017
Ninety-nine genetic loci influencing general cognitive function
The paper below has something like 200 authors from over 100 institutions worldwide.
Many people claimed just a few years ago (or more recently!) that results like this were impossible. Will they admit their mistake?
In Scientific Consensus on Cognitive Ability? I described the current consensus among experts as follows.
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size
See figures below for a summary of progress over the last six years. Note 4% of total variance = 1/25 and sqrt(1/25) = 1/5, so a predictor built from these variants would correlate ~0.2 with actual cognitive ability. There is still much more variance to be discovered with larger samples, of course.
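A quick numerical check of that arithmetic (my own simulation, not data from the paper):

```python
# A predictor explaining 4% of trait variance correlates ~0.2 with the trait.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
score = rng.standard_normal(n)                          # polygenic score
noise = rng.standard_normal(n)
trait = np.sqrt(0.04) * score + np.sqrt(0.96) * noise   # 4% of variance from score

r = np.corrcoef(score, trait)[0, 1]
print(f"r = {r:.3f}, r^2 = {r**2:.3f}")                 # ~0.2 and ~0.04
```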
Ninety-nine independent genetic loci influencing general cognitive function include genes associated with brain health and structure (N = 280,360)
General cognitive function is a prominent human trait associated with many important life outcomes including longevity. The substantial heritability of general cognitive function is known to be polygenic, but it has had little explication in terms of the contributing genetic variants. Here, we combined cognitive and genetic data from the CHARGE and COGENT consortia, and UK Biobank (total N=280,360). We found 9,714 genome-wide significant SNPs in 99 independent loci. Most showed clear evidence of functional importance. Among many novel genes associated with general cognitive function were SGCZ, ATXN1, MAPT, AUTS2, and P2RY6. Within the novel genetic loci were variants associated with neurodegenerative disorders, neurodevelopmental disorders, physical and psychiatric illnesses, brain structure, and BMI. Gene-based analyses found 536 genes significantly associated with general cognitive function; many were highly expressed in the brain, and associated with neurogenesis and dendrite gene sets. Genetic association results predicted up to 4% of general cognitive function variance in independent samples. There was significant genetic overlap between general cognitive function and information processing speed, as well as many health variables including longevity.
Thursday, August 10, 2017
Meanwhile, down on the Farm
Note Added in response to a 2020 Twitter mob attack which attempts to misrepresent my views: This blog post discusses the firing of James Damore by Google. It was a sensation at the time in Silicon Valley and made national news. This post is primarily about the scientific content of Damore's memo. Initial media reports describing his memo were very misleading and few people made the effort to read what Damore actually wrote before attacking him. I happened to notice that the Stanford Medical School magazine had (by coincidence) just featured an article on some of the issues discussed by Damore. Whether (below) the Stanford neuroscientist Nirao Shah or the former President of the American Psychological Association Diane Halpern are correct or not about the science, it seems unfair to call Damore a crank if he is simply referencing (in good faith) results in the published scientific literature. The same kinds of results are presented in the article below, written for the alumni of Stanford Medical School.
In the second part of the post below I describe some recent survey results on individual preferences among mathematically gifted men and women who are part of a ~50 year longitudinal study -- they have been studied since childhood. I note specifically that differences in preferences between men and women are not necessarily biological in origin (we simply don't know): they could be the result of sexism in child rearing, schooling, postdoc training, etc.
However, the point is that the survey results are likely descriptive of how actual adult men and women think and feel, and may have implications for labor markets. This is NOT a discussion about ability differences between men and women (all the individuals in the study are mathematically gifted), but rather about preferences concerning life fulfillment, lifestyle, work-life balance, etc. And again, no causation is assumed -- the situation may be entirely due to sexism in society, with zero biological basis.
The Spring 2017 issue of the Stanford Medical School magazine has a special theme: Sex, Gender, and Medicine. I recommend the article excerpted below to journalists covering the Google Manifesto / James Damore firing. After reading it, they can decide for themselves whether his memo is based on established neuroscience or bro-pseudoscience.
Perhaps top Google executives will want to head down the road to Stanford for a refresher course in reality.
Stanford Neuroscience Professor Nirao Shah and Diane Halpern, past president of the American Psychological Association, would both make excellent expert witnesses in the Trial of the Century.
Two minds: The cognitive differences between men and women
... Nirao Shah decided in 1998 to study sex-based differences in the brain ... “I wanted to find and explore neural circuits that regulate specific behaviors,” says Shah, then a newly minted Caltech PhD who was beginning a postdoctoral fellowship at Columbia. So, he zeroed in on sex-associated behavioral differences in mating, parenting and aggression.
“These behaviors are essential for survival and propagation,” says Shah, MD, PhD, now a Stanford professor of psychiatry and behavioral sciences and of neurobiology. “They’re innate rather than learned — at least in animals — so the circuitry involved ought to be developmentally hard-wired into the brain. These circuits should differ depending on which sex you’re looking at.”
His plan was to learn what he could about the activity of genes tied to behaviors that differ between the sexes, then use that knowledge to help identify the neuronal circuits — clusters of nerve cells in close communication with one another — underlying those behaviors.
At the time, this was not a universally popular idea. The neuroscience community had largely considered any observed sex-associated differences in cognition and behavior in humans to be due to the effects of cultural influences. Animal researchers, for their part, seldom even bothered to use female rodents in their experiments, figuring that the cyclical variations in their reproductive hormones would introduce confounding variability into the search for fundamental neurological insights.
But over the past 15 years or so, there’s been a sea change as new technologies have generated a growing pile of evidence that there are inherent differences in how men’s and women’s brains are wired and how they work.
... There was too much data pointing to the biological basis of sex-based cognitive differences to ignore, Halpern says. For one thing, the animal-research findings resonated with sex-based differences ascribed to people. These findings continue to accrue. In a study of 34 rhesus monkeys, for example, males strongly preferred toys with wheels over plush toys, whereas females found plush toys likable. It would be tough to argue that the monkeys’ parents bought them sex-typed toys or that simian society encourages its male offspring to play more with trucks. A much more recent study established that boys and girls 9 to 17 months old — an age when children show few if any signs of recognizing either their own or other children’s sex — nonetheless show marked differences in their preference for stereotypically male versus stereotypically female toys.
Halpern and others have cataloged plenty of human behavioral differences. “These findings have all been replicated,” she says.
... “You see sex differences in spatial-visualization ability in 2- and 3-month-old infants,” Halpern says. Infant girls respond more readily to faces and begin talking earlier. Boys react earlier in infancy to experimentally induced perceptual discrepancies in their visual environment. In adulthood, women remain more oriented to faces, men to things.
All these measured differences are averages derived from pooling widely varying individual results. While statistically significant, the differences tend not to be gigantic. They are most noticeable at the extremes of a bell curve, rather than in the middle, where most people cluster. ...
See also Gender differences in preferences, choices, and outcomes: SMPY longitudinal study. These preference asymmetries are not necessarily determined by biology. They could be entirely due to societal influences. But nevertheless, they characterize the pool of human capital from which Google is trying to hire.
The recent SMPY paper below describes a group of mathematically gifted (top 1% ability) individuals who have been followed for 40 years. This is precisely the pool from which one would hope to draw STEM and technological leadership talent. There are 1037 men and 613 women in the study.
The figures show significant gender differences in life and career preferences, which affect choices and outcomes even after ability is controlled for. According to the results, SMPY men are more concerned with money, prestige, success, creating or inventing something with impact, etc. SMPY women prefer time and work flexibility, want to give back to the community, and are less comfortable advocating unpopular ideas. Some of these asymmetries are at the 0.5 SD level or greater. Here are three survey items with a ~ 0.4 SD or more asymmetry:
# Society should invest in my ideas because they are more important than those of other people.
# Discomforting others does not deter me from stating the facts.
# Receiving criticism from others does not inhibit me from expressing my thoughts.
I would guess that Silicon Valley entrepreneurs and leading technologists are typically about +2 SD on each of these items! One can directly estimate M/F ratios from these parameters ...
For example, if a typical male SV entrepreneur / tech leader is roughly +2SD on these traits whereas a female is +2.5SD, the population fraction would be 3:1 or 4:1 larger for males. This doesn't mean that the females who are > +2.5SD (in the female population) are ill-suited to the role (they may be as good as the men), just that there are fewer of them in the general population. I was shocked to see that even top Google leadership didn't understand this point that Damore tried to make in his memo.
A 6ft3 Asian-American guard (Jeremy Lin) might be just as good as other guards in the NBA, but the fraction of Asian-American males who are 6ft3 is smaller than for other groups, like African-Americans. Even if there were no discrimination against Asian players, you'd expect to see fewer (relative to base population) in the NBA due to the average height difference.
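The tail-ratio arithmetic behind both examples is easy to check, assuming normally distributed traits and a 0.5 SD difference in group means (a sketch of my own, not from the SMPY paper):

```python
# Fraction of each group above a +2 SD cutoff when group B's mean is 0.5 SD lower.
from scipy.stats import norm

cutoff, shift = 2.0, 0.5
p_a = norm.sf(cutoff)            # group A above the cutoff (~2.3%)
p_b = norm.sf(cutoff + shift)    # group B above the same cutoff (~0.6%)
print(f"{p_a:.4f} vs {p_b:.4f} -> ratio {p_a / p_b:.1f} : 1")   # ~3.7 : 1
```

A 0.5 SD shift in means at a +2 SD cutoff gives roughly the 3:1 to 4:1 representation ratio mentioned above.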
Sunday, July 30, 2017
Like little monkeys: How the brain does face recognition
This is a Caltech TEDx talk from 2013, in which Doris Tsao discusses her work on the neuroscience of human face recognition. Recently I blogged about her breakthrough in identifying the face recognition algorithm used by monkey (and presumably human) brains. The algorithm seems similar to those used in machine face recognition: individual neurons perform feature detection just as in neural nets. This is not surprising from a purely information-theoretic perspective, if we just think about the space of facial variation and the optimal encoding. But it is amazing to be able to demonstrate it by monitoring specific neurons in a monkey brain.
An earlier research claim that certain neurons are sensitive only to specific faces (which she recapitulates @8:50 in this four-year-old video) seems not to be true. I always found it implausible.
On her faculty web page Tsao talks about her decision to attend Caltech as an undergraduate:
One day, my father went on a trip to California and took a tour of Caltech with a friend. He came back and told me about a monastery for science, located under the mountains amidst flowers and orange trees, where all the students looked very skinny and super smart, like little monkeys. I was intrigued. I went to a presentation about Caltech by a visiting admissions officer, who showed slides of students taking tests under olive trees, swimming in the Pacific, huddled in a dorm room working on a problem set... I decided: this is where I want to go to college! I dreamed every day about being accepted to Caltech. After I got my acceptance letter, I began to worry that I would fall behind in the first year, since I had heard about how hard the course load is. So I went to the library and started reading the Feynman Lectures. This was another world…where one could see beneath the surface of things, ask why, why, why, why? And the results of one’s mental deliberations actually could be tested by experiments and reveal completely unexpected yet real phenomena, like magnetism as a consequence of the invariance of the speed of light.
See also Feynman Lectures: Epilogue and Where Men are Men, and Giants Walk the Earth.
Thursday, June 29, 2017
How the brain does face recognition
This is a beautiful result. IIUC, these neuroscientists use the terminology "face axis" for what machine learning would call variation along an eigenface or feature vector.
Scientific American: ...using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at Caltech have finally cracked the neural code for face recognition. The researchers found the firing rate of each face cell corresponds to separate facial features along an axis. Like a set of dials, the cells are fine-tuned to bits of information, which they can then channel together in different combinations to create an image of every possible face. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees, by simply tracking the electrical activity of its face cells.”
I never believed the "Jennifer Aniston neuron" results, which seemed implausible from a neural architecture perspective. I thought the encoding had to be far more complex and modular. Apparently that's the case. The single neuron claim has been widely propagated (for over a decade!) but now seems to be yet another result that fails to replicate after invading the meme space of credulous minds.
... neuroscientist Rodrigo Quian Quiroga found that pictures of actress Jennifer Aniston elicited a response in a single neuron. And pictures of Halle Berry, members of The Beatles or characters from The Simpsons activated separate neurons. The prevailing theory among researchers was that each neuron in the face patches was sensitive to a few particular people, says Quiroga, who is now at the University of Leicester in the U.K. and not involved with the work. But Tsao’s recent study suggests scientists may have been mistaken. “She has shown that neurons in face patches don’t encode particular people at all, they just encode certain features,” he says. “That completely changes our understanding of how we recognize faces.”
Modular feature sensitivity -- just like in neural net face recognition:
... To decipher how individual cells helped recognize faces, Tsao and her postdoc Steven Le Chang drew dots around a set of faces and calculated variations across 50 different characteristics. They then used this information to create 2,000 different images of faces that varied in shape and appearance, including roundness of the face, distance between the eyes, skin tone and texture. Next the researchers showed these images to monkeys while recording the electrical activity from individual neurons in three separate face patches.
All that mattered for each neuron was a single-feature axis. Even when viewing different faces, a neuron that was sensitive to hairline width, for example, would respond to variations in that feature. But if the faces had the same hairline and different-size noses, the hairline neuron would stay silent, Chang says. The findings explained a long-disputed issue in the previously held theory of why individual neurons seemed to recognize completely different people.
Moreover, the neurons in different face patches processed complementary information. Cells in one face patch—the anterior medial patch—processed information about the appearance of faces such as distances between facial features like the eyes or hairline. Cells in other patches—the middle lateral and middle fundus areas—handled information about shapes such as the contours of the eyes or lips. Like workers in a factory, the various face patches did distinct jobs, cooperating, communicating and building on one another to provide a complete picture of facial identity.
Once Chang and Tsao knew how the division of labor occurred among the “factory workers,” they could predict the neurons’ responses to a completely new face. The two developed a model for which feature axes were encoded by various neurons. Then they showed monkeys a new photo of a human face. Using their model of how various neurons would respond, the researchers were able to re-create the face that a monkey was viewing. “The re-creations were stunningly accurate,” Tsao says. In fact, they were nearly indistinguishable from the actual photos shown to the monkeys.
This is the original paper in Cell:

The Code for Facial Identity in the Primate Brain
Le Chang, Doris Y. Tsao
Highlights
• Facial images can be linearly reconstructed using responses of ∼200 face cells
• Face cells display flat tuning along dimensions orthogonal to the axis being coded
• The axis model is more efficient, robust, and flexible than the exemplar model
• Face patches ML/MF and AM carry complementary information about faces
Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.

200 cells is interesting because (IIRC) standard deep learning face recognition packages right now use a 128-dimensional feature space. These packages perform roughly as well as humans (or perhaps a bit better?).
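The axis model in the summary is linear, which makes it easy to sketch. Below is a toy numpy version (all numbers invented; in the actual paper the axes are fit to recorded firing rates): each of ~200 cells fires in proportion to the projection of the face vector onto its preferred axis, the population response can be inverted by least squares to decode the face, and a perturbation orthogonal to a cell's axis leaves that cell's rate unchanged, i.e. the "flat tuning" in the highlights above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_cells = 50, 200                  # 50-dim face space, ~200 face cells (per the paper)
W = rng.normal(size=(n_cells, d))     # row i: cell i's preferred axis (hypothetical)

face = rng.normal(size=d)             # a face as a point in face space
rates = W @ face                      # axis model: rate_i proportional to w_i . face

# Decoding: least squares recovers the face from the population rates
# (exactly, in this noiseless toy).
face_hat, *_ = np.linalg.lstsq(W, rates, rcond=None)
print(np.allclose(face, face_hat))    # True

# "Flat tuning along dimensions orthogonal to the axis being coded":
# move the face orthogonally to cell 0's axis and cell 0's rate is unchanged,
# even though the stimulus itself is drastically different.
w0 = W[0]
v = rng.normal(size=d)
v -= (v @ w0) / (w0 @ w0) * w0        # remove the component along w0
print(np.isclose(w0 @ face, w0 @ (face + 10.0 * v)))   # True
```

This also makes the identical-response demonstration in the abstract transparent: any two faces that differ only along directions orthogonal to a cell's axis elicit the same response from that cell.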