Wednesday, September 30, 2015

Disruptive mutations and the genetic architecture of autism


New results on the genetic architecture of autism support Mike Wigler's Unified Theory. See earlier post De Novo Mutations and Autism. Recent increases in the incidence of autism could be mainly due to greater diagnostic awareness. However, the new result that women can be carriers of autism-linked variants without exhibiting the same kinds of symptoms as men might alter the usual analysis of the role of assortative mating. Perhaps women who are carriers are predisposed to marry nerdy (but mostly asymptomatic) males who also carry an above-average mutational load in autism genes?

I suspect many of the ~200 genes identified in this study will overlap with the ~80 SNPs recently found by SSGAC to be associated with cognitive ability. The principle of continuity suggests that in addition to ultra-rare variants with "devastating" effects, there are many moderately rare variants (also under negative, but weaker, selection due to smaller effect size) affecting the same pathways. These would contribute to variance in cognitive ability within the normal population. More discussion in section 3 of On the Genetic Architecture of Intelligence.
Neuroscience News: Quantitative study identifies 239 genes whose ‘vulnerability’ to devastating de novo mutation makes them priority research targets.

... devastating “ultra-rare” mutations of genes that they classify as “vulnerable” play a causal role in roughly half of all ASD cases. The vulnerable genes to which they refer harbor what they call an LGD, or likely gene-disruption. These LGD mutations can occur “spontaneously” between generations, and when that happens they are found in the affected child but not found in either parent.

Although LGDs can impair the function of key genes, and in this way have a deleterious impact on health, this is not always the case. The study, whose first author is the quantitative biologist Ivan Iossifov, a CSHL assistant professor and on faculty at the New York Genome Center, finds that “autism genes” – i.e., those that, when mutated, may contribute to an ASD diagnosis – tend to have fewer mutations than most genes in the human gene pool.

This seems paradoxical, but only on the surface. Iossifov explains that genes with devastating de novo LGD mutations, when they occur in a child and give rise to autism, usually don’t remain in the gene pool for more than one generation before they are, in evolutionary terms, purged. This is because those born with severe autism rarely reproduce.

The team’s data helps the research community prioritize which genes with LGDs are most likely to play a causal role in ASD. The team pares down a list of about 500 likely causal genes to slightly more than 200 best “candidate” autism genes.

The current study also sheds new light on the transmission to children of LGDs that are carried by parents who harbor them but whose health is nevertheless not severely affected. Such transmission events were observed and documented in the families used in the study, comprising the Simons Simplex Collection (SSC). When parents carry potentially devastating LGD mutations, these are more frequently found in the ASD-affected children than in their unaffected children, and most often come from the mother.

This result supports a theory first published in 2007 by senior author Michael Wigler, a CSHL professor, and Dr. Kenny Ye, a statistician at Albert Einstein College of Medicine. They predicted that unaffected mothers are “carriers” of devastating mutations that are preferentially transmitted to children affected with severe ASD. Females have an as yet unexplained factor that protects them from mutations which, when they occur in males, will be significantly more likely to cause ASD. It is well known that at least four times as many males as females have ASD.

Wigler’s 2007 “unified theory” of sporadic autism causation predicted precisely this effect. “Devastating de novo mutations in autism genes should be under strong negative selection pressure,” he explains. “And that is among the findings of the paper we’re publishing today. Our analysis also revealed that a surprising proportion of rare devastating mutations transmitted by parents occurs in genes expressed in the embryonic brain.” This finding tends to support theories suggesting that at least some of the gene mutations with the power to cause ASD occur in genes that are indispensable for normal brain development.
Here is the paper at PNAS:
Low load for disruptive mutations in autism genes and their biased transmission

We previously computed that genes with de novo (DN) likely gene-disruptive (LGD) mutations in children with autism spectrum disorders (ASD) have high vulnerability: disruptive mutations in many of these genes, the vulnerable autism genes, will have a high likelihood of resulting in ASD. Because individuals with ASD have lower fecundity, such mutations in autism genes would be under strong negative selection pressure. An immediate prediction is that these genes will have a lower LGD load than typical genes in the human gene pool. We confirm this hypothesis in an explicit test by measuring the load of disruptive mutations in whole-exome sequence databases from two cohorts. We use information about mutational load to show that affected individuals with lower and higher intelligence quotients (IQ) can be distinguished by the mutational load in their respective gene targets, as well as to help prioritize gene targets by their likelihood of being autism genes. Moreover, we demonstrate that transmission of rare disruptions in genes with a lower LGD load occurs more often to affected offspring; we show transmission originates most often from the mother, and transmission of such variants is seen more often in offspring with lower IQ. A surprising proportion of transmission of these rare events comes from genes expressed in the embryonic brain that show sharply reduced expression shortly after birth.
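The load comparison described in this abstract is conceptually simple. Here is a minimal sketch (my own illustration, not the authors' pipeline; the file and column names are hypothetical) of how one might check whether candidate autism genes carry fewer disruptive variants per kilobase of coding sequence than background genes in a population exome database:

```python
# Minimal sketch of the LGD-load comparison described in the abstract.
# Assumed (hypothetical) inputs:
#   exome_lgd_counts.csv with columns: gene, lgd_count, coding_length
#   candidate_genes.txt with one gene symbol per line

import pandas as pd
from scipy.stats import mannwhitneyu

counts = pd.read_csv("exome_lgd_counts.csv")
candidates = set(line.strip() for line in open("candidate_genes.txt"))

# Normalize LGD counts by coding length so long genes don't dominate.
counts["lgd_per_kb"] = counts["lgd_count"] / (counts["coding_length"] / 1000)

is_candidate = counts["gene"].isin(candidates)
candidate_load = counts.loc[is_candidate, "lgd_per_kb"]
background_load = counts.loc[~is_candidate, "lgd_per_kb"]

# One-sided test: candidate autism genes are predicted to have LOWER load
# (stronger negative selection against disruptive mutations).
stat, p = mannwhitneyu(candidate_load, background_load, alternative="less")
print(f"median load (candidates):  {candidate_load.median():.3f} LGD/kb")
print(f"median load (background): {background_load.median():.3f} LGD/kb")
print(f"Mann-Whitney one-sided p = {p:.2e}")
```

Under strong negative selection the candidate set should show the lower load, which is the signature the paper reports.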

Saturday, September 26, 2015

Expert Prediction: hard and soft

Jason Zweig writes about Philip Tetlock's Good Judgment Project below. See also Expert Predictions, Perils of Prediction, and this podcast talk by Tetlock.

A quick summary: good amateurs (i.e., smart people who think probabilistically and are well read) typically perform as well as or better than area experts (e.g., PhDs in Social Science, History, Government; MBAs) when it comes to predicting real world outcomes. The marginal returns (in predictive power) to special "expertise" in soft subjects are small. (Most of the returns are in the form of credentialing or signaling ;-)
WSJ: ... I think Philip Tetlock’s “Superforecasting: The Art and Science of Prediction,” co-written with the journalist Dan Gardner, is the most important book on decision making since Daniel Kahneman’s “Thinking, Fast and Slow.” (I helped write and edit the Kahneman book but receive no royalties from it.) Prof. Kahneman agrees. “It’s a manual to systematic thinking in the real world,” he told me. “This book shows that under the right conditions regular people are capable of improving their judgment enough to beat the professionals at their own game.”

The book is so powerful because Prof. Tetlock, a psychologist and professor of management at the University of Pennsylvania’s Wharton School, has a remarkable trove of data. He has just concluded the first stage of what he calls the Good Judgment Project, which pitted some 20,000 amateur forecasters against some of the most knowledgeable experts in the world.

The amateurs won — hands down. Their forecasts were more accurate more often, and the confidence they had in their forecasts — as measured by the odds they set on being right — was more accurately tuned.

The top 2%, whom Prof. Tetlock dubs “superforecasters,” have above-average — but rarely genius-level — intelligence. Many are mathematicians, scientists or software engineers; but among the others are a pharmacist, a Pilates instructor, a caseworker for the Pennsylvania state welfare department and a Canadian underwater-hockey coach.

The forecasters competed online against four other teams and against government intelligence experts to answer nearly 500 questions over the course of four years: Will the president of Tunisia go into exile in the next month? Will the gold price exceed $1,850 on Sept. 30, 2011? Will OPEC agree to cut its oil output at or before its November 2014 meeting?

It turned out that, after rigorous statistical controls, the elite amateurs were on average about 30% more accurate than the experts with access to classified information. What’s more, the full pool of amateurs also outperformed the experts. ...
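Accuracy and calibration in forecasting tournaments like the Good Judgment Project are scored with Brier scores: the mean squared error between stated probabilities and what actually happened. A toy sketch (my own illustration, not GJP code):

```python
# Brier scoring: lower is better; 0.0 is perfect, and always answering
# 50/50 on binary questions scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 1, 0, 1, 0]                       # what actually happened
bold     = [0.9, 0.2, 0.1, 0.8, 0.7, 0.3, 0.9, 0.2]       # calibrated, committed forecaster
hedger   = [0.5] * len(outcomes)                          # always says 50/50

print("bold forecaster:", brier_score(bold, outcomes))    # ~0.04
print("always 50/50:  ", brier_score(hedger, outcomes))   # 0.25
```

The hedger is never badly wrong, but the forecaster who commits to odds near 0 or 1 and is usually right scores far better, which is exactly the "more accurately tuned" confidence described above.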
In technical subjects, such as chemistry or physics or mathematics, experts vastly outperform lay people even on questions related to everyday natural phenomena (let alone specialized topics). See, e.g., examples in Thinking Physics or Physics for Future Presidents. Because these fields have access to deep and challenging questions with demonstrably correct answers, the ability to answer these questions (a combination of cognitive ability and knowledge) is an obviously real and useful construct. See earlier post The Differences are Enormous:
Luis Alvarez laid it out bluntly:
The world of mathematics and theoretical physics is hierarchical. That was my first exposure to it. There's a limit beyond which one cannot progress. The differences between the limiting abilities of those on successively higher steps of the pyramid are enormous.
... People who work in "soft" fields (even in science) don't seem to understand this stark reality. I believe it is because their fields do not have ready access to right and wrong answers to deep questions. When those are available, huge differences in cognitive power are undeniable, as is the utility of this power.
Thought experiment for physicists: imagine a professor throwing copies of Jackson's Classical Electrodynamics at a group of students with the order, "Work out the last problem in each chapter and hand in your solutions to me on Monday!" I suspect that this exercise produces a highly useful rank ordering within the group, with huge differences in number of correct solutions.

Friday, September 25, 2015

Largest repositories of genomic data

This list of the largest repositories of genetic data appeared in the 25 September 2015 issue of Science. Note that the quality and extent of phenotyping varies significantly.
23ANDME

SIZE: >1 million GENETIC DATA: SNPs

This popular personal genomics company now hopes to apply its data to drug discovery (see main story, p. 1472).

ANCESTRY.COM

SIZE: >1 million GENETIC DATA: SNPs

This genealogy firm now has a collaboration with the Google-funded biotech Calico to look for longevity genes.

HUMAN LONGEVITY, INC.

SIZE: 1 million planned GENETIC DATA: whole genomes

Founded by genome pioneer Craig Venter, this company plans to sequence 100,000 people a year to look for aging-related genes.

100K WELLNESS PROJECT

SIZE: 107 (100,000 planned) GENETIC DATA: whole genomes

Led by another sequencing leader, Leroy Hood, this project is taking a systems approach to genetics and health.

MILLION VETERAN PROGRAM

SIZE: 390,000 (1 million planned) GENETIC DATA: SNPs, exomes, whole genomes

This effort, funded by the U.S. Department of Veterans Affairs, is probing the genetics of kidney and heart disease and substance abuse.

U.S. NATIONAL RESEARCH COHORT

SIZE: 1 million planned GENETIC DATA: to be determined

Part of President Obama's Precision Medicine Initiative, this project will use genetics to tailor health care to individuals.

UK BIOBANK

SIZE: 500,000 GENETIC DATA: SNPs

This study of middle-aged Britons is probing links between lifestyle, genes, and common diseases.

100,000 GENOMES PROJECT

SIZE: 5500 (75,000 normal + 25,000 tumor genomes planned) GENETIC DATA: whole genomes

This U.K.-funded project focusing on cancer and rare diseases aims to integrate whole genomes into clinical care.

deCODE GENETICS

SIZE: 140,000 GENETIC DATA: SNPs, whole genomes

Now owned by Amgen, this pioneering Icelandic company hunted for disease-related genes in the island country.

KAISER-PERMANENTE BIOBANK

SIZE: 200,000 (500,000 planned) GENETIC DATA: SNPs

This health maintenance organization has published on telomeres and disease risks.

GEISINGER MYCODE

SIZE: 60,000 (250,000 planned) GENETIC DATA: exomes

Geisinger, a Pennsylvania health care provider, works with Regeneron Pharmaceuticals to study DNA links to disease.

VANDERBILT'S BIOVU

SIZE: 192,000 GENETIC DATA: SNPs

Focused on genes that affect common diseases and drug response, BioVU data have been permanently deidentified.

BIOBANK JAPAN

SIZE: 200,000 GENETIC DATA: SNPs

This study collected DNA from volunteers between 2003 and 2007 and is now looking at genetics of common diseases.

CHINA KADOORIE BIOBANK

SIZE: 510,000 GENETIC DATA: SNPs

This study is probing links between genetics, lifestyle and common diseases.

EAST LONDON GENES & HEALTH

SIZE: 100,000 planned GENETIC DATA: exomes

One aim is to find healthy “human knockouts”—people who lack a specific gene—in a population in which marrying relatives is common.

SAUDI HUMAN GENOME PROGRAM

SIZE: 100,000 planned GENETIC DATA: exomes

One aim of this national project is to find genes underlying rare inherited conditions.

CHILDREN'S HOSPITAL OF PHILADELPHIA

SIZE: 100,000 GENETIC DATA: SNPs, exomes

The world's largest pediatric biorepository connects DNA to the hospital's health records for studies of childhood diseases.

Wednesday, September 23, 2015

Understanding Genius: roundtable at the Helix Center, NYC



I'll be part of this roundtable discussion Saturday, Oct 3 in NYC. It's open to the general public and will be live streamed at the YouTube link above. I'm pleased to be on the panel with (among others) Dean Simonton, a UC Davis psychology professor and author of numerous books related to the theme of this meeting.
The Helix Center for Interdisciplinary Investigation
The Marianne & Nicholas Young Auditorium
247 East 82nd Street
New York, NY 10028
Understanding Genius

Schopenhauer defined genius in relation to the more conventional quality of talent. “Talent hits a target others miss. Genius hits a target no one sees.” Is originality indeed the sine qua non of genius? Is there, following Kant, a radical separation of the aesthetic genius from the brilliant scientific mind? What further distinctions might be made between different types of genius? If “The Child is father of the Man,” why don’t child prodigies always grow up to become adult geniuses?

Saturday, September 19, 2015

SNP hits on cognitive ability from 300k individuals

James Lee's talk at ISIR 2015 (via James Thompson) reports on 74 hits at genome-wide statistical significance (p < 5E-8) using educational attainment as the phenotype. Most of these will also turn out to be hits on cognitive ability.

To quote James: "Shock and Awe" for those who doubt that cognitive ability is influenced by genetic variants. This is just the tip of the iceberg, though. I expect thousands more such variants to be discovered before we have accounted for all of the heritability.
74 GENOMIC SITES ASSOCIATED WITH EDUCATIONAL ATTAINMENT PROVIDE INSIGHT INTO THE BIOLOGY OF COGNITIVE PERFORMANCE 
James J Lee

University of Minnesota Twin Cities
Social Science Genetic Association Consortium

Genome-wide association studies (GWAS) have revealed much about the biological pathways responsible for phenotypic variation in many anthropometric traits and diseases. Such studies also have the potential to shed light on the developmental and mechanistic bases of behavioral traits.

Toward this end we have undertaken a GWAS of educational attainment (EA), an outcome that shows phenotypic and genetic correlations with cognitive performance, personality traits, and other psychological phenotypes. We performed a GWAS meta-analysis of ~293,000 individuals, applying a variety of methods to address quality control and potential confounding. We estimated the genetic correlations of several different traits with EA, in essence by determining whether single-nucleotide polymorphisms (SNPs) showing large statistical signals in a GWAS meta-analysis of one trait also tend to show such signals in a meta-analysis of another. We used a variety of bio-informatic tools to shed light on the biological mechanisms giving rise to variation in EA and the mediating traits affecting this outcome. We identified 74 independent SNPs associated with EA (p < 5E-8). The ability of the polygenic score to predict within-family differences suggests that very little of this signal is due to confounding. We found that both cognitive performance (0.82) and intracranial volume (0.39) show substantial genetic correlations with EA. Many of the biological pathways significantly enriched by our signals are active in early development, affecting the proliferation of neural progenitors, neuron migration, axonogenesis, dendrite growth, and synaptic communication. We nominate a number of individual genes of likely importance in the etiology of EA and mediating phenotypes such as cognitive performance.
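(The 5E-8 threshold quoted above is just a Bonferroni correction: roughly 0.05 divided by a million independent common variants.) As a concrete illustration of how hits like these get used, a polygenic score is nothing more than a weighted sum of genotypes, with weights taken from the GWAS effect sizes. A toy sketch, my own illustration rather than SSGAC code:

```python
# Polygenic score = weighted sum over SNPs of effect-allele counts.
# Toy data: random genotypes and effect sizes stand in for real inputs.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1000, 74                                # 74 genome-wide significant hits
genotypes = rng.integers(0, 3, size=(n_people, n_snps))    # 0, 1, or 2 copies of effect allele
betas = rng.normal(0, 0.02, size=n_snps)                   # per-SNP effect sizes from the meta-analysis

polygenic_score = genotypes @ betas                         # one score per person
print(polygenic_score[:5])
```

With only 74 SNPs such a score captures a small slice of the heritability; the point of the post is that this slice grows steadily as sample sizes grow.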
For a hint at what to expect as more data become available, see Five Years of GWAS Discovery and On the genetic architecture of intelligence and other quantitative traits.


What was once science fiction will soon be reality.
Long ago I sketched out a science fiction story involving two Junior Fellows, one a bioengineer (a former physicist, building the next generation of sequencing machines) and the other a mathematician. The latter, an eccentric, was known for collecting signatures -- signed copies of papers and books authored by visiting geniuses (Nobelists, Fields Medalists, Turing Award winners) attending the Society's Monday dinners. He would present each luminary with an ornate (strangely sticky) fountain pen and a copy of the object to be signed. Little did anyone suspect the real purpose: collecting DNA samples to be turned over to his friend for sequencing! The mathematician is later found dead under strange circumstances. Perhaps he knew too much! ...

Friday, September 18, 2015

Bourdieu and the Economy of Symbolic Exchange


From Bobos in Paradise by David Brooks. This part of Bourdieu's oeuvre is, of course, required reading for all academics. By academics, I don't just mean humanists and social scientists. Even those in the hardest of sciences and technology would benefit from considering the political / symbolic economy of their field. Why, exactly, did most positions in top theoretical physics groups go to string theorists over a 20+ year period? See String Theory Quotes, String Theory and All That, Voting and Weighing.
The Economy of Symbolic Exchange

If a university were to offer a course of study on the marketplace of ideas, the writer who would be at the heart of the curriculum would be Pierre Bourdieu. Bourdieu is a French sociologist who is influential among his colleagues but almost entirely unread outside academia because of his atrocious prose style. Bourdieu’s aim is to develop an economy of symbolic exchanges, to delineate the rules and patterns of the cultural and intellectual marketplace. His basic thesis is that all intellectual and cultural players enter the marketplace with certain forms of capital. They may have academic capital (the right degrees), cultural capital (knowledge of a field or art form, a feel for the proper etiquette), linguistic capital (the ability to use language), political capital (the approved positions or affiliations), or symbolic capital (a famous fellowship or award). Intellectuals spend their careers trying to augment their capital and convert one form of capital into another. One intellectual might try to convert knowledge into a lucrative job; another might convert symbolic capital into invitations to exclusive conferences at tony locales; a third might seek to use linguistic ability to destroy the reputations of colleagues so as to become famous or at least controversial.

Ultimately, Bourdieu writes, intellectuals compete to gain a monopoly over the power to consecrate. Certain people and institutions at the top of each specialty have the power to confer prestige and honor on favored individuals, subjects, and styles of discourse. Those who hold this power of consecration influence taste, favor certain methodologies, and define the boundary of their discipline. To be chief consecrator is the intellectual’s dream.

Bourdieu doesn’t just look at the position an intellectual may hold at a given moment; he looks at the trajectory of a career, the successive attitudes, positions, and strategies a thinker adopts while rising or competing in the marketplace. A young intellectual may enter the world armed only with personal convictions. He or she will be confronted, Bourdieu says, with a diverse “field.” There will be daring radical magazines over on one side, staid establishment journals on another, dull but worthy publishing houses here, vanguard but underfunded houses over there. The intellectual will be confronted with rivalries between schools and between established figures. The complex relationships between these and other players in the field will be the tricky and shifting environment in which the intellectual will try to make his or her name. Bourdieu is quite rigorous about the interplay of these forces, drawing elaborate charts of the various fields of French intellectual life, indicating the power and prestige levels of each institution. He identifies which institutions have consecration power over which sections of the field.

Young intellectuals will have to know how to invest their capital to derive maximum “profit,” and they will have to devise strategies for ascent—whom to kiss up to and whom to criticize and climb over. Bourdieu’s books detail a dazzling array of strategies intellectuals use to get ahead. Bourdieu is not saying that the symbolic field can be understood strictly by economic principles. Often, he says, the “loser wins” rule applies. Those who most vociferously and publicly renounce material success win prestige and honor that can be converted into lucre. Nor does Bourdieu even claim that all of the strategies are self-conscious. He says that each intellectual possesses a “habitus,” or personality and disposition, that leads him or her in certain directions and toward certain fields. Moreover, the intellectual will be influenced, often unwillingly or unknowingly, by the gravitational pull of the rivalries and controversies of the field. Jobs will open up, grants will appear, furies will rage. In some ways the field dominates and the intellectuals are blown about within it.

Bourdieu hasn’t quite established himself as the Adam Smith of the symbolic economy. And it probably wouldn’t be very useful for a young intellectual to read him in hopes of picking up career tips, as a sort of Machiavellian Guide for Nobel Prize Wannabes. Rather, Bourdieu is most useful because he puts into prose some of the concepts that most other intellectuals have observed but have not systematized. Intellectual life is a mixture of careerism and altruism (like most other professions). Today the Bobo intellectual reconciles the quest for knowledge with the quest for the summer house.
See this comment, made 11 years ago!
Steve: We are living through a very bad time in particle theory. Without significant experimental guidance all we are left with is speculation and social dynamics driving the field. I hope things will get better when LHC data starts coming in - at least, most of the models currently under consideration will be ruled out (although, of course, not string theory :-)

I will probably write a post at some point about how scientific fields which run through fallow experimental periods longer than 20 years (the length of a person's academic career) are in danger of falling into the traps which beset the humanities and social sciences. These were all discussed by Bourdieu long ago.

Wednesday, September 16, 2015

Gun Crazy

I grew up out in the country in Iowa. Our address was RR1 = "Rural Route 1" :-)  We had a creek, pond, dirtbike (motorcycle) track, and other fun stuff on our property. One of the things I enjoyed most was target shooting and plinking with my .22 -- I'd just walk out the back door and start shooting. With my scope zeroed in I could easily hit a squirrel at 50-100 yards from a standing position.

I haven't had a gun since I left for college, but now that I have kids and live in a gun-friendly state, I thought I might get back into shooting a bit. There are two good ranges (one free) near my house, and my kids are at an age where they can learn gun safety and how to shoot. My wife disagrees, but to me knowing how to handle a gun is a basic life skill.

Gun technology has matured quite a bit since I was a kid. There is amazing stuff available at reasonable prices (red dot scopes!). With YouTube I got up to speed on the new gear really fast. You can't get a look at the internals of most guns while they are at the store, but online you can find complete disassembly videos.

This is a .22lr in an AR15 pattern (Smith and Wesson M&P 15-22, 5.5 lbs):






This is a 3 lb semi-auto .22lr (all polymer except the barrel and firing mechanism) available for just over $100 (Mossberg Blaze):




Ruger pistol, 25 shots at 5m--15m:

Thursday, September 10, 2015

Colleges ranked by Nobel, Fields, Turing and National Academies output

This Quartz article describes Jonathan Wai's research on the rate at which different universities produce alumni who make great contributions to science, technology, medicine, and mathematics. I think the most striking result is the range of outcomes: the top school outperforms good state flagships (R1 universities) by as much as a thousand times. In my opinion the main causative factor is simply filtering by cognitive ability and other personality traits like drive. Psychometrics works!
Quartz: Few individuals will be remembered in history for discovering a new law of nature, revolutionizing a new technology or captivating the world with their ideas. But perhaps these contributions say more about the impact of a university or college than test scores and future earnings. Which universities are most likely to produce individuals with lasting effect on our world?

The US News college rankings emphasize subjective reputation, student retention, selectivity, graduation rate, faculty and financial resources and alumni giving. Recently, other rankings have proliferated, including some based on objective long-term metrics such as individual earning potential. Yet, we know of no evaluations of colleges based on lasting contributions to society. Of course, such contributions are difficult to judge. In the analysis below, we focus primarily on STEM (science, technology, engineering and medicine/mathematics) contributions, which are arguably the least subjective to evaluate, and increasingly more valued in today’s workforce.

We examined six groups of exceptional achievers divided into two tiers, looking only at winners who attended college in the US. Our goal is to create a ranking among US colleges, but of course one could broaden the analysis if desired. The first level included all winners of the Nobel Prize (physics, chemistry, medicine, economics, literature, and peace), Fields Medal (mathematics) and the Turing Award (computer science). The second level included individuals elected to the National Academy of Sciences (NAS), National Academy of Engineering (NAE) or Institute of Medicine (IOM). The National Academies are representative of the top few thousand individuals in all of STEM.

We then traced each of these individuals back to their undergraduate days, creating two lists to examine whether the same or different schools rose to the top. We wanted to compare results across these two lists to see if findings in the first tier of achievement replicated in the second tier of achievement and to increase sample size to avoid the problem of statistical flukes.

Simply counting up the number of awards likely favors larger schools and alumni populations. We corrected for this by computing a per capita rate of production, dividing the number of winners from a given university by an estimate of the relative size of the alumni population. Specifically, we used the total number of graduates over the period 1966-2013 (an alternative method of estimating base population over 100 to 150 years led to very similar lists). This allowed us to objectively compare newer and smaller schools with older and larger schools.

In order to reduce statistical noise, we eliminated schools with only one or two winners of the Nobel, Fields or Turing prize. This resulted in only 25 schools remaining, which are shown below ...
The vast majority of schools have never produced a winner. #114 Ohio State and #115 Penn State, which have highly ranked research programs in many disciplines, have each produced one winner. Despite being top-tier research universities, their per capita rate of production is over 400 times lower than that of the highest ranked school, Caltech. Of course, our ranking doesn’t capture all the ways individuals can impact the world. However, achievements in the Nobel categories, plus math and computer science, are of great importance and have helped shape the modern world.

As a replication check with a larger sample, we move to the second category of achievement: National Academy of Science, Engineering, or Medicine membership. The National Academies originated in an Act of Congress, signed by President Abraham Lincoln in 1863. Lifetime membership is conferred through a rigorous election process and is considered one of the highest honors a researcher can receive.
The results are strikingly similar across the two lists. If we had included schools with two winners in the Nobel/Fields/Turing list, Haverford, Oberlin, Rice, and Johns Hopkins would have been in the top 25 on both. For comparison, very good research universities such as #394 Arizona State, #396 Florida State and #411 University of Georgia are outperformed by the top school (Caltech) by 600 to 900 times. To give a sense of the full range: the per capita rate of production of top school to bottom school was about 449 to one for the Nobel/Fields/Turing list and 1788 to one for the National Academies list. These lists include only schools that produced at least one winner—the majority of colleges have produced zero.

What causes these drastically different odds ratios across a wide variety of leading schools? The top schools on our lists tend to be private, with significant financial resources. However, the top public university, UC Berkeley, is ranked highly on both lists: #13 on the Nobel/Fields/Turing and #31 on the National Academies. Perhaps surprisingly, many elite liberal arts colleges, even those not focused on STEM education, such as Swarthmore and Amherst, rose to the top. One could argue that the playing field here is fairly even: accomplished students at Ohio State, Penn State, Arizona State, Florida State and University of Georgia, which lag the leaders by factors of hundreds or almost a thousand, are likely to end up at the same highly ranked graduate programs as individuals who attended top schools on our list. It seems reasonable to conclude that large differences in concentration or density of highly able students are at least partly responsible for these differences in outcome.

Sports fans are unlikely to be surprised by our results. Among all college athletes only a few will win professional or world championships. Some collegiate programs undoubtedly produce champions at a rate far in excess of others. It would be uncontroversial to attribute this differential rate of production both to differences in ability of recruited athletes as well as the impact of coaching and preparation during college. Just as Harvard has a far higher percentage of students scoring 1600 on the SAT than most schools and provides advanced courses suited to those individuals, Alabama may have more freshman defensive ends who can run the forty yard dash in under 4.6 seconds, and the coaches who can prepare them for the NFL.

One intriguing result is the strong correlation (r ~ 0.5) between our ranking (over all universities) and the average SAT score of each student population, which suggests that cognitive ability, as measured by standardized tests, likely has something to do with great contributions later in life. By selecting heavily on measurable characteristics such as cognitive ability, an institution obtains a student body with a much higher likelihood of achievement. The identification of ability here is probably not primarily due to “holistic review” by admissions committees: Caltech is famously numbers-driven in its selection (it has the highest SAT/ACT scores), and outperforms the other top schools by a sizeable margin. While admission to one of the colleges on the lists above is no guarantee of important achievements later in life, the probability is much higher for these select matriculants.

We cannot say whether outstanding achievement should be attributed to the personal traits of the individual which unlocked the door to admission, the education and experiences obtained at the school, or benefits from alumni networks and reputation. These are questions worthy of continued investigation. Our findings identify schools that excel at producing impact, and our method introduces a new way of thinking about and evaluating what makes a college or university great. Perhaps college rankings should be less subjective and more focused on objective real world achievements of graduates.
For analogous results in college football, see here, here and here. Four- and five-star recruits almost always end up at the powerhouse programs, and they are 100x to 1000x more likely to make it as pros than lightly recruited athletes who are nevertheless offered college scholarships.
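The per capita normalization described in the excerpt above is simple enough to sketch. Below, toy numbers (my own, not Wai's data) illustrate the computation: divide each school's winner count by an estimate of its alumni population, then compare the resulting rates.

```python
# Per capita rate of production: winners divided by alumni base.
# All numbers below are made up for illustration.

schools = {
    # school: (winners, alumni 1966-2013)
    "Alpha Tech":   (30,   25_000),
    "Beta College": (12,   18_000),
    "State U":      (3,   400_000),
}

rates = {name: winners / alumni for name, (winners, alumni) in schools.items()}

for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s}  {rate:.2e} winners per alumnus")

# Ratio between top and bottom school on this toy list:
top, bottom = max(rates.values()), min(rates.values())
print(f"top/bottom ratio: {top / bottom:.0f}x")
```

Even with modest absolute counts, the normalization makes the spread across schools easy to see, which is why the real lists show ratios in the hundreds to thousands.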

Wednesday, September 09, 2015

Frozen in time


Although we have whole genome reconstructions, we don't really know what Neanderthals looked like, exactly. It would be amazing to discover an intact individual.
National Geographic: As global warming causes ice sheets and glaciers to retreat, it will be “very, very likely” that a well-preserved Neanderthal will one day emerge, says Hiebert, much like the 40,000-year-old baby mammoth found in Siberia.
See also The Neanderthal Problem.

Monday, September 07, 2015

Meritocracy and DNA

Read Toby Young's new article The Fall of the Meritocracy. The adoption of civil service and educational placement examinations in England and France in the 19th century was consciously based on the Chinese model (see Les Grandes Ecoles Chinoises). Although imperfect, exams work: we have no better system for filtering talent. But few are willing to acknowledge this basic fact.
Quadrant Magazine: In 1958, my father, Michael Young, published a short book called The Rise of the Meritocracy, 1870–2023: An Essay on Education and Equality. It purported to be a paper written by a sociologist in 2034 about the transformation of Britain from a feudal society in which people’s social position and level of income were largely determined by the socio-economic status of their parents into a modern Shangri-La in which status is based solely on merit. He invented the word meritocracy to describe this principle for allocating wealth and prestige and the new society it gave rise to.

The essay begins with the introduction of open examinations for entry into the civil service in the 1870s—hailed as “the beginning of the modern era”—and continues to discuss real events up until the late 1950s, at which point it veers off into fantasy, describing the emergence of a fully-fledged meritocracy in Britain in the second half of the twentieth century. In spite of being semi-fictional, the book is clearly intended to be prophetic—or, rather, a warning. Like George Orwell’s Nineteen Eighty-Four (1949), The Rise of the Meritocracy is a dystopian satire that identifies various aspects of the contemporary world and describes a future they might lead to if left unchallenged. Michael was particularly concerned about the introduction of the 11+ by Britain’s wartime coalition government in 1944, an intelligence test that was used to determine which children should go to grammar schools (the top 15 per cent) and which to secondary moderns and technical schools (the remaining 85 per cent). It wasn’t just the sorting of children into sheep and goats at the age of eleven that my father objected to. As a socialist, he disapproved of equality of opportunity on the grounds that it gave the appearance of fairness to the massive inequalities created by capitalism. He feared that the meritocratic principle would help to legitimise the pyramid-like structure of British society.

In the short term, the book achieved its political aim. It was widely read by Michael’s colleagues in the Labour Party (he ran the party’s research department from 1945 to 1951) and helped persuade his friend Anthony Crosland, who became Labour Education Secretary in 1965, that the 11+ should be phased out and the different types of school created by the 1944 Education Act should be replaced by non-selective, one-size-fits-all comprehensives. Crosland famously declared: “If it’s the last thing I do, I’m going to destroy every f***ing grammar school in England. And Wales and Northern Ireland.” Today, there are only 164 grammar schools in England and sixty-eight in Northern Ireland. There are none in Wales.

But even though my father’s book helped to win the battle over selective education, he lost the war. The term “meritocracy” has now entered the language, and while its meaning hasn’t changed—it is still used to describe the organising principle Michael identified in his book—it has come to be seen as something good rather than bad.[1] The debate about grammar schools rumbles on in Britain, but their opponents no longer argue that a society in which status is determined by merit is undesirable. Rather, they embrace this principle and claim that a universal comprehensive system will lead to higher levels of social mobility than a system that allows some schools to “cream skim” the most intelligent children at the age of eleven.[2]

We are all meritocrats now

Not only do pundits and politicians on all sides claim to be meritocrats—and this is true of most developed countries, not just Britain—they also agree that the principle remains stillborn. In Britain and America there is a continuing debate about whether the rate of inter-generational social mobility has remained stagnant or declined in the past fifty years, but few think it has increased.[3] The absence of opportunities for socio-economic advancement is now seen as one of the key political problems facing Western democracies, leading to the moral collapse of the indigenous white working class, the alienation of economically unsuccessful migrant groups, and unsustainable levels of welfare dependency. This cluster of issues is the subject of several recent books by prominent political scientists, most notably Our Kids: The American Dream in Crisis (2015) by Robert Putnam.

Unlike my father, I’m not an egalitarian. As Friedrich Hayek and others have pointed out, the difficulty with end-state equality is that it can only be achieved at too great a human cost. Left to their own devices, some men will inevitably accumulate more wealth than others, whether through ability or luck, and the only way to “correct” this is through the state’s use of coercive power. If the history of the twentieth century teaches us anything, it is that the dream of creating a socialist utopia often leads to the suppression of free speech, the imprisonment of a significant percentage of the population and, in some extreme cases, state-organised mass murder.

Having said that, I recognise that a lack of social mobility poses a threat to the sustainability of liberal democracies and, in common with many others, believe the solution lies in improving our education systems. There is a consensus among most participants in the debate about education reform that the ideal schools are those that manage to eliminate the attainment gap between the children of the rich and the poor. That is, an education system in which children’s exam results don’t vary according to the neighbourhood they’ve grown up in, the income or education of their parents, or the number of books in the family home. Interestingly, there is a reluctance on the part of many liberal educationalists to accept the corollary of this, which is that attainment in these ideal schools would correspond much more strongly with children’s natural abilities. [4] This is partly because it doesn’t sit well with their egalitarian instincts and partly because they reject the idea that intelligence has a genetic basis. But I’m less troubled by this. I want the clever, hard-working children of those in the bottom half of income distribution to move up, and the less able children of those in the top half to move down. ...
Young follows the meritocracy and social justice argument where it ultimately leads: to DNA.
... However, there’s a problem here—let’s call it the challenge posed by behavioural genetics—which is that cognitive ability and other characteristics that lead to success, such as conscientiousness, impulse control and a willingness to defer gratification, are between 40 per cent and 80 per cent heritable.[5] I know that many people will be reluctant to accept that, but the evidence from numerous studies of identical twins separated at birth, as well as non-biological siblings raised in the same household, is pretty overwhelming. And it’s probable that in the next few years genetic research scientists will produce even more evidence that important aspects of people’s personalities—including those that determine whether they succeed or fail—are linked to their genes, with the relevant variants being physically identified. The implication is that a society in which status is allocated according to merit isn’t much fairer than one in which it’s inherited—or, rather, it is partly inherited, but via parental DNA rather than tax-efficient trusts. This is an argument against meritocracy made by John Rawls in A Theory of Justice (1971): You’ve done nothing to deserve the talents you’re born with—they’re distributed according to a “natural lottery”—so you don’t deserve what flows from them.[6]

It’s worth pausing here to note that Rawls accepts that not all men are born equal, genetically speaking. Some do better out of the “natural lottery” than others and that, in turn, has an impact on their life chances. This is far from universally accepted by liberal commentators and policy-makers, most of whom prefer to think of man as a tabula rasa, forged by society rather than nature. ...

... The reason liberals are so hostile to the concept of IQ—and particularly the claim that it helps to determine socio-economic status, rather than vice versa—is because they have an almost religious attachment to the idea that man is a piece of clay that can be moulded into any shape by society. After all, it’s only if human beings are infinitely malleable and not bound by their inner nature that the various utopias they dream of can become a reality, from William Morris’s Earthly Paradise to the New Jerusalem of my father’s Labour Party. ...

... But the new technologies thrown up by genetic research will mean they no longer have to deny this obvious truth. If it becomes possible to select human embryos according to their possession of genes associated with certain character traits, such as intelligence, the Left’s utopian political projects can be resurrected. Margaret Mead was right after all: human nature is almost unbelievably malleable, you just have to start a lot further back. It is not through changing the culture that we will be able to solve the chronic social problems besetting the advanced societies of the West, but through changing people’s genes. ...
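Where do numbers like "40 per cent to 80 per cent heritable" come from? The classical twin-study estimate is Falconer's formula, which compares identical (MZ) and fraternal (DZ) twin correlations. A toy illustration (my own numbers; modern studies use more elaborate variance-component models):

```python
# Falconer's approximation: h^2 ~ 2 * (r_MZ - r_DZ).
# MZ twins share ~100% of segregating variants, DZ twins ~50%, so the gap
# in their phenotypic correlations estimates the genetic contribution.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

# Illustrative correlations roughly in the range reported for adult cognitive ability:
print(falconer_h2(r_mz=0.75, r_dz=0.45))   # 0.6 -> ~60% heritable
```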

Thursday, September 03, 2015

Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve.


I hope you enjoy my essay in the new issue of the science magazine Nautilus (theme: the year 2050), which discusses the co-evolution of humans and machines as we advance in both AI and genetic technologies. My Nautilus article from 2014: Super-Intelligent Humans Are Coming.
Nautilus: ... AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo Sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.

... These two threads—smarter people and smarter machines—will inevitably intersect. Just as machines will be much smarter in 2050, we can expect that the humans who design, build, and program them will also be smarter. Naively, one would expect the rate of advance of machine intelligence to outstrip that of biological intelligence. Tinkering with a machine seems easier than modifying a living species, one generation at a time. But advances in genomics—both in our ability to relate complex traits to the underlying genetic codes, and the ability to make direct edits to genomes—will allow rapid advances in biologically-based cognition. Also, once machines reach human levels of intelligence, our ability to tinker starts to be limited by ethical considerations. Rebooting an operating system is one thing, but what about a sentient being with memories and a sense of free will?

... AI research also pushes even very bright humans to their limits. The frontier machine intelligence architecture of the moment uses deep neural nets: multilayered networks of simulated neurons inspired by their biological counterparts. Silicon brains of this kind, running on huge clusters of GPUs (graphical processor units made cheap by research and development and economies of scale in the video game industry), have recently surpassed human performance on a number of narrowly defined tasks, such as image or character recognition. We are learning how to tune deep neural nets using large samples of training data, but the resulting structures are mysterious to us. The theoretical basis for this work is still primitive, and it remains largely an empirical black art. The neural networks researcher and physicist Michael Nielsen puts it this way:
... in neural networks there are large numbers of parameters and hyper-parameters, and extremely complex interactions between them. In such extraordinarily complex systems it’s exceedingly difficult to establish reliable general statements. Understanding neural networks in their full generality is a problem that, like quantum foundations, tests the limits of the human mind.
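For readers who have never looked inside one of these silicon brains, here is a toy forward pass through a small multilayer network (my own sketch; real deep nets have many more layers, convolutional structure, and are trained by gradient descent on GPUs):

```python
# Bare-bones illustration of "multilayered networks of simulated neurons":
# a two-hidden-layer forward pass in numpy, with random weights standing in
# for what training on large samples of data would normally learn.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)   # input -> hidden 1
W2, b2 = rng.normal(size=(128, 64)),  np.zeros(64)    # hidden 1 -> hidden 2
W3, b3 = rng.normal(size=(64, 10)),   np.zeros(10)    # hidden 2 -> output

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3                                 # class scores

x = rng.normal(size=(1, 784))                           # e.g., a flattened 28x28 image
print(forward(x).shape)                                 # (1, 10): one score per class
```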
... It may seem incredible, or even disturbing, to predict that ordinary humans will lose touch with the most consequential developments on planet Earth, developments that determine the ultimate fate of our civilization and species. Yet consider the early 20th-century development of quantum mechanics. The first physicists studying quantum mechanics in Berlin—men like Albert Einstein and Max Planck—worried that human minds might not be capable of understanding the physics of the atomic realm. Today, no more than a fraction of a percent of the population has a good understanding of quantum physics, although it underlies many of our most important technologies: Some have estimated that 10-30 percent of modern gross domestic product is based on quantum mechanics. In the same way, ordinary humans of the future will come to accept machine intelligence as everyday technological magic, like the flat screen TV or smartphone, but with no deeper understanding of how it is possible.

New gods will arise, as mysterious and familiar as the old.

Leadership


I was asked recently to write something about my leadership style / management philosophy. As a startup CEO I led a team of ~35, and now my office has something like 350 FTEs. Eventually, hands-on leadership becomes impossible and one needs general principles that can be broadly conveyed.
I have a “no drama” leadership style. We try to be as rational and unbiased as possible in making decisions, always working in the long term interests of the institution and to advance human knowledge. I ask that everyone on my team try to understand all sides of a difficult issue to the point that they can, if asked, effectively argue other perspectives. This exercise helps overcome cognitive biases. My unit tries to be entirely “transparent” -- we want other players at the university to understand the rationale and evidence behind our specific decisions. We want our resource allocations to be predictable, justifiable, and as free from petty politics as possible. Other units view members of my team as effective professionals who can be relied on to do the right thing.
One of the toughest aspects of my current job is the wide variety of things I have to look at -- technologies and research projects across the spectrum from biomedical to engineering to fundamental physics to social science and the humanities. Total NSF + DOE funding at MSU ranks in the top 10 (very close to top 5) among US universities.

The most important principle I advance to my senior staff is epistemic caution together with pragmatism.

See also this interview (startups) and Dale Carnegie: How to Win Friends and Influence People :-)