Wednesday, February 13, 2013

Eric, why so gloomy?

Eric Turkheimer wrote a blog post reacting to my comments (On the verge) about some recent intelligence GWAS results.

I'm an admirer of Eric's work in behavior genetics, as you can tell from this 2008 post The joy of Turkheimer. Since then we've gotten to know each other via the internet and have even met at a conference.

Eric is famous for (among other things) his Gloomy Prospect:
The question is not whether there are correlations to be found between individual genes and complex behavior— of course there are — but instead whether there are domains of genetic causation in which the gloomy prospect does not prevail, allowing the little bits of correlational evidence to cohere into replicable and cumulative genetic models of development. My own prediction is that such domains will prove rare indeed, and that the likelihood of discovering them will be inversely related to the complexity of the behavior under study.
He is right to be cautious about whether discovery of individual gene-trait associations will cohere into a satisfactory explanatory or predictive framework. It is plausible to me that the workings of the DNA program that creates a human brain are incredibly complex and beyond our detailed understanding for some time to come.

However, I am optimistic about the prediction problem. There are good reasons to think that the linear term in the model described below gives the dominant contribution to variation in cognitive ability:
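The slide referenced here is not reproduced in the text; the linear (additive) model in question has the standard form (notation mine, chosen to match the effect sizes x_i discussed later):

```latex
y \;=\; \mu \;+\; \sum_i x_i \, g_i \;+\; (\text{nonlinear terms}) \;+\; \epsilon
```

where g_i counts the minor alleles (0, 1, or 2) carried at locus i, x_i is the additive effect size of that locus, and epsilon absorbs the environmental contribution.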

The evidence comes from estimates of additive (linear) variance in twin and adoption studies, as well as from evolutionary theory itself. Fisher's Fundamental Theorem of Natural Selection identifies additive variance as the main driver of evolutionary change in the limit where selection timescales are much longer than recombination timescales (e.g., those set by sexual reproduction). Thus it is reasonable to expect that most of the change in intelligence in genus Homo over the last few million years is encoded in a linear genetic architecture.
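Fisher's theorem can be stated compactly (standard discrete-generation form):

```latex
\Delta \bar{w} \;=\; \frac{V_A(w)}{\bar{w}}
```

The per-generation increase in mean fitness attributable to selection equals the additive genetic variance in fitness, V_A(w), divided by the mean fitness. Non-additive (dominance, epistatic) variance does not contribute to this leading-order response, which is why long epochs of selection are expected to leave their signature mainly in additive architecture.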

GWAS, which identify causal loci and their effect sizes, are in fact fitting the parameters of the linear model described above. (Most effect sizes x_i will be zero, with perhaps 10k non-zero entries distributed according to some kind of power law.) Once we have characterized loci accounting for most of the variance, we will be able to predict phenotypes from genotype information alone (i.e., without further information about the individual). This is the genomic prediction problem, which has already been partially solved for inbred lines of domesticated plants and animals. My guess is that it will be solved for humans once of order a million genotype-phenotype pairs are available for analysis. Characterizing the nonlinear parts will probably take much more data, but these are likely to be subleading effects.
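The fitting procedure can be illustrated with a toy simulation (all numbers below are made up for illustration, not estimates from real data): generate a sparse additive architecture, run GWAS-style marginal regressions on a training cohort, and use the fitted effects to predict phenotypes of held-out individuals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 2000, 500, 20          # individuals, SNPs, causal loci (toy scale)

# Genotypes: minor-allele counts in {0, 1, 2} at independent loci
maf = rng.uniform(0.1, 0.5, p)
G = rng.binomial(2, maf, size=(n, p)).astype(float)

# Sparse additive effects: most x_i are exactly zero
x = np.zeros(p)
causal = rng.choice(p, size=k, replace=False)
x[causal] = rng.normal(0.0, 1.0, k)

# Phenotype = standardized genetic value + noise, with heritability h2
g = G @ x
g = (g - g.mean()) / g.std()
h2 = 0.8
y = np.sqrt(h2) * g + np.sqrt(1 - h2) * rng.normal(0.0, 1.0, n)

# Train/test split
Gtr, Gte, ytr, yte = G[:1500], G[1500:], y[:1500], y[1500:]

# GWAS-style fit: one marginal regression per SNP
Gc = Gtr - Gtr.mean(axis=0)
x_hat = Gc.T @ (ytr - ytr.mean()) / (Gc ** 2).sum(axis=0)

# Polygenic score for held-out individuals
score = (Gte - Gtr.mean(axis=0)) @ x_hat
r = np.corrcoef(score, yte)[0, 1]
print(f"out-of-sample correlation: {r:.2f}")
```

Even this crude marginal-regression fit recovers a predictor that correlates substantially with the held-out phenotypes; with real (linked, correlated) SNPs one would instead use a penalized joint fit such as lasso, and far larger samples.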


Richard Seiter said...

In the context of gloominess about genomic prediction, I just ran across this paper today:
It compares prediction of height using genomics to prediction using midparent values. From the abstract:

"For highly heritable traits such as height, we conclude that in applications in which parental phenotypic information is available (eg, medicine), the Victorian Galton’s method will long stay unsurpassed, in terms of both discriminative accuracy and costs. For less heritable traits, and in situations in which parental information is not available (eg, forensics), genomic methods may provide an alternative, given that the variants determining an essential proportion of the trait’s variation can be identified."

Any thoughts? I found that paper a bit humbling for our modern techniques. I wonder how much of the environmental component of height the mid-parent value manages to capture (assuming parent-offspring environments are correlated). Of course, this does raise the question of what kinds of predictions could be made using a combination of methods...
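The "Victorian Galton's method" from the abstract is just regression toward the mean, and fits in a few lines (a minimal sketch; the heritability and population mean below are assumed, not taken from the paper):

```python
def galton_predict(midparent_cm: float, pop_mean_cm: float = 170.0,
                   h2: float = 0.8) -> float:
    """Predict offspring height: the child is expected to deviate from
    the population mean by a fraction h2 of the midparent deviation."""
    return pop_mean_cm + h2 * (midparent_cm - pop_mean_cm)

# Parents at 180 cm and 170 cm -> midparent 175 cm
print(galton_predict(175.0))  # 174.0 under the assumed parameters
```

In practice the parents' heights are put on a common scale before averaging (Galton rescaled mothers' heights upward by roughly 8%), but the regression step is the whole of the method.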

steve hsu said...

This paper is from 2009 -- they used only 54 SNPs for height. The number of genome wide significant hits today is approaching 1000. Also, there are applications of genomic prediction for which parental midpoint is useless, such as differentiating between zygotes.

ben_g said...

Why would the researchers have looked at childhood IQ? It's much less heritable in children than in adults.

HughLygon said...

It's what was available.

HughLygon said...

Steve will be a eugenicist yet.

HughLygon said...

"This is the genomic prediction problem which has already been partially solved for inbred lines of domesticated plants and animals."

It might also be solved for athletic ability: playing football, playing cornerback in American football, marathon running, etc. But what counts as a great athlete depends on the place: football in Brazil, American football in Texas, and marathoning in the Mountains of the Moon.

Odysseus323 said...

The research you discuss on the genetics of intelligence is fascinating, and hopefully could lead eventually to the enhancement of human capabilities.

But for the benefit of us adults who have already been born and can't benefit from genetic engineering, I'd be curious to know how optimistic you are about the feasibility of directly manipulating phenotypic intelligence. I'm thinking of such interventions as drugs, transcranial direct current stimulation (tDCS), or cognitive exercises such as working memory training. Although I'm not an academic myself, the papers I've seen on the neurophysiology of intelligence suggest that it could be linked to fairly simple physiological variables like the thickness of myelin sheaths, and to my admittedly non-expert mind this seems to raise the plausibility that intelligence-boosting interventions could be found. Thoughts?

ReallyG said...

Look. The evidence for a general factor of athleteness is manifold. Being better at one sport or activity is positively correlated with being better at all sports. QED, g exists.
