Sunday, April 07, 2013

Myths, Sisyphus and g

As a punishment, Sisyphus was made to roll a huge boulder up a steep hill. Before he could reach the top, however, the massive stone would always roll back down, forcing him to begin again.

I recommend this well-written refutation of Cosma Shalizi's much loved (in certain quarters) "g, a Statistical Myth", an attack on the general factor of intelligence. Over the years I have not encountered a single endorser of Shalizi's article who actually understands the relevant subject matter. His article is loved for its reassuring conclusions, not the strength of its arguments. I am sure many "thinkers" resisted Darwinism, the abandonment of geocentrism, and even the notion that the Earth is a sphere, for similar psychological reasons. Some pessimists (speaking, for example, of the quantum revolution in the early 20th century) remarked that science advances one funeral at a time, as the older generation passes away in favor of the next, more open-minded, one. In the case of g it appears we have regressed significantly under relentless attack; social science papers from 50 years ago often seem more clear-headed and precise than ones I read today. All battles must be fought and refought a decade or two later.

As I write here:
We can (crudely) measure cognitive ability using simple tests. (It is amazing to me that this is a controversial statement.) Randomly sampled eminent scientists have (very) high IQs, and given the observed stability of adult IQ the causality is clear ...
Optimistically, we are only a decade away from genomic prediction of g scores (see Eric, why so gloomy?). The existence of such a predictor may allow us to finally push the boulder to the top, and keep it there.

As I mention in talks on this subject, the fact that cognitive abilities reliably have positive correlation is highly nontrivial. Add to this the well-established validity and stability of g and you have a construct that must be taken seriously. See also IQ, Compression and Simple Models.
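The point that a single shared source of variance is enough to generate an all-positive correlation matrix can be sketched in a few lines. This is a toy simulation; the loadings are invented for illustration and are not estimates from any real test battery:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000   # simulated test takers
k = 6        # number of cognitive tests

# One latent factor g plus independent test-specific noise.
# Loadings are made up, chosen only to be positive.
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.7, 0.6])
g = rng.standard_normal(n)
noise = rng.standard_normal((n, k)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise

corr = np.corrcoef(scores, rowvar=False)
# A single shared source of variance forces every off-diagonal
# correlation to be positive -- a "positive manifold".
print((corr[np.triu_indices(k, 1)] > 0).all())
```

The empirical surprise runs in the other direction: real cognitive tests, however they are constructed, keep producing such an all-positive matrix.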
Is Psychometric g a Myth?

...

V. Conclusions

Shalizi’s first error is his assertion that cognitive tests correlate with each other because IQ test makers exclude tests that do not fit the positive manifold. In fact, more or less the opposite is true. Some of the greatest psychometricians have devoted their careers to disproving the positive manifold only to end up with nothing to show for it. Cognitive tests correlate because all of them truly share one or more sources of variance. This is a fact that any theory of intelligence must grapple with.

Shalizi’s second error is to disregard the large body of evidence that has been presented in support of g as a unidimensional scale of human psychological differences. The g factor is not just about the positive manifold. A broad network of findings related to both social and biological variables indicates that people do in fact vary, both phenotypically and genetically, along this continuum that can be revealed by psychometric tests of intelligence and that has widespread significance in human affairs.

Shalizi’s third error is to think that were it shown that g is not a unitary variable neurobiologically, it would refute the concept of g. However, for most purposes, brain physiology is not the most relevant level of analysis of human intelligence. What matters is that g is a remarkably powerful and robust variable that has great explanatory force in understanding human behavior. Thus g exists at the behavioral level regardless of what its neurobiological underpinnings are like.

In many ways, criticisms of g like Shalizi’s amount to “sure, it works in practice, but I don’t think it works in theory”. Shalizi faults g for being a “black box theory” that does not provide a mechanistic explanation of the workings of intelligence, disparaging psychometric measurement of intelligence as a mere “stop-gap” rather than a genuine scientific breakthrough. However, the fact that psychometricians have traditionally been primarily interested in validity and reliability is a feature, not a bug. Intelligence testing, unlike most fields of psychology and social science, is highly practical, being widely applied to diagnose learning problems and medical conditions and to select students and employees. What is important is that IQ tests reliably measure an important human characteristic, not the particular underlying neurobiological mechanisms. Nevertheless, research on general mental ability extends naturally into the life sciences, and continuous progress is being made in understanding g in terms of neurobiology (e.g., Lee et al. 2012, Penke et al. 2012, Kievit et al. 2012) and molecular genetics (e.g., Plomin et al., in press, Benyamin et al., in press).

27 comments:

anonymous said...

Steve, how do you account for the fact that one of the greatest physicists was unable to do math at a high school level for his time, and yet he contributed a lot more than his more mathematically inclined peers? What would be your guess for his IQ score? The gentleman I'm referring to is Michael Faraday.

Jeremy Nimmo said...

"we are only a decade away from genomic prediction of g scores"... and following that, embryo selection. Which will make for some truly beautiful hubris from the elites who now push the intelligence-egalitarianism garbage.

Hauser Quaid said...

Let's say we have a set of programmers. The goal of those programmers is to make a killer application for the iPhone or Android market. They are highly motivated; they want to make money, and it's fun too. Now, one programmer constantly has success and makes a lot of money, while the others have little or no success, although everyone agrees that they are good programmers.

Now, what distinguishes this programmer from the rest of the programmers in the set? Notice that they were all given a problem, to make a killer application that will make them rich, so it's a problem, just as it's a problem to find the correct 3d representation of a 2d object in IQ tests. Only this time the problem is much more complex and the solution is very unclear. While you could argue that that programmer probably has an IQ at least 1 sd above the population average, you can't answer, just by knowing the programmers' IQs, which one (if any) will solve this problem correctly.

You can make similar arguments for music: there are a lot of musicians out there who want to make money, but only a few can; there are a lot of guitarists with stellar technique, but only a few sound good. The same goes for any other human activity.

This discrepancy is always trivialized by people who think that IQ = intelligence. Sure, we know that the next Bill Gates will have a high IQ, but it would be very stupid to search for one just by measuring IQs, or at least we could acknowledge that the method is extremely ineffective.

The key word is "crude", which is very often lost.

Basically what they're doing is reducing a multidimensional vector (where you don't even know the number of dimensions!) to a scalar and trivializing the information that gets lost in the process. What you see in some cases is that some disciplines have IQ cutoffs (physics, computer science...). Of course there will be such cutoffs, since in those cases some projections of the multidimensional intelligence space are more relevant for solving those types of problems, but always keep in mind that that's not all. That's not what makes Magnus Carlsen a unique, one-in-30-years talent; there is much more.

Why does the average IQ of a population matter (although it's not the only condition; look at North Korea)? We can look at a population as a set of agents that forms a society; a lower average of some traits yields less success. It's something like predicting that the Japanese will never be good at basketball just by measuring their height.



Why is it politically incorrect to say it? Well, some subsets of the population would lose the right to play some of their cards.

tractal said...

They will, of course, be the first in line.

Iamexpert said...

Why does it matter whether g exists or not? I'm aware of no g factor in inter-species intelligence, but no one disputes that humans are the smartest animal. The ranking of intelligence does not depend on the existence of g.

botti said...

See Linda Gottfredson's summary. http://www.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

Pincher Martin said...

Hauser,

"This discrepancy is always trivialized by people who think that IQ = intelligence. Sure, we know that the next Bill Gates will have high IQ, but it would be very stupid to search for one just by measuring IQs, or at least we could acknowledge that method is extremely uneffective."

And yet interestingly enough, Bill Gates' corporation was well-known for using a type of IQ test to screen potential employees. And I've heard Google does as well. What do you know that those two companies don't?

The way you frame the issue by referring to Microsoft's founder is also vague. What do you mean by "the next Bill Gates"? Programming skill? Entrepreneurial skill? Business acumen married to computer knowledge? Or just developing the next killer app? (Which is something I don't think Gates had a flair for.)

However you define it, you'd do well to screen young programmers for IQ before you did anything else. Crude though that measure may be, it's still significantly more accurate than any other single predictive measure you're likely to come up with. The next Bill Gates is highly unlikely to be found in the young programming population with IQs lower than 2 SDs above the mean. Gates' SAT score of 1590 in the early seventies suggests his IQ was around the 4 SD mark.

If we were tasked with looking for the next LeBron James among today's 17-year-old basketball players, screening by height would be a very crude measure indeed. But it would also help us save a lot of time by eliminating many very good high school players whose physical limitations aren't as evident in the high school game as they would be in the pro game.

Emil Kirkegaard said...

I never heard of either. Thanks Steve! :)

Always good that one can take a complete course in psychometrics online for free. The only thing I haven't explored in detail yet is the various factor analysis methods. Jensen (1998) doesn't really cover this in enough detail for it to be useful in practice.

Emil Kirkegaard said...

I recall reading multiple references to papers finding g factors in tests of animals as well. And there is at least one book ranking the various breeds of dogs by teachability, which is closely related to g. I'm sure you can find stuff if you google/scholar it. :)

Hauser Quaid said...

No one in their right mind would compare a random tall person with LeBron James; on the other hand, we see members of MENSA always compared to Einstein: "Oh, his IQ is 170, he's a genius, he has a higher IQ than Einstein." We know that the next LeBron will very likely be about 2 m tall, just as we know the next Einstein will have a high IQ, but it doesn't work the other way around. As I said, IQ too often gets equated with intelligence, and every time (most of the time) you don't get a genius from a person with a high IQ, the reasons are trivialized, usually as "he didn't work hard enough, he didn't have drive" and similar nonsense.


Google and Microsoft are NOT hiring people solely based on their IQs. That screen is only applied once you satisfy a lot of other nontrivial conditions. It's something like when someone tells a coach "here's a very talented player": only then will he look at the player's height, although we know that height is an important factor in basketball.


My example might be somewhat vague, but that is part of the point: genius behaviour can't (at least not yet) be reproduced in some systematic way, and it's not easily seen what constitutes a genius. I did, however, define the problem very clearly (a set of programmers needing to make an application that people want to buy). Bill Gates comes up as another example, not as an example of a killer-app programmer.


There are a lot of examples of people we think of as geniuses whose cognitive abilities are at the level of a 4-year-old kid once you take them away from their field of expertise. Look at the chess player Bobby Fischer: he was a delusional lunatic who had a deep misunderstanding of how the world outside of chess worked. Look at the 68-year-old physicist from the previous posts who thought one of the hottest bikini models out there wanted to be with him, and was caught carrying drugs as a result of that delusion.


While I agree that IQ is a useful concept, and I find Prof. Hsu's work very interesting (I also think he'll find something in the genes that is responsible for high IQs), I strongly disagree that intelligence is one-dimensional (although clearly certain talents do tend to be correlated).

Richard Seiter said...

Richard Branson is an interesting example (not sure if that is why you picked him). My understanding is he is dyslexic which would cause trouble in school and with (most?) IQ tests (does anyone know how people with dyslexia do on Raven's?). I think identifying circumstances (be they cultural, physical, or whatever) that cause IQ tests to give a misleading impression of intelligence is important. Dyslexia seems to be a major cause of people with ability falling through the cracks in the educational system.

I don't think anyone here is arguing that intelligence is one dimensional--more that the one dimensional quantity g does a surprisingly good job of capturing most of the variance in various measures of intelligence (and IQ seems to be the best proxy we have for g). As far as using IQ for talent identification, what would you propose as an alternative? Do you think any of the other criteria Google and Microsoft are using are negative or uncorrelated with IQ? What do you think about the US military using the ASVAB AFQT result as an enlistment criterion? (I think over half the US population is not eligible for enlistment given that criterion. I'm surprised that doesn't generate more of a response from the egalitarians out there--Hauser, I am not implying anything about you by this statement)

The comparison of IQ and height as necessary but not sufficient conditions for genius and basketball ability is interesting. One point that strikes me about it is that a number of physical attributes that help basketball skill are (I believe) negatively correlated (e.g. height/weight and speed/vertical leap). It's pretty easy to add to the list of physical criteria and get a better screen than height alone. Can you do the same for intellectual ability and IQ? (Note I am talking about easily measurable objective quantities.)

Iamexpert said...

g might explain why some dogs are smarter than other dogs and why some humans are smarter than other humans, but does g explain why humans are smarter than dogs? Humans are indisputably the smartest animal regardless of whether g explains interspecies cognitive variation, so the question is, why does it matter whether g exists or not?

pnard said...

It matters for coming up with an accurate model of intelligence. Another question is, why do some people seem so intent on showing it doesn't exist?

Iamexpert said...

I'm not sure how g's existence or lack thereof improves the accuracy of intelligence models, though it does make intelligence easier to measure (in most people). People don't want g to exist because they don't want their or someone else's poor performance on IQ tests to have implications beyond whatever specific abilities the test tests. But it's important to create a model of intelligence that goes beyond g, because I'm not sure g has much meaning among the neurologically abnormal or for interspecies comparisons, let alone artificial intelligence.

Emil Kirkegaard said...

People with dyslexia perform normally on Raven's. This makes such tests useful for identifying gifted but dyslexic students.

Emil Kirkegaard said...

I'm not sure I even know what it means to say that g does or does not exist, aside from the facts that it always shows up if one factor-analyzes batteries of cognitive tests, and that it is the primary factor in cognitive tests for predicting anything (the active ingredient, in Jensen's words).

Surely, if we use teachability tests, then humans will learn much faster than even the smartest dogs. This test probably has too low a ceiling for almost all humans, with the only ones failing being the severely retarded or brain-damaged.

Iamexpert said...

Emil, it's not enough to say humans would outscore dogs on a learning test. In order to prove g transcends species, you would have to show that the human margin of superiority over dogs varies as a function of the test's g loading (as calculated in both human and dog samples). Btw, there are some cognitive tasks where chimps outperform humans.

Emil Kirkegaard said...

Is it important that we have some concept of intelligence that works cross-species, in which I will include AI? If so, why?

Joshua W. Burton said...

Just to disambiguate a bit further, because I'm not saying much (neither is Cosma!) and don't want to be overinterpreted through notational confusion as saying more:

(1) The first principal component PC1 of available tests explains most of their variance. Well defined, and true by definition.
(2) PC1 is a linear combination of the original tests, and a PC1 value can be inferred from any set of test scores. Well defined, and trivial.
(3) A single trait, call it g, underlies these test scores, so that off-diagonal minors (Spearman's tetrads) all vanish and partial correlations are consistent with zero when g is held constant. Well defined, and crushingly refuted by the data.

(3a) A low-dimensional family of traits, collectively approximating some fuzzy g, underlies these test scores while data-fitting the second-order off-diagonal minors. Probably true, for some sufficiently small value of "true" and sufficiently large value of "low"; here, I defer to Cosma and assume his study of the literature is honest. This is a pretty weak claim in any case.
(4) There is an "underlying cause of g." False, because mythical things don't have causes.
(5) There is an underlying cause of variation in PC1. See (1) and (2).
(6) There is an underlying cause of variation in human cognitive ability. No idea what I think of this, until you give me a definition of human cognitive ability that is empirical and that doesn't reduce tautologically to PC1.
(7) There is an underlying cause of very prolonged discussion about g. Probably true in some sociological sense, but not a subject I'm very interested in.
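Points (1) and (2) above can be made concrete in a few lines. This is a sketch using an invented correlation matrix, not data from any actual test battery:

```python
import numpy as np

# A hypothetical correlation matrix for four tests showing a
# positive manifold (the numbers are made up for illustration).
R = np.array([
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.50, 0.45],
    [0.55, 0.50, 1.00, 0.40],
    [0.50, 0.45, 0.40, 1.00],
])

# eigh returns eigenvalues in ascending order; the last is PC1's.
eigvals, eigvecs = np.linalg.eigh(R)
pc1_share = eigvals[-1] / eigvals.sum()   # fraction of variance in PC1
pc1_weights = eigvecs[:, -1]              # PC1 as a linear combination
                                          # of the original tests

print(f"PC1 explains {pc1_share:.0%} of the variance")
```

This is exactly the weak, purely descriptive sense in which PC1 "exists": it is well defined, it explains most of the variance, and any set of scores can be projected onto it. None of it bears on the much stronger tetrad claim in (3).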

steve hsu said...

I'm not a "g man" so I don't endorse the stronger interpretations of g. But I think it's obvious that we can crudely measure a quantity that is related to the folk notion of intelligence, and which is stable, predictive (of certain life outcomes) and highly heritable (these are empirical facts). Most people who cite Shalizi's essay think they are launching a devastating attack on the basic idea of intelligence testing, or genetic causation of quality of brain function. None of this depends at all on whether there is a "unitary g" (i.e., your point (3)). Shalizi's essay says nothing about the points in my second sentence above, but does reveal his overly strong priors and those of his fans.

Joshua W. Burton said...

In short, you have no actual quarrel with Shalizi and Glymour, but only with some (mis)citers you have met. That's fine.

Heritability is another word that means a lot less when used carefully than it does in casual speech. For example, zipcode is moderately heritable (if I know nothing about you but your parents’ zipcode, I have a far-better-than-random statistical chance of guessing yours) but not genetically determined. Number of fingers is 99%+ genetically determined (nearly all of us have ten) but almost negligibly heritable (knowing that your dad is missing a finger only very slightly improves my chances of guessing whether you are).

Accent is strongly heritable, innately linked to higher mental function, negligibly malleable after early childhood, and strongly correlated with achievement, class, race, and diet. But its congenital components (hereditary deafness, etc.) play only a peripheral causal role in establishing these features. Again, it may (or may not) be possible to say something about IQ's innateness and lack of plasticity that distinguishes it from accent, but that necessarily won't be one of the things I just said, because accent sails through all those cuts.



I don't see what criticizing a man's priors has to do with science. Science is the part you can get to with the right data even from the wrong priors; improving the Bayesian result by moving the priors while holding the data constant is rhetoric, a different subject.

Joshua W. Burton said...

Speaking of rhetoric, I'd be annoyed if a heckler of special relativity grudgingly mumbled "I'm not a luminiferous ether man," generations after Michelson-Morley. Spearman himself gave up on unitary g in, what, 1932?

steve hsu said...

Josh, sorry I've taken so long to reply to your comments. If you want to know my views on this stuff in more detail I recommend this talk: http://infoproc.blogspot.com/2013/03/genetic-architecture-of-intelligence.html

Joshua W. Burton said...

Thanks. My interest in the subject basically begins and ends with the stats, but I'll take a look!

jay parisi said...

You say, "cognitive tests correlate because all of them truly share one 'or more' sources of variance". Yes, Shalizi's whole point was to show that there is more than one source of underlying (global) variance, not simply g. On YOUR second error: in contrast, there is plenty of evidence that the g factor isn't the only source of (global) variance (obviously subtest variance is due to separate factors; autistics have taught us that). Often patterns in subtest scores contradict the more highly g-loaded subtests. For example, it is not unlikely for someone to score near the 50th percentile on the highly g-loaded measure of concept formation (Woodcock-Johnson III), yet have almost all their other scores within a few points of the 75th percentile.

Perhaps more importantly, it has already been shown that subtests predict g (probably one factor underlying global variance) differently in certain sub-populations. Therefore, the whole practice of weighting (g loading) is questionable.

steve hsu said...

I think the last point you made about variation of g loadings by population is controversial, at least from what I have heard second hand from psychometricians.

In any case, the pragmatic view of g is that, however it was assembled or defined, the important question is whether it has *validity* (i.e., predictive power).

http://infoproc.blogspot.com/2009/11/iq-compression-and-simple-models.html

Alex said...

People don't want g to exist because it means the success of an individual is genetically determined, or at least people are born with innate advantages over other people. Add to this the disparities among races and it becomes the dream for white geniuses who are automatically "superior" thanks to their high IQ genes.
