Tuesday, January 31, 2012

Some recommended reading

Robert Wald reviews the 2010 book Many Worlds?: Everett, Quantum Theory, and Reality, based on meetings at Oxford and at the Perimeter Institute, commemorating the 50th anniversary of Everett's paper.

A central issue in the Everett interpretation is the status of the 'Born rule', which asserts that, for state ψ, the probability of obtaining a particular outcome of a measurement is ||Pψ||², where P is the projection operator onto the eigensubspace associated with the measurement outcome. In traditional interpretations, the Born rule is simply postulated as part of the collapse hypothesis. In the Everett interpretation, it is far from obvious that the Born rule even has any meaning—if all outcomes occur, how can one talk about the probability of a particular outcome? Given the importance of this issue, it is highly appropriate that four chapters of the book (by Saunders, Papineau, Wallace, and Greaves and Myrvold) are devoted to addressing probability and the Born rule from the Everett viewpoint, and three chapters (by Kent, Albert, and Price) are devoted to criticising these views.

... In any case, if the conclusion of a mathematically correct argument is that rational decision strategies require the Born rule, then there must be quite a bit lying in the assumptions. The articles by Kent, Albert, and Price do a good job of fleshing out these assumptions and pointing out the weaknesses and flaws in the probability and decision theory discussions within the Everett framework. ...

See here for my thoughts on this.
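For readers who want the Born rule in concrete form, here is a minimal numerical sketch (my own illustration, with an arbitrary spin-1/2 state; not from Wald's review) of the probability ||Pψ||² for a measurement outcome:

```python
import numpy as np

# Born rule: for a normalized state psi, the probability of a measurement
# outcome is ||P psi||^2, where P projects onto that outcome's eigensubspace.

# Example: measure S_z on a spin-1/2 state |psi> = a|up> + b|down>.
psi = np.array([1.0, 2.0], dtype=complex)
psi /= np.linalg.norm(psi)                         # normalize the state

P_up = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |up>
P_down = np.eye(2) - P_up                          # projector onto |down>

p_up = np.linalg.norm(P_up @ psi) ** 2
p_down = np.linalg.norm(P_down @ psi) ** 2

print(p_up, p_down)   # probabilities of the two outcomes; they sum to 1
```

For this (hypothetical) state the probabilities are 1/5 and 4/5; the Everett debate is over what, if anything, licenses interpreting these squared norms as probabilities in the first place.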

The origin of the Everettian heresy (see also Byrne's excellent biography of Everett).

... These efforts gave rise to a lively debate with the Copenhagen group, the existence and content of which have been only recently disclosed by the discovery of unpublished documents. The analysis of such documents opens a window on the conceptual background of Everett’s proposal, and illuminates at the same time some crucial aspects of the Copenhagen view of the measurement problem. Also, it provides an original insight into the interplay between philosophical and social factors which underlay the postwar controversies on the interpretation of quantum mechanics.

... Here is a tentative chronology of the thesis versions and of the related papers:

(1a) Objective vs Subjective probability, short manuscript (first half of 1955).
(1b) Quantitative Measure of Correlation, short manuscript (summer 1955).
(1c) Probability in Wave Mechanics, short manuscript (summer 1955).

(2) Wave Mechanics Without Probability, second version of the dissertation (the long thesis) (winter 1955–1956), published as The Theory of the Universal Wave Function (1973).

(3) On the Foundations of Quantum Mechanics, final dissertation (winter 1956–1957), published as ‘‘Relative State’’ Formulation of Quantum Mechanics (July 1957).

A Commentary on ‘Common SNPs Explain a Large Proportion of the Heritability for Human Height’ by Yang et al. (2010). (Ungated pdf.) Why do Visscher and company have to speak so slowly and enunciate so carefully in order to be understood?

During the refereeing process (the paper was rejected by two other journals before publication in Nature Genetics) and following the publication of Yang et al. (2010) it became clear to us that the methodology we applied, the interpretation of the results and the consequences of the findings on the genetic architecture of human height and that for other traits such as complex disease are not well understood or appreciated ...

Well before reading the Yang et al. paper, but after hearing much about "missing" heritability, I asked impatiently why GWAS researchers had not tried to make a global fit of total heritability, as opposed to searching for individual alleles. See also Heritability 2.0.

Turkheimer on heritability: Still Missing.

A century of familial studies of twins, siblings, parents and children, adoptees, and whole pedigrees has established beyond a shadow of a doubt that genes play a crucial role in the explanation of all human differences, from the medical to the normal, the biological to the behavioral ...

As a social scientist and twin researcher, I had to struggle with the biological and statistical genetics underlying the Yang et al. analyses, but the analysis of variance, the acausal “capturing” and “tracking” of one domain of variance with another came naturally to me. The situation was reversed for the geneticists who were the target audience of the paper: biologically based scientists, accustomed to genes that have an actual causal pathway to their outcomes. Over and above its technical brilliance, the real contribution of the Yang et al. article is to bring into focus this conceptual chasm between biological and quantitative genetics, and thus between the physical sciences and social science. Genomics is only now learning a hard lesson that social scientists had to learn a long time ago: sometimes prediction is just prediction. That is what the missing heritability problem is really about, and why it has not yet been solved

For more Turkheimer, see here. Note that although he emphasizes the difficulty of teasing out causality in a complex system, for some "engineering" applications (such as genetic engineering), prediction may be enough, as long as the correlations between genetic variant and phenotype are confirmed to be robust across a variety of environments. The specific causal mechanism is not as important as the ability to modify and control ;-)

Sunday, January 29, 2012

Looking back

Megan McArdle on her 10-year reunion at Chicago's Booth School of Business.

The Atlantic: ... For my summer 2000 internship at Merrill Lynch, I chose the technology-banking group despite having watched the March 2000 NASDAQ crash from the lobby of Merrill’s auditorium, where we were supposed to be undergoing orientation. Ignoring the helpless, angry flapping of the HR staff, a bunch of us spent the afternoon telling nervous jokes and watching the eerie flicker that billions of dollars give off when they evaporate on live TV.

Predictably, the technology-banking group had almost no work. Also, I was not a good fit with Merrill’s very conservative, very competitive culture. I felt as if I’d decided to intern with a mathematically gifted baboon tribe, and I’m sure they were just as puzzled by me. Unsurprisingly, I didn’t get a full-time offer. Having learned my lesson, I very sensibly turned around and took a full-time job upon graduation at … a technology-strategy consultancy. I got laid off even before the bankers.

And they were laid off in droves, along with the consultants and aspiring dot-com employees; during my first year or two in New York, my recollection is that at least half my classmates there lost their jobs. Ten years later, only a few of the people I spoke with were still where they’d started out.

How could we have failed to notice the danger? You know how: It’s the same reason your cousin bought that 16-room McMansion on an option ARM. Everyone else had been doing it for years, with seemingly stellar results. Why wouldn’t we follow in such successful footsteps?

... Indeed, if Booth is any indication, the complaint that “the best and the brightest” are being siphoned off into consulting and finance is less true today. ... Morton told me that current classes don’t talk as much as mine did about money; they talk about the things they want to make and do.

Of course, these days, there’s also less money to distract them. Financiers are still rich—in 2010, the industry accounted for 5.3 percent of New York’s private-sector jobs, but 23.5 percent of its private-sector wages. But despite all the news about their huge bonuses, they aren’t as rich or as numerous as they used to be. By the end of 2012, New York’s Office of the State Comptroller expects the post-crisis job losses on Wall Street to top 30,000. Finance-related activities used to account for about 20 percent of state tax revenues; they now account for about 13 percent.

Other data back this up. At some point in the weekend—probably after that second round of shots—someone said, “We are the 1 percent!” I pointed out that this is not literally true, since the entry point (as of 2009) for the top 1 percent is $343,927 a year. (In Washington, but not in Chicago, this is what passes for amusing cocktail chatter.)

However, those of us who left finance can take heart, because we are a lot closer to the top 1 percent than we used to be. In 2007, the entry point was $410,096. The top 1 percent’s share of national income has also dropped recently, as the finance professor Steven Kaplan pointed out when I ran into him. In fact, for all the fanfare greeting recent studies by the Congressional Budget Office on rising income inequality from 1979 to 2007, according to Kaplan’s calculations, between 2007 and 2009 the share of adjusted gross income that went to the top 1 percent dropped from 23.5 percent to 17.6 percent—the largest two-year drop since 1928–30.

A few years back, Kaplan and the economist Joshua Rauh compared the incomes of Wall Street executives, “Main Street” executives, and celebrities such as professional athletes. They found that much of the rise in income inequality between 1994 and 2004 was due to the jump in Wall Street incomes: those of investment bankers, venture capitalists, hedge-fund managers, and top securities lawyers—the incomes that so many in my class were chasing.

... WITH EVEN QUITE conservative economists agreeing that the financial sector got too large and too risky, that’s not a bad thing—not even for my classmates. A banker who parachuted into equity research years ago said frankly, “I wish I made more money.” On the other hand, he pointed out, waxing on about the shorter hours and lower stress, “the lifestyle is much, much better.”

My classmates and I might not all have 1 percent–level incomes, but almost everyone seemed to have what Occupy Wall Street says it wants: stable, interesting, well-paying jobs … and a clear future. The few people who are still in finance are the ones who really like it, and are presumably really good at it. And the rest of us are probably better off than if we’d bartered away every waking moment of our 30s.

The most remarkable thing about my business-school reunion was, in fact, how little people talked about money or jobs. They talked about family, friends, the trips they took, and the houses they were turning into homes. According to the behavioral economist Daniel Kahneman, they were talking about what is really important: “It is only a slight exaggeration to say that happiness is the experience of spending time with people you love and who love you.” Now, that’s a universe worth mastering.

Wednesday, January 25, 2012

The Moral Foundation of Economic Behavior

I found this econtalk podcast very interesting. The comments are also good.

We Americans are very lucky to have inherited a high trust society from our forebears. How much longer will it last?

Econtalk: David Rose of the University of Missouri, St. Louis and the author of The Moral Foundation of Economic Behavior talks with EconTalk host Russ Roberts about the book and the role morality plays in prosperity. Rose argues that morality plays a crucial role in prosperity and economic development. Knowing that the people you trade with have a principled aversion to exploiting opportunities for cheating in dealing with others allows economic actors to trust one another. That in turn allows for the widespread specialization and interaction through markets with strangers that creates prosperity. In this conversation, Rose explores the nature of the principles that work best to engender trust. The conversation closes with a discussion of the current trend in morality in America and the implications for trust and prosperity.

See also this Wired article: The Neurobiology of Integrity.

In short, when people didn’t sell out their principles, it wasn’t because the price wasn’t right. It just seemed wrong. “There’s one bucket of things that are utilitarian, and another bucket of categorical things,” Berns said. “If it’s a sacred value to you, then you can’t even conceive of it in a cost-benefit framework.” ...

Whether sacred principles offer utilitarian benefits over long periods of time — many years, perhaps many generations, and at population-wide as well as individual scales — is beyond the current study design, but Berns suspects that one of their benefits is simplicity.

“My hypothesis about the Ten Commandments is that they exist because they’re too hard to think about on a cost-benefit basis,” he said. “It’s far easier to have a rule saying, ‘Thou shalt not commit adultery.’ It simplifies decisionmaking.”

Friday, January 20, 2012

US manufacturing jobs

Good manufacturing jobs that remain in the US will require significant skills, such as the ability to run capital-intensive equipment. Unfortunately, most of the population lacks the requisite abilities. This Atlantic article does a good job of contrasting the future prospects of two young workers at a plant that makes fuel injectors. What percentage of the US manufacturing workforce is capable of doing Luke's job, even after (free) retraining?

Making It in America

In the past decade, the flow of goods emerging from U.S. factories has risen by about a third. Factory employment has fallen by roughly the same fraction. The story of Standard Motor Products ... sheds light on both phenomena. It’s a story of hustle, ingenuity, competitive success, and promise for America’s economy. It also illuminates why the jobs crisis will be so difficult to solve.

... Maddie got her job at Standard through both luck and hard work. She was temping for a local agency and was sent to Standard for a three-day job washing walls in early 2011. “People came up to me and said, ‘You have to hire that girl—she is working so hard,’” Tony Scalzitti, the plant manager, told me. Maddie was hired back and assigned to the fuel-injector clean room, where she continued to impress people by working hard, learning quickly, and displaying a good attitude. But, as we’ll see, this may be about as far as hustle and personality can take her. In fact, they may not be enough even to keep her where she is.

... Luke Hutchins is one of Standard’s newest skilled machinists. ... He transferred to Spartanburg Community College hoping to study radiography, like his mother, but that class was full. A friend of a friend told him that you could make more than $30 an hour if you knew how to run factory machines, so he enrolled in the Machine Tool Technology program.

At Spartanburg, he studied math—a lot of math. “I’m very good at math,” he says. “I’m not going to lie to you. I got formulas written down in my head.” He studied algebra, trigonometry, and calculus. “If you know calculus, you definitely can be a machine operator or programmer.” He was quite good at the programming language commonly used in manufacturing machines all over the country, and had a facility for three-dimensional visualization—seeing, in your mind, what’s happening inside the machine—a skill, probably innate, that is required for any great operator.

... When Luke got hired at Standard, he had two years of technical schoolwork and five years of on-the-job experience, and it took one more month of training before he could be trusted alone with the Gildemeisters. All of which is to say that running an advanced, computer-controlled machine is extremely hard.

... Luke says that on a typical shift, he has to adjust the machine about 20 times to keep it on spec. A lot can happen to throw the tolerances off. The most common issue is that the cutting tool gradually wears down. As a result, Luke needs to tell the computer to move the tool a few microns closer, or make some other adjustment. If the operator programs the wrong number, the tool can cut right into the machine itself and destroy equipment worth tens of thousands of dollars.

Luke wants to better understand the properties of cutting tools, he told me, so he can be even more effective. “I’m not one of the geniuses on that. I know a little bit. A lot of people go to school just to learn the properties of tooling.” He also wants to learn more about metallurgy, and he’s especially eager to study industrial electronics. He says he will keep learning for his entire career.

In many ways, Luke personifies the dramatic shift in the U.S. industrial labor market. Before the rise of computer-run machines, factories needed people at every step of production, from the most routine to the most complex. The Gildemeister, for example, automatically performs a series of operations that previously would have required several machines—each with its own operator. It’s relatively easy to train a newcomer to run a simple, single-step machine. Newcomers with no training could start out working the simplest and then gradually learn others. Eventually, with that on-the-job training, some workers could become higher-paid supervisors, overseeing the entire operation. This kind of knowledge could be acquired only on the job; few people went to school to learn how to work in a factory.

... For Maddie to achieve her dreams—to own her own home, to take her family on vacation to the coast, to have enough saved up so her children can go to college—she’d need to become one of the advanced Level 2s. ...

It feels cruel to point out all the Level-2 concepts Maddie doesn’t know, although Maddie is quite open about these shortcomings. She doesn’t know the computer-programming language that runs the machines she operates; in fact, she was surprised to learn they are run by a specialized computer language. She doesn’t know trigonometry or calculus, and she’s never studied the properties of cutting tools or metals. She doesn’t know how to maintain a tolerance of 0.25 microns, or what tolerance means in this context, or what a micron is.

Tony explains that Maddie has a job for two reasons. First, when it comes to making fuel injectors, the company saves money and minimizes product damage by having both the precision and non-precision work done in the same place. Even if Mexican or Chinese workers could do Maddie’s job more cheaply, shipping fragile, half-finished parts to another country for processing would make no sense. Second, Maddie is cheaper than a machine. It would be easy to buy a robotic arm that could take injector bodies and caps from a tray and place them precisely in a laser welder. Yet Standard would have to invest about $100,000 on the arm and a conveyance machine to bring parts to the welder and send them on to the next station. As is common in factories, Standard invests only in machinery that will earn back its cost within two years. For Tony, it’s simple: Maddie makes less in two years than the machine would cost, so her job is safe—for now. If the robotic machines become a little cheaper, or if demand for fuel injectors goes up and Standard starts running three shifts, then investing in those robots might make sense.

“What worries people in factories is electronics, robots,” she tells me. “If you don’t know jack about computers and electronics, then you don’t have anything in this life anymore. One day, they’re not going to need people; the machines will take over. People like me, we’re not going to be around forever.”...

See also this old post Outsourcing vs technological innovation.

Another related article from the NYTimes, this time about Apple and Foxconn. Excellent video.

NYTimes: ... Companies like Apple “say the challenge in setting up U.S. plants is finding a technical work force,” said Martin Schmidt, associate provost at the Massachusetts Institute of Technology. In particular, companies say they need engineers with more than high school, but not necessarily a bachelor’s degree. Americans at that skill level are hard to find, executives contend. “They’re good jobs, but the country doesn’t have enough to feed the demand,” Mr. Schmidt said.

Wednesday, January 18, 2012

Gracie Breakdown: heel hook edition

Gracie Breakdown of UFC 142, leading off with the Rousimar Palhares heel hook finish.

Back in the day when grappling and BJJ were still fringe activities, I often had to travel to strange clubs to find training. It was intimidating to visit a new school where I didn't know anyone, even more so to spar with people who could easily injure me. The one submission I was most afraid of was the heel hook. The two serious injuries I sustained in years of training were from a straight armbar (juji gatame) and a heel hook, which strained the tendons around my knee. The heel hook is much more effective on the street, where the opponent is likely to be wearing shoes and pants (escaping by pulling the leg out is much harder than in MMA), although there are also reasons not to pull guard in a street fight.

Here's a Palhares highlight video. Beautiful jiujitsu and very dangerous leglocks.

Monday, January 16, 2012

How did East Asians become "yellow"?

I previously recommended the podcast New Books in History, hosted by University of Iowa historian Marshall Poe. I noticed recently that the format has been adopted by professor podcasters in other fields, including Sociology, Philosophy, Policy Studies, Military History, etc. For example, here are the podcasts from New Books in East Asian Studies.

I found the interview with Michael Keevak on his recent book (below) quite interesting. It is amusing that Native Americans are "red", whereas E. Asians are "yellow". Keevak notes that European travelers to Asia before the 18th century never used this characterization. The earliest reference Keevak can find where the terminology is used is in a classification of races of man by Carl Linnaeus.

See earlier post Yellow Peril: 2010 and 1920.

Becoming Yellow: A Short History of Racial Thinking

In their earliest encounters with Asia, Europeans almost uniformly characterized the people of China and Japan as white. This was a means of describing their wealth and sophistication, their willingness to trade with the West, and their presumed capacity to become Christianized. But by the end of the seventeenth century the category of whiteness was reserved for Europeans only. When and how did Asians become "yellow" in the Western imagination? Looking at the history of racial thinking, Becoming Yellow explores the notion of yellowness and shows that this label originated not in early travel texts or objective descriptions, but in the eighteenth- and nineteenth-century scientific discourses on race.

From the walls of an ancient Egyptian tomb, which depicted people of varying skin tones including yellow, to the phrase "yellow peril" at the beginning of the twentieth century in Europe and America, Michael Keevak follows the development of perceptions about race and human difference. He indicates that the conceptual relationship between East Asians and yellow skin did not begin in Chinese culture or Western readings of East Asian cultural symbols, but in anthropological and medical records that described variations in skin color. Eighteenth-century taxonomers such as Carl Linnaeus, as well as Victorian scientists and early anthropologists, assigned colors to all racial groups, and once East Asians were lumped with members of the Mongolian race, they began to be considered yellow.

Demonstrating how a racial distinction took root in Europe and traveled internationally, Becoming Yellow weaves together multiple narratives to tell the complex history of a problematic term.

Michael Keevak is a professor in the Department of Foreign Languages at National Taiwan University.

Sunday, January 15, 2012

Lana Del Rey

From internet sensation to SNL last night.

Call me crazy, but Video Games is a great song and could become part of the indie pop canon -- check out all the covers on YouTube that have appeared just in the last few months. One critic writes: ... the music video "flits between surrendering to romance and depression, moving with the elegant wastefulness of the kind of day drunk that's a true privilege of the beautiful, idle class."

Friday, January 13, 2012

Inside Duke: hurting the ones we love?

This very interesting study had access to comprehensive data ranging from Duke admissions office evaluations of applicants, to students' intended majors and subsequent shifts, to grades awarded and student composition (including abilities!) for each course offered at Duke. Interesting factoid: 40% of fathers of White students at Duke have doctorates.

For similar studies (although not emphasizing ethnicity) using U Oregon data, see Data mining the University, Psychometric thresholds for physics and mathematics.

What Happens After Enrollment? An Analysis of the Time Path of Racial Differences in GPA and Major Choice

Peter Arcidiacono, Esteban M. Aucejo, Ken Spenner

May 24, 2011

If affirmative action results in minority students at elite schools having much potential but weak preparation, then we may expect minority students to start off behind their majority counterparts and then catch up over time. Indeed, at the private university we analyze, the gap between white and black grade point averages falls by half between the students' freshmen and senior year. However, this convergence masks two effects. First, the variance of grades given falls across time. Hence, shrinkage in the level of the gap may not imply shrinkage in the class rank gap. Second, grading standards differ across courses in different majors. We show that controlling for these two features virtually eliminates any convergence of black/white grades. In fact, black/white gpa convergence is symptomatic of dramatic shifts by blacks from initial interest in the natural sciences, engineering, and economics to majors in the humanities and social sciences. We show that natural science, engineering, and economics courses are more difficult, associated with higher study times, and have harsher grading standards; all of which translate into students with weaker academic backgrounds being less likely to choose these majors. Indeed, we show that accounting for academic background can fully account for differences in switching behaviors across blacks and whites.

For a review of Richard Sander's analysis of affirmative action in law school admissions, see here.

The results of all of these studies can be summarized as: to first approximation, psychometric predictors work, and in an unbiased way across ethnicities.

Tuesday, January 10, 2012

James Crow colloquium

[[ Embedded version no longer works, but follow link below to view ]]

Excellent colloquium by James Crow (who passed away recently) emphasizing the importance and ubiquity of additive genetic variance. See earlier post -- the paper linked there covers similar material.

@28 min, Nagylaki came to population genetics after doing his PhD under Feynman at Caltech! Here is his tour de force result, mentioned by Crow in the talk. Interested physicists, see also here.

Monday, January 09, 2012

"Phantom" heritability

The mystery of missing heritability: Genetic interactions create phantom heritability

Or Zuk, Eliana Hechter, Shamil R. Sunyaev, and Eric S. Lander

Human genetics has been haunted by the mystery of “missing heritability” of common traits. Although studies have discovered >1,200 variants associated with common diseases and traits, these variants typically appear to explain only a minority of the heritability. The proportion of heritability explained by a set of variants is the ratio of (i) the heritability due to these variants (numerator), estimated directly from their observed effects, to (ii) the total heritability (denominator), inferred indirectly from population data. The prevailing view has been that the explanation for missing heritability lies in the numerator—that is, in as-yet undiscovered variants. While many variants surely remain to be found, we show here that a substantial portion of missing heritability could arise from overestimation of the denominator, creating “phantom heritability.” Specifically, (i) estimates of total heritability implicitly assume the trait involves no genetic interactions (epistasis) among loci; (ii) this assumption is not justified, because models with interactions are also consistent with observable data; and (iii) under such models, the total heritability may be much smaller and thus the proportion of heritability explained much larger. For example, 80% of the currently missing heritability for Crohn's disease could be due to genetic interactions, if the disease involves interaction among three pathways. In short, missing heritability need not directly correspond to missing variants, because current estimates of total heritability may be significantly inflated by genetic interactions. Finally, we describe a method for estimating heritability from isolated populations that is not inflated by genetic interactions.
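To make the abstract's mechanism concrete, here is a toy Monte Carlo sketch (my own illustration, not the paper's code or parameters) of a limiting-pathway-style trait: the phenotype is set by the worst of k genetic pathways, each assumed additive in many small-effect loci. The Falconer twin estimate 2(r_MZ - r_DZ), which assumes no epistasis, then overstates the true narrow-sense heritability -- the "phantom" component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000          # twin pairs
k = 3                # interacting genetic pathways (cf. the Crohn's example)
env_sd = 0.7         # arbitrary environmental noise scale

def pathways(shared):
    """Pathway scores for a twin pair with genetic correlation `shared`."""
    g1 = rng.standard_normal((n, k))
    g2 = shared * g1 + np.sqrt(1 - shared**2) * rng.standard_normal((n, k))
    return g1, g2

def trait(g):
    # Limiting-pathway interaction: phenotype set by the weakest pathway,
    # plus independent environmental noise.
    return g.min(axis=1) + env_sd * rng.standard_normal(len(g))

# MZ twins share all genes (r = 1); DZ pathway scores correlate 0.5.
g1, g2 = pathways(1.0); r_mz = np.corrcoef(trait(g1), trait(g2))[0, 1]
g1, g2 = pathways(0.5); r_dz = np.corrcoef(trait(g1), trait(g2))[0, 1]
h2_falconer = 2 * (r_mz - r_dz)   # standard twin estimate, assumes additivity

# True narrow-sense heritability: variance of the best *linear* predictor
# of the trait from the pathway scores, over total phenotypic variance.
g = rng.standard_normal((n, k))
p = trait(g)
beta, *_ = np.linalg.lstsq(g, p - p.mean(), rcond=None)
h2_true = np.var(g @ beta) / np.var(p)

print(h2_falconer, h2_true)   # Falconer estimate exceeds the additive h^2
```

The gap between the two numbers is the phantom heritability in this toy model; with purely additive pathways (trait = sum instead of min) the two estimates coincide.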

This new paper by Eric Lander and collaborators is attracting a fair amount of interest: gnxp, genetic inference, genomes unzipped. The paper is discussed at some length at the links above. I will just make a few comments.

1. The non-additive models analyzed in the paper require significant shared environment correlations to mask non-additivity and be consistent with data that (at face value) support additivity. See Table 7 in the Supplement. This level of environmental effect is, in the cases of height and g, probably excluded by adoption studies, although it may still be allowed for many disease traits. To put this another way, even after reading this paper I do not know of any models consistent with what is known about height and g that do not have a large additive component (e.g., of order 50 percent of total variance).

2. The criticisms, in section 11 of the Supplement, of Hill, Goddard, and Visscher (2008; also discussed previously here) are, to my mind, rather weak. To quote a string theorist friend: "It is nothing more than the calculus of words" ;-) In particular, I flat out disagree with the following (p.46 of the Supplement):

The problem with this reasoning is: As the population grows (and the typical locus tends toward monomorphism), typical traits involving typical loci become very boring! They not only have low interaction variance VAA, they also have very low total genetic variance VG. That is, the typical trait doesn't vary much in the population! In effect, Hill et al.'s theory thus actually describes what happens for rare traits caused by a few rare variants. Not surprisingly, interactions account for a small proportion of the variance for such traits.

[[ Nope, one could also take the rarity to zero and the number of causal variants to infinity while holding the population variance fixed! This seems to be what happens in the real world with quantitative traits like height and g, which have thousands of causal variants, each of small effect. ]]

"Doesn't vary much" is not well-defined: relative to what? What if the genetic variance in this limit is still much larger than the environmental component? Do height and IQ "vary much" in human populations? Having only moderately rare variants (e.g., MAF = .1-.2), but many of them, is consistent with normally distributed population variation and small non-additive effects (.2 squared is 4 percent). Below is figure 9 from the Supplement -- click for larger version. As the frequency p approaches zero (or unity), the additive variance (green curve) dominates and the non-additive part (blue curve) becomes small. Whether the total genetic variance (red curve) is big or small should be judged relative to the size of environmental effects, which are not shown. Note that the green and blue curves are dimensionless ratios of variances, whereas the red curve ultimately (after multiplication by effect size) has real units like cm of height or IQ points.

The essence of Hill et al. is discussed in the earlier post (see comments).

Yes, one of the main points of the paper I cited is that one can have strong epistasis at the level of individual genes, but if variants are rare, the effect in a population will be linear.

"These two examples, the single locus and A x A model, illustrate what turns out to be the fundamental point in considering the impact of the gene frequency distribution. When an allele (say C) is rare, so most individuals have genotype Cc or cc, the allelic substitution or average effect of C vs. c accounts for essentially all the differences found in genotypic values; or in other words the linear regression of genotypic value on number of C genes accounts for the genotypic differences (see [3], p 117)." [p.5]

Note Added: For more on additivity vs epistasis, I suggest this talk by James Crow. Among other things he makes an evolutionary argument for why we should expect to find lots of additive variation at the population or individual level, despite the presence of lots of epistasis at the gene level. It is much more difficult for evolution to act on non-additive variance than on additive variance in a sexually reproducing species.

Thursday, January 05, 2012

Eric Lander profile

The NYTimes ran a nice profile of Broad Institute director Eric Lander a few days ago.

Interestingly, the article emphasizes several factors aside from his formidable intellect that are responsible for his success as a scientist: curiosity, openness, extraversion, ambition, drive ...

NYTimes: ... He was so good that he was chosen for the American team in the 1974 Mathematics Olympiad. To prepare, the team spent a summer training at Rutgers University in New Brunswick, N.J.

This was the first time the United States had entered the competition, and the coaches were afraid the team would be decimated by entrants from Communist countries. (Indeed, the Soviet Union placed first, but the Americans came in second, just ahead of Hungary, which was known for its mathematics talent.)

Dr. Zeitz was Dr. Lander’s roommate that summer. The two recall being the only team members who did not come from affluent suburban families, and the only ones who did not have fathers. But Eric stood out for other reasons.

“He was outgoing,” Dr. Zeitz recalled. “He was, compared to the rest of us, definitely more ambitious. He was enthusiastic about everything. And he had a real charisma.” Team members decided that Dr. Lander was the only one among them whom they could imagine becoming a United States senator one day.

At first, though, it looked as if the young mathematician would follow a traditional academic path. He went to Princeton, majoring in mathematics but also indulging a passion for writing. He took a course in narrative nonfiction with the author John McPhee and wrote for the campus newspaper.

He graduated as valedictorian at age 20, won a Rhodes scholarship, went to Oxford and earned a mathematics Ph.D. there in record time — two years. Yet he was unsettled by the idea of spending the rest of his life as a mathematician.

“I began to appreciate that the career of mathematics is rather monastic,” Dr. Lander said. “Even though mathematics was beautiful and I loved it, I wasn’t a very good monk.” He craved a more social environment, more interactions.

“I found an old professor of mine and said, ‘What can I do that makes some use of my talents?’ ” He ended up at Harvard Business School, teaching managerial economics.

He had never studied the subject, he confesses, but taught himself as he went along. “I learned it faster than the students did,” Dr. Lander said.

Yet at 23, he was growing restless, craving something more challenging. Managerial economics, he recalled, “wasn’t deep enough.”

He spoke to his brother, Arthur, a neurobiologist, who sent him mathematical models of how the cerebellum worked. The models “seemed hokey,” Dr. Lander said, “but the brain was interesting.”

His appetite for biology whetted, he began hanging around a fruit-fly genetics lab at Harvard. A few years later, he talked the business school into giving him a leave of absence.

He told Harvard he would go to M.I.T., probably to learn about artificial intelligence. Instead, he ended up spending his time in Robert Horvitz’s worm genetics lab. And that led to the spark that changed his life. ...

Tuesday, January 03, 2012

Mathematical minds

A colleague recommended this beautifully written Quora answer concerning the nature of mathematical thinking. I recommend reading it in its entirety.

The particularly "abstract" or "technical" parts of many other subjects seem quite accessible because they boil down to maths you already know. You generally feel confident about your ability to learn most quantitative ideas and techniques. A theoretical physicist friend likes to say, only partly in jest, that there should be books titled "______ for Mathematicians", where _____ is something generally believed to be difficult (quantum chemistry, general relativity, securities pricing, formal epistemology). Those books would be short and pithy, because many key concepts in those subjects are ones that mathematicians are well equipped to understand. Often, those parts can be explained more briefly and elegantly than they usually are if the explanation can assume a knowledge of maths and a facility with abstraction.

Learning the domain-specific elements of a different field can still be hard -- for instance, physical intuition and economic intuition seem to rely on tricks of the brain that are not learned through mathematical training alone. But the quantitative and logical techniques you sharpen as a mathematician allow you to take many shortcuts that make learning other fields easier, as long as you are willing to be humble and modify those mathematical habits that are not useful in the new field.

You move easily between multiple seemingly very different ways of representing a problem. For example, most problems and concepts have more algebraic representations (closer in spirit to an algorithm) and more geometric ones (closer in spirit to a picture). You go back and forth between them naturally, using whichever one is more helpful at the moment. ...

Spoiled by the power of your best tools, you tend to shy away from messy calculations or long, case-by-case arguments unless they are absolutely unavoidable. Mathematicians develop a powerful attachment to elegance and depth, which are in tension with, if not directly opposed to, mechanical calculation. Mathematicians will often spend days thinking of a clean argument that completely avoids numbers and strings of elementary deductions in favor of seeing why what they want to show follows easily from some very deep and general pattern that is already well-understood. Indeed, you tend to choose problems motivated by how likely it is that there will be some "clean" insight in them, as opposed to a detailed but ultimately unenlightening proof by exhaustively enumerating a bunch of possibilities. In A Mathematician's Apology [http://www.math.ualberta.ca/~mss..., the most poetic book I know on what it is "like" to be a mathematician], G.H. Hardy wrote:

"In both [these example] theorems (and in the theorems, of course, I include the proofs) there is a very high degree of unexpectedness, combined with inevitability and economy. The arguments take so odd and surprising a form; the weapons used seem so childishly simple when compared with the far-reaching results; but there is no escape from the conclusions. There are no complications of detail—one line of attack is enough in each case; and this is true too of the proofs of many much more difficult theorems, the full appreciation of which demands quite a high degree of technical proficiency. We do not want many ‘variations’ in the proof of a mathematical theorem: ‘enumeration of cases’, indeed, is one of the duller forms of mathematical argument. A mathematical proof should resemble a simple and clear-cut constellation, not a scattered cluster in the Milky Way." ...

You are good at generating your own questions and your own clues in thinking about some new kind of abstraction. One of the things I've reliably heard from people who know parts of mathematics well but never went on to be professional mathematicians (i.e., write articles about new mathematics for a living) is that they were good at proving difficult propositions that were stated in a textbook exercise, but would be lost if presented with a mathematical structure and asked to find and prove some "interesting" facts about it. ...

Concretely, this amounts to being good at making definitions and formulating precise conjectures using the newly defined concepts that other mathematicians find interesting. One of the things one learns fairly late in a typical mathematics education (often only at the stage of starting to do research) is how to make good, useful definitions. ...

Sunday, January 01, 2012

Genomic prediction

This recent paper gives a sense of the current state of the art in quantitative genetics. Height is one of the easiest phenotypes to measure, so almost every medical (disease) GWAS provides some additional data -- IIRC, about 200k phenotype/genotype pairs are available for analysis. With a few hundred associated variants detected (depending on how one defines the discovery threshold), one can start to construct predictors like the Weighted Allele Score (WAS) shown below (which is essentially the breeding value from population genetics). See related posts here, here and here.
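A WAS of this kind is just a linear predictor: the sum, over associated variants, of the individual's allele count times the previously estimated per-allele effect size. A minimal sketch (all numbers below are made up for illustration; real effect sizes would come from the discovery GWAS):

```python
import numpy as np

# Toy weighted allele score (WAS) / polygenic predictor.
# genotypes: n_individuals x n_variants matrix of minor-allele counts (0, 1, 2).
# betas: per-allele effect sizes estimated in a discovery sample
#        (for height, in units of cm per allele).

rng = np.random.default_rng(0)
n_ind, n_snp = 5, 100
genotypes = rng.binomial(2, 0.3, size=(n_ind, n_snp))  # toy genotype matrix
betas = rng.normal(0.0, 0.1, size=n_snp)               # toy effect sizes

was = genotypes @ betas  # one score per individual
print(was)
```

With effect sizes estimated out of sample, the correlation between WAS and actual phenotype is limited by the fraction of variance the discovered loci capture -- hence the interest in what the figure would look like at 50% or 80% of variance rather than the current ~10%.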

It is interesting to think about what a similar figure would look like once loci accounting for 50% or 80% of total variance have been identified. (The current value is about 10%.) I would guess this will happen within 5-10 years (approx. 10^7 individuals of known height genotyped).

Common Variants Show Predicted Polygenic Effects on Height in the Tails of the Distribution, Except in Extremely Short Individuals

PLoS Genet 7(12): e1002439. doi:10.1371/journal.pgen.1002439

Abstract: Common genetic variants have been shown to explain a fraction of the inherited variation for many common diseases and quantitative traits, including height, a classic polygenic trait. The extent to which common variation determines the phenotype of highly heritable traits such as height is uncertain, as is the extent to which common variation is relevant to individuals with more extreme phenotypes. To address these questions, we studied 1,214 individuals from the top and bottom extremes of the height distribution (tallest and shortest ~1.5%), drawn from ~78,000 individuals from the HUNT and FINRISK cohorts. We found that common variants still influence height at the extremes of the distribution: common variants (49/141) were nominally associated with height in the expected direction more often than is expected by chance (p < 5 x 10^-28), and the odds ratios in the extreme samples were consistent with the effects estimated previously in population-based data. To examine more closely whether the common variants have the expected effects, we calculated a weighted allele score (WAS), which is a weighted prediction of height for each individual based on the previously estimated effect sizes of the common variants in the overall population. The average WAS is consistent with expectation in the tall individuals, but was not as extreme as expected in the shortest individuals (p < 0.006), indicating that some of the short stature is explained by factors other than common genetic variation. The discrepancy was more pronounced (p < 10^-6) in the most extreme individuals (height < 0.25 percentile). The results at the extreme short tails are consistent with a large number of models incorporating either rare genetic non-additive or rare non-genetic factors that decrease height. We conclude that common genetic variants are associated with height at the extremes as well as across the population, but that additional factors become more prominent at the shorter extreme.
