Information Processing

Pessimism of the Intellect, Optimism of the Will

Thursday, March 05, 2015

Garbage, Junk, and non-coding DNA

About 1% of the genome codes for actual proteins: these regions are the ~20k "genes" that receive most of the attention. (Usage of the term "gene" seems to be somewhat inconsistent, sometimes meaning "unit of heredity," sometimes "coding region," sometimes "functional region" ...) There's certainly much more biologically important information in the genome than just the coding regions, but the question is how much? One of the researchers quoted below estimates that 8% is functional, but it could be much more.

See also Adaptive evolution and non-coding regions.
NYTimes: Is Most of Our DNA Garbage?

... Rinn studies RNA, but not the RNA that our cells use as a template for making proteins. Scientists have long known that the human genome contains some genes for other types of RNA: strands of bases that carry out other jobs in the cell, like helping to weld together the building blocks of proteins. In the early 2000s, Rinn and other scientists discovered that human cells were reading thousands of segments of their DNA, not just the coding parts, and producing RNA molecules in the process. They wondered whether these RNA molecules could be serving some vital function.

... In December 2013, Rinn and his colleagues published the first results of their search: three potential new genes for RNA that appear to be essential for a mouse’s survival. To investigate each potential gene, the scientists removed one of the two copies in mice. When the mice mated, some of their embryos ended up with two copies of the gene, some with one and some with none. If these mice lacked any of these three pieces of DNA, they died in utero or shortly after birth. “You take away a piece of junk DNA, and the mouse dies,” Rinn said. “If you can come up with a criticism of that, go ahead. But I’m pretty satisfied. I’ve found a new piece of the genome that’s required for life.”

... To some biologists, discoveries like Rinn’s hint at a hidden treasure house in our genome. Because a few of these RNA molecules have turned out to be so crucial, they think, the rest of the noncoding genome must be crammed with riches. But to Gregory and others, that is a blinkered optimism worthy of Dr. Pangloss. They, by contrast, are deeply pessimistic about where this research will lead. Most of the RNA molecules that our cells make will probably not turn out to perform the sort of essential functions that hotair and firre do. Instead, they are nothing more than what happens when RNA-making proteins bump into junk DNA from time to time.

... One news release from an N.I.H. project declared, “Much of what has been called ‘junk DNA’ in the human genome is actually a massive control panel with millions of switches regulating the activity of our genes.” Researchers like Gregory consider this sort of rhetoric to be leaping far beyond the actual evidence.

... Over millions of years, essential genes haven’t changed very much, while junk DNA has picked up many harmless mutations. Scientists at the University of Oxford have measured evolutionary change over the past 100 million years at every spot in the human genome. “I can today say, hand on my heart, that 8 percent, plus or minus 1 percent, is what I would consider functional,” Chris Ponting, an author of the study, says. And the other 92 percent? “It doesn’t seem to matter that much,” he says. ...

Wednesday, March 04, 2015

Short stories

Yesterday I listened to this interview with the fiction editor of the New Yorker:



Deborah Treisman, fiction editor at The New Yorker, discusses the magazine's 90th anniversary and the canon of fiction it published.

She didn't mention Irwin Shaw's 1939 classic The Girls in Their Summer Dresses. According to James Salter, Shaw wrote it in a single morning.
... "I like the girls in the offices. Neat, with their eyeglasses, smart, chipper, knowing what everything is about, taking care of themselves all the time." He kept his eye on the people going slowly past outside the window. "I like the girls on Forty-fourth Street at lunchtime, the actresses, all dressed up on nothing a week, talking to the good-looking boys, wearing themselves out being young and vivacious outside Sardi's, waiting for producers to look at them. I like the salesgirls in Macy's, paying attention to you first because you're a man, leaving lady customers waiting, flirting with you over socks and books and phonograph needles. I got all this stuff accumulated in me because I've been thinking about it for ten years and now you've asked for it and here it is."

"Go ahead," Frances said.

"When I think of New York City, I think of all the girls, the Jewish girls, the Italian girls, the Irish, Polack, Chinese, German, Negro, Spanish, Russian girls, all on parade in the city. I don't know whether it's something special with me or whether every man in the city walks around with the same feeling inside him, but I feel as though I'm at a picnic in this city. I like to sit near the women in the theaters, the famous beauties who've taken six hours to get ready and look it. And the young girls at the football games, with the red cheeks, and when the warm weather comes, the girls in their summer dresses . . ." He finished his drink. "That's the story. You asked for it, remember. I can't help but look at them. I can't help but want them." ...
Irwin Shaw is largely forgotten now, despite having been a giant in his own time. He was a hero to the young Salter when the two first met in Paris. They stayed friends until the end.
Burning the Days: ... in the winter of his life ... the overarching trees were letting their leaves fall, the large world he knew was closing. Was he going to write these things down? No, he said without hesitation. "Who cares?"

He wanted immortality of course. "What else is there?" Life passes into pages if it passes into anything, and his had been written. ...

From the Paris Review:
I wrote “The Girls in Their Summer Dresses” one morning while Marian was lying in bed and reading. And I knew I had something good there, but I didn’t want her to read it, knowing that the reaction would be violent, to say the least, because it’s about a man who tells his wife that he’s going to be unfaithful to her. So I turned it facedown, and I said, “Don’t read this yet. It’s not ready.” It was the only copy I had. Then I went out and took a walk, had a drink, and came back. She was raging around the room. She said, “It’s a lucky thing you came back just now, because I was going to open the window and throw it out.” Since then she’s become reconciled to it, and I think she reads it with pleasure, too.

Thursday, February 26, 2015

Second-generation PLINK

"... these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM"  :-)

Interview with author Chris Chang. Users' Google group.

If one estimates a user population of ~1000, each saving of order $1000 in CPU/work time per year, then in the next few years PLINK 1.9 and its successors will deliver millions of dollars in value to the scientific community.
Second-generation PLINK: rising to the challenge of larger and richer datasets

Background
PLINK 1 is a widely used open-source C/C++ toolset for genome-wide association studies (GWAS) and research in population genetics. However, the steady accumulation of data from imputation and whole-genome sequencing studies has exposed a strong need for faster and scalable implementations of key functions, such as logistic regression, linkage disequilibrium estimation, and genomic distance evaluation. In addition, GWAS and population-genetic data now frequently contain genotype likelihoods, phase information, and/or multiallelic variants, none of which can be represented by PLINK 1’s primary data format.

Findings
To address these issues, we are developing a second-generation codebase for PLINK. The first major release from this codebase, PLINK 1.9, introduces extensive use of bit-level parallelism, O(√n)-time/constant-space Hardy-Weinberg equilibrium and Fisher’s exact tests, and many other algorithmic improvements. In combination, these changes accelerate most operations by 1-4 orders of magnitude, and allow the program to handle datasets too large to fit in RAM. We have also developed an extension to the data format which adds low-overhead support for genotype likelihoods, phase, multiallelic variants, and reference vs. alternate alleles, which is the basis of our planned second release (PLINK 2.0).

Conclusions
The second-generation versions of PLINK will offer dramatic improvements in performance and compatibility. For the first time, users without access to high-end computing resources can perform several essential analyses of the feature-rich and very large genetic datasets coming into use.

Evidence for polygenicity in GWAS

This paper describes a method to distinguish between polygenic causality and confounding (e.g., from population structure) in GWAS.

LD Score regression distinguishes confounding from polygenicity in genome-wide association studies

Nature Genetics 47, 291–295 (2015) doi:10.1038/ng.3211

Both polygenicity (many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from a true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of the inflation in test statistics in many GWAS of large sample size.
The basic idea is straightforward, but the technique yields good evidence for polygenicity.
Variants in LD with a causal variant show an elevation in test statistics in association analysis proportional to their LD (measured by r²) with the causal variant [1–3]. The more genetic variation an index variant tags, the higher the probability that this index variant will tag a causal variant. In contrast, inflation from cryptic relatedness within or between cohorts [4–6] or population stratification purely from genetic drift will not correlate with LD.

...

Real data

Finally, we applied LD Score regression to summary statistics from GWAS representing more than 20 different phenotypes [15–32] (Table 1 and Supplementary Fig. 8a–w; metadata about the studies in the analysis are presented in Supplementary Table 8a,b). For all studies, the slope of the LD Score regression was significantly greater than zero and the LD Score regression intercept was substantially less than λ_GC (mean difference of 0.11), suggesting that polygenicity accounts for a majority of the increase in the mean χ² statistic and confirming that correcting test statistics by dividing by λ_GC is unnecessarily conservative. As an example, we show the LD Score regression for the most recent schizophrenia GWAS, restricted to ~70,000 European-ancestry individuals (Fig. 2) [32]. The low intercept of 1.07 indicates at most a small contribution of bias and that the mean χ² statistic of 1.613 results mostly from polygenicity.
Figures from the Supplement. "Years of Education" refers to the SSGAC study which identified the first SNPs associated with cognitive ability. See First Hits for Cognitive Ability, and more posts here.
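Here is a minimal sketch of the core idea (my own illustration, not the authors' ldsc software; the SNP count, sample size, heritability, and confounding values are made up): simulate χ² statistics under the model described in the paper, then regress them on LD scores. The slope reflects polygenic signal, the intercept reflects confounding.

```python
# Minimal sketch of the LD Score regression idea (illustrative, not the
# authors' ldsc software).  Under the model in the paper, the expected
# chi-square of SNP j is roughly
#   E[chi2_j] = 1 + (confounding inflation) + (N * h2 / M) * ell_j,
# where ell_j is the SNP's LD score, so regressing chi-square statistics on
# LD scores separates polygenic signal (slope) from bias (intercept above 1).

import numpy as np

rng = np.random.default_rng(0)

M = 100_000        # number of SNPs (made up for the toy example)
N = 70_000         # GWAS sample size
h2 = 0.3           # true SNP heritability (made up)
inflation = 0.05   # extra inflation from confounding (made up)

ell = rng.gamma(shape=2.0, scale=50.0, size=M)    # toy LD scores
expected = 1.0 + inflation + (N * h2 / M) * ell   # model mean of chi2
chi2 = expected * rng.chisquare(df=1, size=M)     # noisy chi2 statistics

slope, intercept = np.polyfit(ell, chi2, deg=1)
print(f"intercept ~ {intercept:.2f}   (estimates 1 + confounding inflation)")
print(f"slope * M / N ~ {slope * M / N:.2f}   (rough h2 estimate)")
```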

Monday, February 23, 2015

Back to the deep



The Chronicle has a nice profile of Geoffrey Hinton, which details some of the history behind neural nets and deep learning. See also Neural networks and deep learning and its sequel.

The recent flourishing of deep neural nets is not primarily due to theoretical advances, but rather to the appearance of GPUs and large training data sets.
Chronicle: ... Hinton has always bucked authority, so it might not be surprising that, in the early 1980s, he found a home as a postdoc in California, under the guidance of two psychologists, David E. Rumelhart and James L. McClelland, at the University of California at San Diego. "In California," Hinton says, "they had the view that there could be more than one idea that was interesting." Hinton, in turn, gave them a uniquely computational mind. "We thought Geoff was remarkably insightful," McClelland says. "He would say things that would open vast new worlds."

They held weekly meetings in a snug conference room, coffee percolating at the back, to find a way of training their error correction back through multiple layers. Francis Crick, who co-discovered DNA’s structure, heard about their work and insisted on attending, his tall frame dominating the room even as he sat on a low-slung couch. "I thought of him like the fish in The Cat in the Hat," McClelland says, lecturing them about whether their ideas were biologically plausible.

The group was too hung up on biology, Hinton said. So what if neurons couldn’t send signals backward? They couldn’t slavishly recreate the brain. This was a math problem, he said, what’s known as getting the gradient of a loss function. They realized that their neurons couldn’t be on-off switches. If you picture the calculus of the network like a desert landscape, their neurons were like drops off a sheer cliff; traffic went only one way. If they treated them like a more gentle mesa—a sigmoidal function—then the neurons would still mostly act as a threshold, but information could climb back up.

...

A decade ago, Hinton, LeCun, and Bengio conspired to bring them back. Neural nets had a particular advantage compared with their peers: While they could be trained to recognize new objects—supervised learning, as it’s called—they should also be able to identify patterns on their own, much like a child, if left alone, would figure out the difference between a sphere and a cube before its parent says, "This is a cube." If they could get unsupervised learning to work, the researchers thought, everyone would come back. By 2006, Hinton had a paper out on "deep belief networks," which could run many layers deep and learn rudimentary features on their own, improved by training only near the end. They started calling these artificial neural networks by a new name: "deep learning." The rebrand was on.

Before they won over the world, however, the world came back to them. That same year, a different type of computer chip, the graphics processing unit, became more powerful, and Hinton’s students found it to be perfect for the punishing demands of deep learning. Neural nets got 30 times faster overnight. Google and Facebook began to pile up hoards of data about their users, and it became easier to run programs across a huge web of computers. One of Hinton’s students interned at Google and imported Hinton’s speech recognition into its system. It was an instant success, outperforming voice-recognition algorithms that had been tweaked for decades. Google began moving all its Android phones over to Hinton’s software.

It was a stunning result. These neural nets were little different from what existed in the 1980s. This was simple supervised learning. It didn’t even require Hinton’s 2006 breakthrough. It just turned out that no other algorithm scaled up like these nets. "Retrospectively, it was a just a question of the amount of data and the amount of computations," Hinton says. ...
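The sigmoid point above is worth unpacking. Here's a tiny illustration (my own toy example, nothing to do with Hinton's actual code): a hard threshold has zero gradient almost everywhere, so error signals can't propagate back through it, while a differentiable "soft threshold" lets gradient descent adjust upstream weights via the chain rule.

```python
# Toy illustration (not Hinton's code) of why the sigmoid mattered: a hard
# threshold has zero gradient almost everywhere, so error cannot flow back
# through it, while a sigmoid is differentiable and lets gradient descent
# adjust an upstream weight via the chain rule -- the core of backprop.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# One sigmoid unit, one weight, squared-error loss on a single example.
x, target = 2.0, 1.0
w = -1.0        # start with a weight that gives the wrong answer
lr = 0.5

for _ in range(500):
    z = w * x
    y = sigmoid(z)
    # dL/dw = dL/dy * dy/dz * dz/dw  (chain rule)
    grad = 2.0 * (y - target) * d_sigmoid(z) * x
    w -= lr * grad

print(f"learned weight {w:.2f}, output {sigmoid(w * x):.3f}, target {target}")
```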

Friday, February 20, 2015

Coding for kids

I've been trying to get my kids interested in coding. I found this nice game called Lightbot, in which one writes simple programs that control the discrete movements of a bot. It's very intuitive and in just one morning my kids learned quite a bit about the idea of an algorithm and the notion of a subroutine or loop. Some of the problems (e.g., involving nested loops) are challenging.
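For what it's worth, here's a toy Python analogy (not the game's actual command set, just an illustration) of what a Lightbot program amounts to: a main routine built from primitive instructions, a named subroutine, and a loop that reuses it.

```python
# Toy analogy to Lightbot-style programs (illustrative only, not the game's
# actual command set): the "program" is a list of primitive instructions,
# a subroutine is a named list that can be reused, and a loop repeats it.

def run(program, procedures, state=None):
    """Interpret a list of instructions, expanding CALL:<name> from procedures."""
    state = state if state is not None else {"pos": 0, "lights": set()}
    for op in program:
        if op == "FORWARD":
            state["pos"] += 1
        elif op == "LIGHT":
            state["lights"].add(state["pos"])
        elif op.startswith("CALL:"):
            run(procedures[op[5:]], procedures, state)
        else:
            raise ValueError(f"unknown instruction {op!r}")
    return state

procedures = {"step_and_light": ["FORWARD", "LIGHT"]}
main = ["LIGHT"] + ["CALL:step_and_light"] * 3   # a loop: call the subroutine 3 times

print(run(main, procedures))   # lights at positions 0, 1, 2, 3
```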

Browser (Flash?) version.

There are Android and iOS versions as well.



Other coding for kids recommendations?

STEM, Gender, and Leaky Pipelines

Some interesting longitudinal results on female persistence through graduate school in STEM. Post-PhD there could still be a problem, but apparently this varies strongly by discipline. These results suggest that, overall, it is undergraduate representation that will determine the future gender ratio of the STEM professoriate.
The bachelor’s to Ph.D. STEM pipeline no longer leaks more women than men: a 30-year analysis

(Front. Psychol., 17 February 2015 | doi: 10.3389/fpsyg.2015.00037)

D. Miller and J. Wai

For decades, research and public discourse about gender and science have often assumed that women are more likely than men to “leak” from the science pipeline at multiple points after entering college. We used retrospective longitudinal methods to investigate how accurately this “leaky pipeline” metaphor has described the bachelor’s to Ph.D. transition in science, technology, engineering, and mathematics (STEM) fields in the U.S. since the 1970s. Among STEM bachelor’s degree earners in the 1970s and 1980s, women were less likely than men to later earn a STEM Ph.D. However, this gender difference closed in the 1990s. Qualitatively similar trends were found across STEM disciplines. The leaky pipeline metaphor therefore partially explains historical gender differences in the U.S., but no longer describes current gender differences in the bachelor’s to Ph.D. transition in STEM. The results help constrain theories about women’s underrepresentation in STEM. Overall, these results point to the need to understand gender differences at the bachelor’s level and below to understand women’s representation in STEM at the Ph.D. level and above. Consistent with trends at the bachelor’s level, women’s representation at the Ph.D. level has been recently declining for the first time in over 40 years.

... However, as reviewed earlier, the post-Ph.D. academic pipeline leaks more women than men only in some STEM fields such as life science, but surprisingly not the more male-dominated fields of physical science and engineering (Ceci et al., 2014). ...

Conclusion: Overall, these results and supporting literature point to the need to understand gender differences at the bachelor’s level and below to understand women’s representation in STEM at the Ph.D. level and above. Women’s representation in computer science, engineering, and physical science (pSTEM) fields has been decreasing at the bachelor’s level during the past decade. Our analyses indicate that women’s representation at the Ph.D. level is starting to follow suit by declining for the first time in over 40 years (Figure 2). This recent decline may also cause women’s gains at the assistant professor level and beyond to also slow down or reverse in the next few years. Fortunately, however, pathways for entering STEM are considerably diverse at the bachelor’s level and below. For instance, our prior research indicates that undergraduates who join STEM from a non-STEM field can substantially help the U.S. meet needs for more well-trained STEM graduates (Miller et al., under review). Addressing gender differences at the bachelor’s level could have potent effects at the Ph.D. level, especially now that women and men are equally likely to later earn STEM Ph.D.’s after the bachelor’s.

Tuesday, February 17, 2015

CBO Against Piketty?


This report, using CBO (Congressional Budget Office) data, claims that income inequality did not widen during the Great Recession (the table above compares 2007 to 2011). After taxes and government transfer payments (entitlements, etc.) are taken into account, one finds that low-income groups were cushioned, while high earners saw significant declines in income.
... The CBO on the other hand defines income broadly as resources consumed by households, whether through cash payments or services rendered without payments. [2] Its definition of market income includes employer payments on workers (Social Security, Medicare, medical insurance, and retirement) and capital gains. On top of market income, CBO next adds all public cash assistance and in-kind benefits from social insurance and government assistance programs to arrive at “before-tax income.” Finally, the CBO’s last step is to subtract all federal taxes including personal income taxes, Social Security payments, excise taxes and corporate income taxes to arrive at “after-tax income” or what other government series call disposable income. [3] ...


CONCLUSION: It is now widely held that inequality increased dramatically in the decades prior to 2007. For example, Piketty and Saez’s research shows that 91 percent of economic growth between 1979 and 2007 went to the wealthiest 10 percent. But when comparing the CBO’s more comprehensive definition of income (including employer benefits, Social Security, Medicare, and other government benefits), 47 percent of growth of after-tax income went to the richest 10 percent. [14]

Consequently, both methodologies reveal a real income inequality problem. [15] But this paper once again shows that the IRS data give a misleading impression of what has happened with income inequality (not growing as fast in the period from 1979 to 2007 and decreasing, not increasing in the years after 2007). While many on the left were unhappy with the first ITIF paper and my earlier work criticizing Piketty and Saez, it is less clear how they will react to this paper. [16] On the one hand, the paper argues that inequality doesn’t always rise and that it didn’t since the onset of the Great Recession. On the other hand, it argues for the efficacy of robust income-support and growth policies and ultimately provides a refutation to a critique that Republicans have made of President Obama.

Almost no increase in US Gini coefficient since 1979 once transfer payments are accounted for:



Is it possible that nameless government employees at CBO have done a better job on this problem than the acclaimed economists Piketty and Saez? (What kind of serious statistical researcher uses Excel?!?)
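For readers who haven't seen it defined, the Gini coefficient mentioned above is just a summary statistic of dispersion in the income distribution. A minimal sketch of the computation, with made-up household incomes (illustrative only, not the CBO's methodology):

```python
# Minimal sketch of a Gini coefficient computation (illustrative; not the
# CBO's actual methodology).  G is the mean absolute difference between all
# pairs of incomes, normalized by twice the mean: 0 means perfect equality,
# values near 1 mean extreme concentration.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

market_income = [10, 20, 30, 50, 200]       # made-up household incomes
after_transfers = [25, 30, 35, 50, 160]     # made-up post-tax-and-transfer incomes

print(f"market-income Gini:   {gini(market_income):.3f}")
print(f"after-transfers Gini: {gini(after_transfers):.3f}  (lower, as in the CBO series)")
```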

See also Piketty on Capital and Piketty's Capital.

Saturday, February 14, 2015

Hierarchies in faculty hiring networks

Short summary: top academic departments produce a disproportionate fraction of all faculty. The paper below finds that only 9 to 14% of faculty are placed at institutions more prestigious than their doctorate ... the top 10 units produce 1.6 to 3.0 times more faculty than the second 10, and 2.3 to 5.6 times more than the third 10.

For related data in theoretical high energy physics, string theory and cosmology, see Survivor: Theoretical Physics.
Systematic inequality and hierarchy in faculty hiring networks

Science Advances 01 Feb 2015: Vol. 1 no. 1 e1400005 DOI: 10.1126/sciadv.1400005

The faculty job market plays a fundamental role in shaping research priorities, educational outcomes, and career trajectories among scientists and institutions. However, a quantitative understanding of faculty hiring as a system is lacking. Using a simple technique to extract the institutional prestige ranking that best explains an observed faculty hiring network—who hires whose graduates as faculty—we present and analyze comprehensive placement data on nearly 19,000 regular faculty in three disparate disciplines. Across disciplines, we find that faculty hiring follows a common and steeply hierarchical structure that reflects profound social inequality. Furthermore, doctoral prestige alone better predicts ultimate placement than a U.S. News & World Report rank, women generally place worse than men, and increased institutional prestige leads to increased faculty production, better faculty placement, and a more influential position within the discipline. These results advance our ability to quantify the influence of prestige in academia and shed new light on the academic system.
From the article:
... Across the sampled disciplines, we find that faculty production (number of faculty placed) is highly skewed, with only 25% of institutions producing 71 to 86% of all tenure-track faculty ...

... Strong inequality holds even among the top faculty producers: the top 10 units produce 1.6 to 3.0 times more faculty than the second 10, and 2.3 to 5.6 times more than the third 10.

[ Figures at bottom show top 60 ranked departments according to algorithm defined below ]

... Within faculty hiring networks, each vertex represents an institution, and each directed edge (u,v) represents a faculty member at v who received his or her doctorate from u. A prestige hierarchy is then a ranking π of vertices, where π_u = 1 is the highest-ranked vertex. The hierarchy’s strength is given by ρ, the fraction of edges that point downward, that is, π_u ≤ π_v, maximized over all rankings (14). Equivalently, ρ is the rate at which faculty place no better in the hierarchy than their doctorate. When ρ = 1/2, faculty move up or down the hierarchy at equal rates, regardless of where they originate, whereas ρ = 1 indicates a perfect social hierarchy.

Both the inferred hierarchy π and its strength ρ are of interest. For large networks, there are typically many equally plausible rankings with the maximum ρ (15). To extract a consensus ranking, we sample optimal rankings by repeatedly choosing a random pair of vertices and swapping their ranks, if the resulting ρ is no smaller than for the current ranking. We then combine the sampled rankings with maximal ρ into a single prestige hierarchy by assigning each institution u a score equal to its average rank within the sampled set, and the order of these scores gives the consensus ranking (see the Supplementary Materials). The distribution of ranks within this set for some u provides a natural measure of rank uncertainty.

Across disciplines, we find steep prestige hierarchies, in which only 9 to 14% of faculty are placed at institutions more prestigious than their doctorate (ρ = 0.86 to 0.91). Furthermore, the extracted hierarchies are 19 to 33% stronger than expected from the observed inequality in faculty production rates alone (Monte Carlo, P < 10^−5; see Supplementary Materials), indicating a specific and significant preference for hiring faculty with prestigious doctorates.
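Here is a minimal sketch of the two quantities described above (my own toy illustration, not the authors' code): given hiring edges (doctorate institution, hiring institution) and a candidate ranking, ρ is the fraction of edges that point down the hierarchy, and a local-swap search keeps random rank swaps that don't decrease ρ.

```python
# Minimal sketch of the prestige-hierarchy computation described above
# (illustrative only, not the authors' code).  Each edge (u, v) is a person
# with a doctorate from u hired as faculty at v.  rho is the fraction of
# edges pointing "down" the ranking (rank[u] <= rank[v], index 0 = top).

import random

def rho(ranking, edges):
    rank = {inst: i for i, inst in enumerate(ranking)}
    return sum(rank[u] <= rank[v] for u, v in edges) / len(edges)

def local_search(institutions, edges, steps=10_000, seed=0):
    """Randomly swap pairs of ranks, keeping swaps that do not lower rho."""
    rng = random.Random(seed)
    ranking = list(institutions)
    best = rho(ranking, edges)
    for _ in range(steps):
        i, j = rng.randrange(len(ranking)), rng.randrange(len(ranking))
        ranking[i], ranking[j] = ranking[j], ranking[i]
        r = rho(ranking, edges)
        if r >= best:
            best = r
        else:
            ranking[i], ranking[j] = ranking[j], ranking[i]   # undo the swap
    return ranking, best

# Toy network: A's graduates dominate placements at B and C.
edges = [("A", "B"), ("A", "C"), ("A", "B"), ("B", "C"), ("C", "A")]
ranking, strength = local_search(["B", "C", "A"], edges)
print(ranking, strength)   # expect A ranked first, rho = 0.8
```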




Wednesday, February 11, 2015

Perils of Prediction

Highly recommended podcast: Tim Harford (FT) at the LSE. Among the topics covered are Keynes' and Irving Fisher's performance as investors, and Philip Tetlock's IARPA-sponsored Good Judgment Project, meant to evaluate expert prediction of complex events. Project researchers (psychologists) find that "actively open-minded thinkers" (those who are willing to learn from people who disagree with them) perform best. Unfortunately there are no real "super-predictors" -- just some who are better than others, and have better calibration (accurate confidence estimates).


A Brief History of Humankind

I wonder whether Yuval Harari is related to the physicist Haim Harari.
Yuval Noah Harari discusses his new book, Sapiens: A Brief History of Humankind, which explores the ways in which biology and history have defined us and enhanced our understanding of what it means to be human. One hundred thousand years ago, at least six different species of humans inhabited Earth. Yet today there is only one—homo sapiens. What happened to the others? And what may happen to us?





See also his Coursera MOOC: A Brief History of Humankind.
About 2 million years ago our human ancestors were insignificant animals living in a corner of Africa. Their impact on the world was no greater than that of gorillas, zebras, or chickens. Today humans are spread all over the world, and they are the most important animal around. The very future of life on Earth depends on the ideas and behavior of our species.

This course will explain how we humans have conquered planet Earth, and how we have changed our environment, our societies, and our own bodies and minds. The aim of the course is to give students a brief but complete overview of history, from the Stone Age to the age of capitalism and genetic engineering. The course invites us to question the basic narratives of our world. Its conclusions are enlightening and at times provocative. For example:

· We rule the world because we are the only animal that can believe in things that exist purely in our own imagination, such as gods, states, money and human rights.

· Humans are ecological serial killers – even with stone-age tools, our ancestors wiped out half the planet's large terrestrial mammals well before the advent of agriculture.

· The Agricultural Revolution was history’s biggest fraud – wheat domesticated Sapiens rather than the other way around.

· Money is the most universal and pluralistic system of mutual trust ever devised. Money is the only thing everyone trusts.

· Empire is the most successful political system humans have invented, and our present era of anti-imperial sentiment is probably a short-lived aberration.

· Capitalism is a religion rather than just an economic theory – and it is the most successful religion to date.

· The treatment of animals in modern agriculture may turn out to be the worst crime in history.

· We are far more powerful than our ancestors, but we aren’t much happier.

· Humans will soon disappear. With the help of novel technologies, within a few centuries or even decades, Humans will upgrade themselves into completely different beings, enjoying godlike qualities and abilities. History began when humans invented gods – and will end when humans become gods.

Monday, February 09, 2015

Multiallelic copy number variation


These new results probe surprisingly large variation in copy number (duplicated genomic segments) and its impact on gene expression. Earlier posts involving CNVs.
Large multiallelic copy number variations in humans
Nature Genetics (2015) doi:10.1038/ng.3200

Thousands of genomic segments appear to be present in widely varying copy numbers in different human genomes. We developed ways to use increasingly abundant whole-genome sequence data to identify the copy numbers, alleles and haplotypes present at most large multiallelic CNVs (mCNVs). We analyzed 849 genomes sequenced by the 1000 Genomes Project to identify most large (>5-kb) mCNVs, including 3878 duplications, of which 1356 appear to have 3 or more segregating alleles. We find that mCNVs give rise to most human variation in gene dosage—seven times the combined contribution of deletions and biallelic duplications— and that this variation in gene dosage generates abundant variation in gene expression. We describe ‘runaway duplication haplotypes’ in which genes, including HPR and ORM1, have mutated to high copy number on specific haplotypes. We also describe partially successful initial strategies for analyzing mCNVs via imputation and provide an initial data resource to support such analyses.

Thursday, February 05, 2015

More GWAS hits for cognitive ability

More genome-wide significant associations between individual SNPs and cognitive ability from a sample of 50k individuals. See also First GWAS hits for cognitive ability.
Genetic contributions to variation in general cognitive function: a meta-analysis of genome-wide association studies in the CHARGE consortium (N=53 949)

Nature Molecular Psychiatry advance online publication 3 February 2015 doi: 10.1038/mp.2014.188

General cognitive function is substantially heritable across the human life course from adolescence to old age. We investigated the genetic contribution to variation in this important, health- and well-being-related trait in middle-aged and older adults. We conducted a meta-analysis of genome-wide association studies of 31 cohorts (N=53 949) in which the participants had undertaken multiple, diverse cognitive tests. A general cognitive function phenotype was tested for, and created in each cohort by principal component analysis. We report 13 genome-wide significant single-nucleotide polymorphism (SNP) associations in three genomic regions, 6q16.1, 14q12 and 19q13.32 (best SNP and closest gene, respectively: rs10457441, P=3.93 × 10^−9, MIR2113; rs17522122, P=2.55 × 10^−8, AKAP6; rs10119, P=5.67 × 10^−9, APOE/TOMM40). We report one gene-based significant association with the HMGN1 gene located on chromosome 21 (P=1 × 10^−6). These genes have previously been associated with neuropsychiatric phenotypes. Meta-analysis results are consistent with a polygenic model of inheritance. To estimate SNP-based heritability, the genome-wide complex trait analysis procedure was applied to two large cohorts, the Atherosclerosis Risk in Communities Study (N=6617) and the Health and Retirement Study (N=5976). The proportion of phenotypic variation accounted for by all genotyped common SNPs was 29% (s.e.=5%) and 28% (s.e.=7%), respectively. Using polygenic prediction analysis, ~1.2% of the variance in general cognitive function was predicted in the Generation Scotland cohort (N=5487; P=1.5 × 10^−17). In hypothesis-driven tests, there was significant association between general cognitive function and four genes previously associated with Alzheimer’s disease: TOMM40, APOE, ABCG1 and MEF2C.

See also Five years of GWAS discovery, and compare to progress with height SNPs.


For an overview of this subject see On the genetic architecture of intelligence and other quantitative traits.

Monday, February 02, 2015

Slate Star Codex on ability, effort, and achievement

Scott Alexander writes about ability, effort, and achievement at his blog Slate Star Codex. Like many of his excellent posts, this one has received hundreds of thoughtful comments. (Sequel: Part 2 is up.)

Scott has special insight into this question as a consequence of having a musically talented brother (who quickly surpassed Scott to become a piano prodigy and professional musician) and of having struggled with math despite being very bright. Experiences like these make clear the division between talent and effort, but they're not always easy to share with others.
Slate Star Codex: ... There are frequent political debates in which conservatives (or straw conservatives) argue that financial success is the result of hard work, so poor people are just too lazy to get out of poverty. Then a liberal (or straw liberal) protests that hard work has nothing to do with it, success is determined by accidents of birth like who your parents are and what your skin color is et cetera, so the poor are blameless in their own predicament.

I’m oversimplifying things, but again the compassionate/sympathetic/progressive side of the debate – and the side endorsed by many of the poor themselves – is supposed to be that success is due to accidents of birth, and the less compassionate side is that success depends on hard work and perseverance and grit and willpower.

The obvious pattern is that attributing outcomes to things like genes, biology, and accidents of birth is kind and sympathetic. Attributing them to who works harder and who’s “really trying” can stigmatize people who end up with bad outcomes and is generally viewed as Not A Nice Thing To Do.

And the weird thing, the thing I’ve never understood, is that intellectual achievement is the one domain that breaks this pattern.

Here it’s would-be hard-headed conservatives arguing that intellectual greatness comes from genetics and the accidents of birth and demanding we “accept” this “unpleasant truth”.

And it’s would-be compassionate progressives who are insisting that no, it depends on who works harder, claiming anybody can be brilliant if they really try, warning us not to “stigmatize” the less intelligent as “genetically inferior”.

I can come up with a few explanations for the sudden switch, but none of them are very principled and none of them, to me, seem to break the fundamental symmetry of the situation. ...

... I tried to practice piano as hard as he did. I really tried. But every moment was a struggle. I could keep it up for a while, and then we’d go on vacation, and there’d be no piano easily available, and I would be breathing a sigh of relief at having a ready-made excuse, and he’d be heading off to look for a piano somewhere to practice on. Meanwhile, I am writing this post in short breaks between running around hospital corridors responding to psychiatric emergencies, and there’s probably someone very impressed with that, someone saying “But you had such a great excuse to get out of your writing practice!”

I dunno. But I don’t think of myself as working hard at any of the things I am good at, in the sense of “exerting vast willpower to force myself kicking and screaming to do them”. It’s possible I do work hard, and that an outside observer would accuse me of eliding how hard I work, but it’s not a conscious elision and I don’t feel that way from the inside. ...
Pursuing this topic to the end leads to the difficult question of whether predispositions to hard work, conscientiousness, ambition, etc. are themselves heritable. Of course, the answer is yes, at least partly. Free Will? :-)

Sunday, February 01, 2015

Evo Psych for PUAs

Evolutionary psychologist Geoffrey Miller is interviewed on this (for lack of a better description) PUA podcast. See also The new dating game.
Ep. #67 The State of Evolutionary Psychology and the Mating Mind with Geoffrey Miller

[Geoffrey Miller] Yeah I'd say about seventy percent of evolutionary psychology is about mating, attraction, physical attractiveness, mental attractiveness, potential conflicts between men and women, and how those play out. But then other evolutionary psych people study all kinds of other things, like the learning and memory that Wikipedia mentioned. ...

[Geoffrey Miller] Well one thing to note is it's a pretty new field. I was literally at Stanford University when the field got invented by some of the leading people, who kind of had a joint retreat there at a place called The Center for Advanced Study in Behavioral Sciences. 1989, 1990.

And they actually strategized about, "How do we create this new field? What should we call it? How do we launch it? What kind of scientific societies and journals do we establish?"

So the field's only twenty-five years old. It started out pretty strongly though, because the people who went into it were brilliant, really world-class geniuses, and that's one of the things that attracted me to the field when I was a grad student.

Since then, the quality of the research has gotten way better. It's a very progressive field in the sense that we actually build on each other's insights. Other areas of psychology, everybody wants to coin and patent their own little term, their own, almost, trademarked little theory, and try to ignore a lot of what other people do.

We tend to be in more of the tradition of mainstream biology, where you actually respect what other people have done before, and try to build on it. So I think we're really good at doing that.

The other thing to remember, apart from it being a young field, is it's a pretty small field. There's fewer than a thousand people in the world actively doing evolutionary psych research, compared to fifty thousand people doing neuroscience research, or probably a hundred thousand scientists doing cancer research.

So it's not a huge field. There's probably more science journalists trying to cover evolutionary psychology than there are evolutionary psych researchers. ...

[Geoffrey Miller] Well I'll tell you what areas of science really impress me at the moment, in terms of being super high-quality and sophisticated. One is behavior genetics. Twin studies. So I did a sabbatical in Brisbane, Australia with one of the big twin research groups, back in 2007.

And they were just making this shift. They had tracked thirty thousand pairs of twins in Australia for the previous twenty years, and given them literally hundreds of surveys, and measurements, and experiments over the years. And they were just starting to collect DNA from all these twin pairs.

And what you have now is big international networks of people working in behavior genetics, sharing their data, publishing papers with fifty or a hundred scientists on the paper, working together and being able to identify, "Hey, here's where the genes for, like, how sexually promiscuous you are overlap with the genes for this personality trait, or the genes for this physical health trait."

And it's amazingly sophisticated. It's powerful. The datasets are huge. The problem is a lot of that stuff is very politically incorrect, and it makes people uncomfortable. And people are like, "You can't say that propensities for murdering people are genetic. Or, propensities for having a lot of musical creativity are genetic," people don't want to hear that. So there's a big kind of ideological problem there. But honestly that's where some of the best research is being done in the behavioral sciences. ...

[Geoffrey Miller] Well one big thing is I think a lot of the pickup artist guys who quote The Mating Mind book, or refer to evolutionary psychology, get all obsessed with status, and they talk about alpha males, and beta males, and gamma males, and omega males, and whatever. Status, status, status. And that's fine. Status is important, no doubt.

But the idea that you can simply categorize human males into, "Oh, you're an alpha. You're a beta." That works for gorillas. It works for orangutans, where the different statuses are actually associated with different body sizes. Like an alpha orangutan is literally twice as heavy as a beta orangutan, and has huge cheek pads, and the beta doesn't. And they have completely different mating strategies.

But for humans, status is way more complicated. It's fluid, it depends on context. ...

Wednesday, January 28, 2015

Crypto-currencies, Bitcoin and Blockchain

Photos from two meetings I attended last week.

Some general comments on crypto-currencies:

1. Bitcoin doesn't really solve any payment problems, unless of course you are a paranoid libertarian who hates "fiat" currencies. But why should you trust the Bitcoin Foundation any more than you trust a central bank? (See Bitcoin dynamics.)

2. Most potential users just want something that works and don't care at all about crypto magic.

3. The high volatility of Bitcoin makes it unattractive as a store of value, except for speculators looking for price appreciation. It's possible that confidence in and the liquidity of Bitcoin (or another crypto coin) will rise to the point that this problem is eliminated. At that point things will get much more interesting. However, it's not clear what the timescale is for this (but see point 7 below).

4. Blockchain processing is extremely inefficient and has a high cost overhead.

5. Ethereum, with its Turing-complete blockchain operations, does make possible low-cost derivative contracts, insurance, etc. But I have yet to hear a convincing case for a killer application. Gambling is an obvious use, but the US government has shown a strong inclination to pursue those involved with illegal online gambling.

6. Innovation in payment technologies is long overdue, but because of positive network effects it will probably be a big player like Apple or Google that finally changes the landscape.

7. One interesting scenario is for a country (Singapore? Denmark?) or large financial entity (Goldman, JPM, Visa) to issue its own crypto currency, managing the blockchain itself but leaving it in the public domain so that third parties (including regulators) can verify transactions. Confidence in this kind of "Institutional Coin" (IC) would be high from the beginning. An IC with Ethereum-like capabilities could revolutionize the financial industry. In place of an opaque web of counterparty relationships leading to systemic risk, the IC blockchain would be easily audited by machine. Regulators would require that the IC authority know its customers, so pseudonymity would only be partial.
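To illustrate the auditability point in (7): the essential mechanism is just a hash-linked ledger that any third party can re-verify. A minimal sketch (hypothetical record format, not any real coin's protocol):

```python
# Minimal sketch of a hash-linked ledger (hypothetical record format, not any
# real coin's protocol).  Each block commits to its transactions and to the
# previous block's hash, so an auditor can recompute the chain and detect any
# retroactive tampering without trusting the issuing institution.

import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "transactions": transactions})
    return chain

def audit(chain):
    """Return True iff every block correctly references the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 10}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 4}])
print(audit(chain))                                  # True

chain[0]["transactions"][0]["amount"] = 1_000_000    # issuer tries to rewrite history
print(audit(chain))                                  # False: tampering is detectable
```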








Here's a good podcast on crypto-currencies for non-experts.
Wall Street journalists Paul Vigna and Michael J. Casey talk about cybermoney in The Age of Cryptocurrency: How Bitcoin and Digital Money are Challenging the Global Economic Order. Vigna and Casey argue that digital currency is poised to launch a revolution that could reinvent traditional financial and social structures, and bring the world's billions of "unbanked" individuals into a new global economy.

Monday, January 26, 2015

Dept. of Physicists Can Do Stuff: Harold Brown


Ashton Carter won't be the first physicist to lead the Department of Defense. Harold Brown was President of Caltech and Secretary of Defense under Jimmy Carter.

Caltech Oral History interview:
BROWN: You asked me first for a brief review of my career before coming to Caltech. I grew up in New York [City] and went to Columbia University for my undergraduate and graduate degrees, all of them in physics. As a result of acceleration during the war years, I was not quite eighteen when I got my bachelor’s degree in 1945. But then I was somewhat tardy in getting graduate degrees. It took me about four years more to get my PhD. That was in nuclear physics. The area I actually worked on was beta-ray spectroscopy.

... Early in my graduate career, I saw that some of the other graduate students were going to do better in pure science than I, maybe because they were smarter or better researchers, but at least as much, it seemed to me, because they were able to focus very, very strongly on a narrow piece of research that they were doing. And it’s often been said that Nobel Prizes are won by people partly through brilliance but largely through a combination of an ability to focus very narrowly and to have an unhappy family life. [Laughter] Those things may go together to produce the intensity that, in addition to brilliance, creates Nobel Prize winners. I concluded that I was not going to be able to focus that strongly because I had too many other interests. That drew me away from theoretical and toward experimental physics first, and then toward applied science, and then toward managing. So I think it may have been a combination of that sort.

... I found Christy’s attitude very interesting. Christy, of course, is a theoretical physicist. [See The Christy Gadget.] But when I asked him ... he said that he had always found that there were applied problems that had as much inherent intellectual interest [as those relating more to “pure” science]. And if they met that criterion, he was perfectly willing, and felt others should be willing, to look to such applied problems where they had important impacts, economically, environmentally, or whatever. There were enough other faculty people who felt the same way, so we were able to do some of these things.

... There was, I think, a severe split among the faculty on this matter. ... A good many of the science and engineering faculty regard social scientists with much more hostility than they regard humanists—partly because they feel that the word “science” in social science is a lie and see it as an attempt to appropriate some of the prestige that correctly applies to the physical, biological, and mathematical sciences and even to technology and engineering. A good many of the science faculty say that whereas the humanities have a distinct different dimension to bring to bear, social science is pseudoscience and that any relationship to “science” is nonsense. That attitude created some interesting faculty meetings. But in retrospect, I can’t say that that prevented the institute from going ahead and making social-science appointments, or that the social scientists were driven away by this attitude. The best and most prominent of them have quite as much self-esteem as many scientists and engineers, although not as much as some of Caltech’s scientists.

... The economists, in particular, were perhaps at the cutting edge of this dispute. In many ways, they are the most prestigious of social scientists, because they purport to be able to predict or influence the real world more nearly the way scientists and engineers do, than can the sociologists or the anthropologists. At Caltech, many of those who were skeptical about them — and in this I tended to share their skepticism to some degree — said, “The economists who have the most academic prestige and win Nobel Prizes are the ones who are most analytical and pretend to be most like the scientists.” In fact, of course, in order to be analytical, they have to assume away most of the driving forces in real economic behavior. By creating the ideal economic man, they eliminate all the real psychology, and that’s what determines economic behavior.

... One thing I learned from Caltech—it wasn’t the first time I learned it, but I learned it perhaps in intensified form—is that an institution depends on a number of very high-quality people. That number can be small or large. I don’t think I had been at a place before that had quite such a concentration of intellectual power in a not-so-narrow—but not universal, either— area of human ability. It reinforced in me the belief that people who are very good at what they do are likely to be more understanding of other people’s talents than people who aren’t very good at what they do and who therefore try to do other things that they’re not very good at either.
Another interview from the archives:
LEVIN: When you came to Caltech, what did you expect from the Caltech undergraduate? What did you expect him to be, aside from his academic capability? Have you been surprised or disappointed in any way?

BROWN: I had been told about Caltech students' practical jokes, and I have seen some of those come off pretty well. I had been told that quite aside from their academic proficiency, they were also very, very intelligent - which is not the same thing. They are very good at spotting flaws in arguments, any arguments, and they are not easily put off by authoritative but incorrect statements. And I have been quite satisfied, pleased, and impressed with what I have seen. I would add that Caltech undergraduates have turned out to be somewhat less self-assured and socially at ease than I had expected. But that has a certain charm; it's not a total loss.

Astrophysical Repulsion from Dark Energy

The manifestation of dark energy on cosmological scales is well known: gravitational repulsion, which leads to the accelerating expansion of the universe. Perhaps surprisingly, there are potentially observable effects on galactic length scales as well.
The Dark Force: Astrophysical Repulsion from Dark Energy (http://arxiv.org/abs/1501.05952)

Chiu Man Ho, Stephen D. H. Hsu

Dark energy (i.e., a cosmological constant) leads, in the Newtonian approximation, to a repulsive force which grows linearly with distance. We discuss possible astrophysical effects of this "dark" force. For example, the dark force overcomes the gravitational attraction from an object (e.g., dwarf galaxy) of mass $10^7 M_\odot$ at a distance of ~23 kpc. It seems possible that observable velocities of bound satellites (rotation curves) could be significantly affected, and therefore used to measure the dark energy density.
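A quick back-of-the-envelope check of the ~23 kpc figure (my own rough numbers, with standard assumed values for H0 and ΩΛ; see the paper for the proper treatment): in the Newtonian limit the cosmological constant contributes an outward acceleration ~ΩΛ H0² r, and setting this equal to GM/r² gives the crossover radius.

```python
# Back-of-the-envelope check of the ~23 kpc figure quoted in the abstract
# (my own rough numbers, not the paper's calculation).  In the Newtonian
# limit the cosmological constant gives an outward acceleration
#   a_dark ~ Omega_L * H0^2 * r,
# while gravity gives G*M / r^2; equating them gives the crossover radius
#   r_c = (G * M / (Omega_L * H0^2))**(1/3).

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_sun = 1.989e30                   # kg
kpc = 3.086e19                     # m
H0 = 67.7 * 1.0e3 / 3.086e22       # ~67.7 km/s/Mpc in s^-1 (assumed value)
Omega_L = 0.69                     # dark energy density fraction (assumed value)

M = 1.0e7 * M_sun                  # dwarf-galaxy mass from the abstract
r_c = (G * M / (Omega_L * H0 ** 2)) ** (1.0 / 3.0)

print(f"crossover radius ~ {r_c / kpc:.0f} kpc")   # roughly 24 kpc, close to the quoted ~23
```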

Wednesday, January 21, 2015

Seasons and Veritas

Harvard graduates explain why we have seasons. If only their understanding matched their confidence.



See also Why is it dark at night?, Inside HBS: "kill, f^^k or marry", Frauds!, and
High V, Low M: ... high verbal ability is useful for appearing to be smart, or for winning arguments and impressing other people, but it's really high math ability that is useful for discovering things about the world -- that is, discovering truth or reasoning rigorously.

Tuesday, January 20, 2015

Venture capital in the 1980s


Via Dominic Cummings (@odysseanproject), this long discussion of the history of venture capital, which emphasizes the now largely forgotten 1980s. VC in most parts of the developed world, even large parts of the US, resembles the distant past of the above chart. There is a big gap between Silicon Valley and the rest.
Heat Death: Venture Capital in the 1980s

... Risk is uncertainty about the future. High technical risk means not knowing if a technology will work. High market risk means not knowing if there will be a market for your product. These are the primary risks that the VC industry as a whole contemplates. (There are other risks extrinsic to individual companies, like regulatory risk, but these are less frequent.)

Each type of risk has a different effect on VC returns. Technical risk is horrible for returns, so VCs do not take technical risk. There are a handful of examples of high technical risk companies that had great returns–Genentech [43], for example–but they are few [44]. Today, VCs wait until there is a working prototype before they fund, but successful VCs have always waited until the technical risk was mitigated. Apple Computer, for example, did not have technical risk: the technology worked before the company was funded.

Market risk, on the other hand, is directly correlated to VC returns. When Apple was funded no one had any way of knowing how many people would buy a personal computer; the ultimate size of the market was analytically unknowable. DEC, Intel, Google, etc. all went into markets that they helped create. High market risk is associated with the best VC investments of all time. In the late ’70s/early ’80s and again in the mid to late ’90s VCs were comfortable funding companies with mind-boggling market risk, and they got amazing returns in exchange. In the mid to late ’80s they were scared and funded companies with low market risk instead, and returns were horrible.

Today is like the 1980s. There are a plethora of me-too companies, companies with a new angle on a well-understood market, and companies founded with the hopes of being acquired before they need to bring on many customers. VCs are insisting on market validation before investing, and are putting money into sectors that have already seen big exits (a sign of a market that has already emerged.)

Saying VCs used to take high technical risk and now take high market risk is both an overly optimistic view of the past–the mythical golden age of heroic VCs championing the development of new technologies–and an overly optimistic view of the present–gutsy VCs funding radical innovations that create entirely new markets. Neither of these things is true. VCs have never funded technical risk and they are not now funding market risk [45]. The VC community is purposely avoiding risk because we think we can make good returns without taking it. The lesson of the 1980s is that no matter how appealing this fantasy is, it’s still a fantasy.

Monday, January 19, 2015

16 years of training

Joe Rogan (UFC commentator) receives his BJJ black belt from Eddie Bravo.




Now go train jiujitsu.




Sunday, January 18, 2015

Measuring college learning outcomes: psychometry 101

Pressure is growing for outcomes testing in higher education. Hundreds of schools already allow graduating seniors to take the CLA+ (Collegiate Learning Assessment Plus) as evidence of important job skills. I doubt that the CLA+ adds much information concerning an applicant's abilities beyond what can be obtained from existing cognitive tests such as the SAT, ACT, or GRE. But those tests have plenty of enemies, creating a business opportunity for shiny new assessments. The results covered below will contain no surprises to anyone modestly familiar with modern psychometrics.
Forbes: More people than ever are asking the question: is college worth it? Take a look at the numbers: 81 percent of respondents in a 2008 survey of the general public agreed that a college education was a good investment, but that number was down to 57 percent in 2012. A recent Wells Fargo study reported that one-third of Millennials regret going to college, and instead say they would have been better off working and earning money. This rhetoric is reflected in the reality of declining enrollment: one survey of colleges showed that enrollment for spring 2013 was down 2.3 percent from spring 2012, a trend that has held for consecutive years.

Meanwhile, the wave of keeping colleges accountable for their outcomes continues to crest, even from the left. A recent Brookings Institution study concluded, “While the average return to obtaining a college degree is clearly positive, we emphasize that it is not universally so.” President Obama, a major recipient of plaudits and campaign dollars from the academic left, has called for a government-authored set of rankings for American colleges and universities that rewards performance and punishes failure: “It is time to stop subsidizing schools that are not producing good results, and reward schools that deliver for American students and our future,” he said.

President Obama’s impulse to define and reward value in higher education was correct, but a government-rankings system is not a sufficient corrective for the enormity of the problem. There is no panacea for reforming higher education, but the CLA+ exam has potential to be a very useful step. ...
More from the Wall Street Journal.
WSJ: A survey of business owners to be released next week by the Association of American Colleges and Universities also found that nine out of 10 employers judge recent college graduates as poorly prepared for the work force in such areas as critical thinking, communication and problem solving.

“Employers are saying I don’t care about all the knowledge you learned because it’s going to be out of date two minutes after you graduate ... I care about whether you can continue to learn over time and solve complex problems,” said Debra Humphreys, vice president for policy and public engagement at AAC&U, which represents more than 1,300 schools.

The CLA+ [Collegiate Learning Assessment Plus] is graded on a scale of 400 to 1600. In the fall of 2013, freshmen averaged a score of 1039, and graduating seniors averaged 1128, a gain of 89 points.

CAE says that improvement is evidence of the worth of a degree. “Colleges and universities are contributing considerably to the development of key skills that can make graduates stand out in a competitive labor market,” the report said.

Mr. Arum was skeptical of the advantages accrued. Because the test was administered over one academic year, it was taken by two groups of people: a total of 18,178 freshmen and 13,474 seniors. That mismatch suggested a selection bias to Mr. Arum.

“Who knows how many dropped out? They were probably the weaker students,” he said. [ THIS LAST POINT IS ADDRESSED BELOW. SCORE GAINS AFTER ADJUSTMENT FOR THIS EFFECT ARE MODEST -- TYPICALLY LESS THAN 0.5 SD; SOMETIMES CONSISTENT WITH ZERO. ]
What exactly are these college learning assessments? They measure general skills that employers deem important, but not narrow subject-matter expertise -- some of which is economically valuable (e.g., C++ coding) and some of which is much less so (e.g., detailed knowledge of the Reformation). Of course, narrow job-essential knowledge can be tested separately.
What Does CLA+ Measure?

CLA+ [Collegiate Learning Assessment Plus] is designed specifically to measure critical-thinking and written-communication skills that other assessments cannot. CAE has found that these are the skills that most accurately attest to a student’s readiness to enter the workforce. In the era of Google, the ability to recall facts and data is not as crucial as it once was. As our technology evolves to meet certain needs of the workplace, so too must our thinking about success and career readiness. Therefore, the skills taught in higher education are changing; less emphasis is placed on content-specific knowledge, and more is placed on critical-thinking skills, such as scientific and quantitative reasoning, analysis and problem solving, and writing effectiveness and mechanics. That is why CLA+ focuses on these skills and why CAE believes employers should use this tool during recruitment efforts.

Sample.
Two important questions:

1) Are the CLA+ and related assessments measuring something other than the general cognitive ability of individuals who have had many years (K-12 plus at least some college) of education?

2) By how much does a college education improve CLA+ scores?

The study below, which involved 13 colleges (ranging from MIT, Michigan, and Minnesota to Cal State Northridge, Alabama A&M, ...), gives some hints at answers.
Test Validity Study (TVS) Report

This study examined whether commonly used measures of college-level general educational outcomes provide comparable information about student learning. Specifically, do the students and schools earning high scores on one such test also tend to earn high scores on other tests designed to assess the same or different skills? And, are the strengths of these relationships related to the particular tests used, the skills (or “constructs”) these tests are designed to measure (e.g., critical thinking, mathematics, or writing), the format they use to assess these skills (multiple-choice or constructed-response), or the tests’ publishers? We also investigated whether the difference in mean scores between freshmen and seniors was larger on some tests than on others. Finally, we estimated the reliability of the school mean scores on each measure to assess the confidence that can be placed in the test results.

Effect sizes are modest. The result "d+" in the table below is the average increase in score between freshmen and seniors tested, in units of standard deviations. An individual's score as a freshman is probably a very good predictor of their score as a senior. (To put it crudely, additional years of expensive post-secondary education do not increase cognitive ability by very much. What cognitive tests measure is fairly stable, despite the efforts of educators.)

Note, in order to correct for the problem that weaker students drop out between freshman and senior years, and hence the senior population is academically stronger, the researchers adjusted effect sizes. The adjustment used was simply the average SAT score difference (in SD units) between seniors and freshmen in each school's sample (students who survive to senior year tend to have higher SAT scores -- go figure!). In other words, to get their final results, the researchers implicitly acknowledged that these new tests are largely measuring the same general cognitive abilities as the SAT!
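For concreteness, here is how I read the effect-size bookkeeping (the numbers below are made up for illustration; only the 1039/1128 means echo the WSJ figures, and the SDs and sample sizes are hypothetical): compute the raw freshman-to-senior difference in pooled-SD units, then subtract the freshman-to-senior SAT difference expressed in the same units to correct for attrition of weaker students.

# Sketch of the effect-size logic described above (illustrative numbers only).
import numpy as np

def cohens_d(seniors, freshmen):
    # Mean difference in units of the pooled standard deviation.
    n1, n2 = len(seniors), len(freshmen)
    pooled_var = ((n1 - 1) * seniors.var(ddof=1) + (n2 - 1) * freshmen.var(ddof=1)) / (n1 + n2 - 2)
    return (seniors.mean() - freshmen.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
freshmen_cla = rng.normal(1039, 150, 2000)   # CLA+ scores; SDs are hypothetical
seniors_cla  = rng.normal(1128, 150, 1500)
freshmen_sat = rng.normal(1050, 200, 2000)   # SAT scores of the same samples (hypothetical)
seniors_sat  = rng.normal(1090, 200, 1500)   # survivors to senior year have higher SATs

d_raw = cohens_d(seniors_cla, freshmen_cla)
d_sat = cohens_d(seniors_sat, freshmen_sat)  # selection effect, in SD units
d_adj = d_raw - d_sat                        # attrition-adjusted gain
print(f"raw d+ = {d_raw:.2f}, SAT gap = {d_sat:.2f}, adjusted d+ = {d_adj:.2f}")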


Below are school-level correlations and reliabilities for various assessments, which show that cognitive constructs ("critical thinking", "mathematics", etc.) are evaluated consistently regardless of the specific test used. Hint: ACT, SAT, GRE, PISA, etc. would have worked just as well ...

The results below are also good evidence for a school-level general factor of ability = "G". The researchers don't release specific numbers, but I'd guess MIT has a much higher G than some of the lower-ranked schools, and that the value of G can be deduced just from the average SAT score of the school.
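A toy version of what such a table looks like, using entirely synthetic data (13 hypothetical schools and hypothetical tests; nothing here is from the actual study): give each school a latent ability level, generate several noisy tests per student, and the school-mean scores end up highly correlated across tests, with a first principal component -- a school-level G -- that tracks the school's mean SAT almost perfectly.

# Synthetic sketch only: hypothetical schools and tests, not the study's data.
import numpy as np

rng = np.random.default_rng(3)
n_schools, n_students, n_tests = 13, 200, 4

school_G = rng.normal(0, 1, n_schools)        # latent school-level ability
school_means = np.zeros((n_schools, n_tests)) # mean score of each school on each test
school_sat = np.zeros(n_schools)              # mean "SAT" of each school

for s in range(n_schools):
    ability = school_G[s] + rng.normal(0, 0.7, n_students)                 # students within a school
    tests = ability[:, None] + rng.normal(0, 0.5, (n_students, n_tests))   # several noisy tests of ability
    school_means[s] = tests.mean(axis=0)
    school_sat[s] = (ability + rng.normal(0, 0.5, n_students)).mean()

# School-level correlations between the different tests (all measuring the same construct)
print(np.round(np.corrcoef(school_means.T), 2))

# First principal component of the school-by-test matrix = estimated school-level G
X = school_means - school_means.mean(axis=0)
G_hat = X @ np.linalg.svd(X, full_matrices=False)[2][0]
print("corr(G_hat, mean SAT):", round(abs(np.corrcoef(G_hat, school_sat)[0, 1]), 2))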


Does the CLA have validity in predicting job and life outcomes? Again, experienced psychometricians know the answer, but stay tuned as data gradually accumulate.
Documenting Uncertain Times: Post-graduate Transitions of the Academically Adrift Cohort

Graduates who scored in the bottom quintile of the CLA were three times more likely to be unemployed than those who scored in the top quintile on the CLA (9.6 percent compared to 3.1 percent), twice as likely to be living at home (35 percent compared to 18 percent) and significantly more likely to have amassed credit card debt (51 percent compared to 37 percent).

Tuesday, January 13, 2015

Analogies between Analogies

As reported by Stan Ulam in Adventures of a Mathematician:
"A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies."  --Stefan Banach
See also Analogies between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators; esp. article 20 On the Notion of Analogy and Complexity in Some Constructive Mathematical Schemata.

I'll add my own comment:
The central problem of modern genomics is essentially cryptographic. The encryption scheme is the model relating phenotype to genotype, and the ciphertext--plaintext pairs are the genotypes and phenotypes. We will recover the schemes -- models which can predict phenotype from genotype -- once enough ciphertext and plaintext (data) are available for analysis.

We have programs (DNA code) and their outputs (organisms) to study; from this we deduce the programming language.
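A toy version of the point, with the obvious caveat that this is my illustration and not part of the analogy itself: assume (purely for the sketch) that the "encryption scheme" is a sparse additive model, simulate genotype-phenotype pairs, and check that a crude sparse-regression "cryptanalysis" recovers most of the causal variants and predicts new phenotypes once the sample is large enough. The SNP counts and effect sizes below are arbitrary.

# Toy illustration only (my sketch, not from the post): assumes a sparse additive
# genotype -> phenotype model, which is an assumption of this sketch.
import numpy as np

rng = np.random.default_rng(4)
n_snps, n_causal, n_train, n_test = 2000, 20, 1500, 500

true_effects = np.zeros(n_snps)
causal = rng.choice(n_snps, n_causal, replace=False)
true_effects[causal] = rng.normal(0, 1, n_causal)

def simulate(n):
    geno = rng.binomial(2, 0.5, (n, n_snps)).astype(float)    # 0/1/2 allele counts
    return geno, geno @ true_effects + rng.normal(0, 1.0, n)  # additive model + noise

geno_tr, pheno_tr = simulate(n_train)
geno_te, pheno_te = simulate(n_test)

# Crude sparse "cryptanalysis": keep the SNPs most correlated with the phenotype,
# then fit them jointly by least squares (a stand-in for lasso-style selection).
corr = np.array([np.corrcoef(geno_tr[:, j], pheno_tr)[0, 1] for j in range(n_snps)])
selected = np.argsort(-np.abs(corr))[:n_causal]
A = np.column_stack([np.ones(n_train), geno_tr[:, selected]])
beta, *_ = np.linalg.lstsq(A, pheno_tr, rcond=None)

pred = np.column_stack([np.ones(n_test), geno_te[:, selected]]) @ beta
print("causal variants recovered:", len(set(selected) & set(causal)), "/", n_causal)
print("out-of-sample corr(predicted, actual):", round(np.corrcoef(pred, pheno_te)[0, 1], 2))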
See also Alan Turing:
“There is a remarkably close parallel between the problems of the physicist and those of the cryptographer. The system on which a message is enciphered corresponds to the laws of the universe, the intercepted messages to the evidence available, the keys for a day or a message to important constants which have to be determined. The correspondence is very close, but the subject matter of cryptography is very easily dealt with by discrete machinery, physics not so easily.”

Locality and Nonlinear Quantum Mechanics v2

The paper below will appear in Int.J.Mod.Phys.A. We went through a strange series of referees during the last year, with reactions ranging from

this result is correct but trivial 

to

this result is highly nontrivial but possibly correct

to, finally,

this is a nice result and should be published. 

I would like to lock the first two referees in a room to exchange ideas! (What!? Of course, HE is the idiot, not ME!)

A consequence of the process is the added appendix which proves (rather pedantically)

Ψ[φ] ≈ ψ_A[φ_A] × ψ_B[φ_B] × ···

for our coherent state example.
Locality and Nonlinear Quantum Mechanics
(http://arxiv.org/abs/1401.7018)

Chiu Man Ho, Stephen D.H. Hsu

Nonlinear modifications of quantum mechanics generically lead to nonlocal effects which violate relativistic causality. We study these effects using the functional Schrodinger equation for quantum fields and identify a type of nonlocality which causes nearly instantaneous entanglement of spacelike separated systems. We describe a simple example involving widely separated wave-packet (coherent) states, showing that nonlinearity in the Schrodinger evolution causes spacelike entanglement, even in free field theory.
Some excerpts:
The linear structure of quantum mechanics has deep and important consequences, such as the behavior of superpositions. One is naturally led to ask whether this linearity is fundamental, or merely an approximation: Are there nonlinear terms in the Schrodinger equation?

Nonlinear quantum mechanics has been explored in [1–6]. It has been observed that the fictitious violation of locality in the Einstein-Podolsky-Rosen (EPR) experiment in conventional linear quantum mechanics might become a true violation due to nonlinear effects [7, 8] (in [8] signaling between Everett branches is also discussed). This might allow superluminal communication and violate relativistic causality. These issues have subsequently been widely discussed [9].

Properties such as locality or causality are difficult to define in non-relativistic quantum mechanics (which often includes, for example, “instantaneous” potentials such as the Coulomb potential). Therefore, it is natural to adopt the framework of quantum field theory: Lorentz invariant quantum field theories are known to describe local physics with relativistic causality (influences propagate only within the light cone), making violations of these properties easier to identify. ...

... Our results suggest that nonlinearity in quantum mechanics is associated with violation of relativistic causality. We gave a formulation in terms of factorized (unentangled) wavefunctions describing spacelike separated systems. Nonlinearity seems to create almost instantaneous entanglement of the two systems, no matter how far apart. Perhaps our results are related to what Weinberg [11] meant when he wrote “... I could not find any way to extend the nonlinear version of quantum mechanics to theories based on Einstein’s special theory of relativity ... At least for the present I have given up on the problem: I simply do not know how to change quantum mechanics by a small amount without wrecking it altogether.”
See also Wrong, Trivial, Not Original.

Boom, Bust, and the Global Race for Scientific Talent


Michael Teitelbaum's new book on STEM labor markets and human capital is reviewed by John McGowan. John and I attended Caltech together many years ago.
Falling Behind? Boom, Bust, and the Global Race for Scientific Talent
by Michael S. Teitelbaum
Princeton University Press
March 30, 2014

Introduction

Falling Behind? is a recent (March 2014) book by Michael Teitelbaum of the Sloan Foundation, a demographer and longtime critic of STEM (Science, Technology, Engineering and Mathematics) shortage claims. Falling Behind? is an excellent book with a wealth of data and information on the history of booms and busts in science and engineering employment since World War II, STEM shortage claims in general, and lobbying for “high-skilled” immigration “reform”. Although I have been a student of these issues for many years, I encountered many facts and insights that I did not know or had not thought of. Nonetheless the book has a number of weaknesses which readers should keep in mind.

... The evidence assembled in this book leads inescapably to three core findings:

o First, that the alarms about widespread shortages or shortfalls in the number of U.S. scientists and engineers are quite inconsistent with nearly all available evidence;

o Second, that similar claims of the past were politically successful but resulted in a series of booms and busts that did harm to the U.S. science and engineering enterprise and made careers in these fields increasingly unattractive; and

o Third, that the clear signs of malaise in the U.S. science and engineering workforce are structural in origin and cannot be cured simply by providing additional funding. To the contrary, recent efforts of this kind have proved to be destabilizing, and advocates should be careful what they wish for. ...
See also A Tale of Two Geeks.
