Monday, February 25, 2013

Google Glass

What's it like to try Google Glass?

Is it ready for everyone right now? Not really. Does the Glass team still have a huge distance to cover in making the experience work just the way it should every time you use it? Definitely.

But I walked away convinced that this wasn’t just one of Google’s weird flights of fancy. The more I used Glass the more it made sense to me; the more I wanted it. If the team had told me I could sign up to have my current glasses augmented with Glass technology, I would have put pen to paper (and money in their hands) right then and there. And it’s that kind of stuff that will make the difference between this being a niche device for geeks and a product that everyone wants to experience.

After a few hours with Glass, I’ve decided that the question is no longer ‘if,’ but ‘when?’

Friday, February 22, 2013

The nature of intuition

From an excellent blog post by Emanuel Derman. Derman contrasts Kahneman's use of "intuition" as quick insight with the physicist or mathematician's use of "intuition" to describe deep understanding operating at a subconscious level.
Kahneman’s "intuition" = a quick guess; I mean by intuition the insight that can come only after long mental struggles.

Kahneman is concerned with the biases of intuition. I am impressed with its occasional glimpses of absolute essence. Think Newton, Ampere, Maxwell, Einstein, Feynman, Spinoza or Freud or Schopenhauer maybe … That kind of intuition plays a major role in the discovery of nature’s truths.

Intuition is comprehensive. It unifies the subject with the object, the understander with the understood, the archer with the bow. Intuition isn’t easy to come by, but is the result of arduous struggle.

In both physics and finance the first major struggle is to gain some intuition about how to proceed; the second struggle is to transform that intuition into something more formulaic, a set of rules anyone can follow, rules that no longer require the original insight itself. ...

Compare Keynes on Newton:
I believe that the clue to his mind is to be found in his unusual powers of continuous concentrated introspection. . . . His peculiar gift was the power of holding continuously in his mind a purely mental problem until he had seen straight through it. I fancy his pre-eminence is due to his muscles of intuition being the strongest and most enduring with which a man has ever been gifted. Anyone who has ever attempted pure scientific or philosophical thought knows how one can hold a problem momentarily in one’s mind and apply all one’s powers of concentration to piercing through it, and how it will dissolve and escape and you find that what you are surveying is a blank. I believe that Newton could hold a problem in his mind for hours and days and weeks until it surrendered to him its secret. Then being a supreme mathematical technician he could dress it up, how you will, for purposes of exposition, but it was his intuition which was pre-eminently extraordinary—“so happy in his conjectures,” said De Morgan, “as to seem to know more than he could possibly have any means of proving.”
Wigner on Einstein's intuition versus von Neumann's raw intellectual power:
I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me.

... But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci's brilliance, he never produced anything as original.
See also Ulam:
[p.81] When we talked about Einstein, Johnny [von Neumann] would express the usual admiration for his epochal discoveries which had come to him so effortlessly ... But his admiration seemed mixed with some reservations, as if he thought, "Well, here he is, so very great," yet knowing his limitations. [ See also Feyerabend on the giants. ] ... I once asked Johnny whether he thought that Einstein might have developed a sort of contempt for other physicists, including even the best and most famous ones -- that he had been deified and lionized too much... Johnny agreed... "he does not think too much of others as possible rivals in the history of physics of our epoch."

Monday, February 18, 2013

In search of principles: when biology met physics

This is an excerpt from the introduction of Bill Bialek's book on biophysics. Bialek was a professor at Berkeley when I was a graduate student, but has since moved to Princeton. See also For the historians and the ladies, As flies to wanton boys are we to the gods, and Prometheus in the basement.
... In one view of history, there is a direct path from Bohr, Delbruck and Schrodinger to the emergence of molecular biology. Certainly Delbruck did play a central role, not least because of his insistence that the community should focus (as the physics tradition teaches us) on the simplest examples of crucial biological phenomena, reproduction and the transmission of genetic information. The goal of molecular biology to reduce these phenomena to interactions among a countable set of molecules surely echoed the physicists’ search for the fundamental constituents of matter, and perhaps the greatest success of molecular biology is the discovery that many of these basic molecules of life are universal, shared across organisms separated by hundreds of millions of years of evolutionary history. Where classical biology emphasized the complexity and diversity of life, the first generation of molecular biologists emphasized the simplicity and universality of life’s basic mechanisms, and it is not hard to see this as an influence of the physicists who came into the field at its start.

Another important idea at the start of molecular biology was that the structure of biological molecules matters. Although modern biology students, even in many high schools, can recite ‘structure determines function,’ this was not always obvious. To imagine, in the years immediately after World War II, that all of classical biochemistry and genetics would be reconceptualized once we could see the actual structures of proteins and DNA, was a revolutionary vision—a vision shared only by a handful of physicists and the most physical of chemists. Every physicist who visits the grand old Cavendish Laboratory in Cambridge should pause in the courtyard and realize that on that ground stood the ‘MRC hut,’ where Bragg nurtured a small group of young scientists who were trying to determine the structure of biological molecules through a combination of X–ray diffraction experiments and pure theory. To make a long and glorious story short, they succeeded, perhaps even beyond Bragg’s wildest dreams, and some of the most important papers of twentieth century biology thus were written in a physics department.

Perhaps inspired by the successes of their intellectual ancestors, each subsequent generation of physicists offered a few converts. The idea, for example, that the flow of information through the nervous system might be reducible to the behavior of ion channels and receptors inspired one group, armed with low noise amplifiers, intuition about the interactions of charges with protein structure, and the theoretical tools to translate this intuition into testable, quantitative predictions. The possibility of isolating a single complex of molecules that carried out the basic functions of photosynthesis brought another group, armed with the full battery of modern spectroscopic methods that had emerged in solid state physics. Understanding that the mechanical forces generated by a focused laser beam are on the same scale as the forces generated by individual biological molecules as they go about their business brought another generation of physicists to our subject. The sequencing of whole genomes, including our own, generated the sense that the phenomena of life could, at last, be explored comprehensively, and this inspired yet another group. These examples are far from complete, but give some sense for the diversity of challenges that drew physicists toward problems that traditionally had been purely in the domain of biologists. ...

... we proceed to explore three candidate principles: the importance of noise, the need for living systems to function without fine tuning of parameters, and the possibility that many of the different problems solved by living organisms are just different aspects of one big problem about the representation of information. Each of these ideas is something which many people have explored, and I hope to make clear that these ideas have generated real successes. The greatest successes, however, have been when these theoretical discussions are grounded in experiments on particular biological systems. As a result, the literature is fragmented along lines defined by the historical subfields of biology. The goal here is to present the discussion in the physics style, organized around principles from which we can derive predictions for particular examples. ...

Sunday, February 17, 2013

Weinberg on quantum foundations

I have been eagerly awaiting Steven Weinberg's Lectures on Quantum Mechanics, both because Weinberg is a towering figure in theoretical physics, and because of his cryptic comments concerning the origin of probability in no collapse (many worlds) formulations:
Einstein's Mistakes
Steven Weinberg, Physics Today, November 2005

Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wavefunction (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?

Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. [ITALICS MINE. THIS REMINDS ME OF FERMAT'S COMMENT IN THE MARGIN!] It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wavefunction, the Schrödinger equation, to observers and their apparatus. The difficulty is not that quantum mechanics is probabilistic—that is something we apparently just have to live with. The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics. ...
Weinberg's coverage of quantum foundations in section 3.7 of the new book is consistent with what is written above, although he does not resolve the question of how probability arises from the deterministic evolution of the wavefunction. (See here for my discussion, which involves, among other things, the distinction between objective and subjective probabilities; the latter can arise even in a deterministic universe).

1. He finds Copenhagen unsatisfactory: it does not allow QM to be applied to the observer and measuring process; it does not have a clean dividing line between observer and system.

2. He finds many worlds (no collapse, decoherent histories, etc.) unsatisfactory not because of the so-called basis problem (he accepts the unproved dynamical assumption that decoherence works as advertised), but rather because of the absence of a satisfactory origin of the Born rule for probabilities. (In other words, he doesn't elaborate on the "considerable progress..." alluded to in his 2005 essay!)

Weinberg's concluding paragraph:
There is nothing absurd or inconsistent about the ... general idea that the state vector serves only as a predictor of probabilities, not as a complete description of a physical system. Nevertheless, it would be disappointing if we had to give up the "realist" goal of finding complete descriptions of physical systems, and of using this description to derive the Born rule, rather than just assuming it. We can live with the idea that the state of a physical system is described by a vector in Hilbert space rather than by numerical values of the positions and momenta of all the particles in the system, but it is hard to live with no description of physical states at all, only an algorithm for calculating probabilities. My own conclusion (not universally shared) is that today there is no interpretation of quantum mechanics that does not have serious flaws [italics mine] ...
It is a shame that very few working physicists, even theoreticians, have thought carefully and deeply about quantum foundations. Perhaps Weinberg's fine summary will stimulate greater awareness of this greatest of all unresolved problems in science.
"I am a Quantum Engineer, but on Sundays I have principles." -- J.S. Bell

Friday, February 15, 2013

A Genetic Code for Genius?

Another BGI Cognitive Genomics story, this time in the Wall Street Journal. I think coverage in the popular press is beneficial if it gets people to think through the implications of future genomic technology. It seems likely that the technology will arrive well before our political leadership and punditocracy have a firm understanding of the consequences. (In support of my point, see the comments on the article at the WSJ site; more at Marginal Revolution.)
WSJ: ... Mr. Zhao is a high-school dropout who has been described as China's Bill Gates. He oversees the cognitive genomics lab at BGI, a private company that is partly funded by the Chinese government.

At the Hong Kong facility, more than 100 powerful gene-sequencing machines are deciphering about 2,200 DNA samples, reading off their 3.2 billion chemical base pairs one letter at a time. These are no ordinary DNA samples. Most come from some of America's brightest people—extreme outliers in the intelligence sweepstakes.

... "People have chosen to ignore the genetics of intelligence for a long time," said Mr. Zhao, who hopes to publish his team's initial findings this summer. "People believe it's a controversial topic, especially in the West. That's not the case in China," where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

The roots of intelligence are a mystery. Studies show that at least half of the variation in intelligence quotient, or IQ, is inherited. But while scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The Hong Kong researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ.

Their conclusions could lay the groundwork for a genetic test to predict a person's inherited cognitive ability. Such a tool could be useful, but it also might be divisive. ...

The City and The Street

Michael Lewis writes in the NY Review of Books.

In the early 1990s I was hanging out in Manhattan with some friends in the derivatives business, one of them an Oxbridge guy who had been to graduate school at Harvard. When I used the then-new term "financial engineering" in conversation, he burst out laughing. "Is that what they're going to call it?" he asked, incredulous. "It's just bollocks."

See also The illusion of skill.
NYBooks: ... If you had to pick a city on earth where the American investment banker did not belong, London would have been on any shortlist. In London, circa 1980, the American investment banker had going against him not just widespread commercial lassitude but the locals’ near-constant state of irony. Wherever it traveled, American high finance required an irony-free zone, in which otherwise intelligent people might take seriously inherently absurd events: young people with no experience in finance being paid fortunes to give financial advice, bankers who had never run a business orchestrating takeovers of entire industries, and so on. It was hard to see how the English, with their instinct to not take anything very seriously, could make possible such a space.

Yet they did. And a brand-new social type was born: the highly educated middle-class Brit who was more crassly American than any American. In the early years this new hybrid was so obviously not an indigenous species that he had a certain charm about him, like, say, kudzu in the American South at the end of the nineteenth century, or a pet Burmese python near the Florida Everglades at the end of the twentieth. But then he completely overran the place. Within a decade half the graduates of Oxford and Cambridge were trying to forget whatever they’d been taught about how to live their lives and were remaking themselves in the image of Wall Street. Monty Python was able to survive many things, but Goldman Sachs wasn’t one of them.

The introduction into British life of American ideas of finance, and success, may seem trivial alongside everything else that was happening in Great Britain at the time (Mrs. Thatcher, globalization, the growing weariness with things not working properly, an actually useful collapse of antimarket snobbery), but I don’t think it was. The new American way of financial life arrived in England and created a new set of assumptions and expectations for British elites—who, as it turned out, were dying to get their hands on a new set of assumptions and expectations. The British situation was more dramatic than the American one, because the difference between what you could make on Wall Street versus doing something useful in America, great though it was, was still a lot less than the difference between what you could make for yourself in the City of London versus doing something useful in Great Britain.

In neither place were the windfall gains to the people in finance widely understood for what they were: the upside to big risk-taking, the costs of which would be socialized, if they ever went wrong. For a long time they looked simply like fair compensation for being clever and working hard. But that’s not what they really were; and the net effect of Wall Street’s arrival in London, combined with the other things that were going on, was to get rid of the dole for the poor and replace it with a far more generous, and far more subtle, dole for the rich. The magic of the scheme was that various forms of financial manipulation appeared to the manipulators, and even to the wider public, as a form of achievement. All these kids from Oxford and Cambridge who flooded into Morgan Stanley and Goldman Sachs weren’t just handed huge piles of money. They were handed new identities: the winners of this new marketplace. They still lived in England but, because of the magnitude of their success, they were now detached from it. ...

The uses of gloom

Omri Tal writes on the history of The Gloomy Prospect. Apparently, the term originally referred to non-shared environmental effects (see Random microworlds), before Turkheimer applied it to genetic causation.

Personally, I'm an optimist -- I believe in Pessimism of the Intellect but Optimism of the Will  :-)

Hi Steve,

Just to point out with regard to your recent interesting post.

1. The Gloomy Prospect is a term originally by Plomin and Daniels (1987). As Turkheimer (2000) notes:

Plomin and Daniels (1987) almost identified the answer to this question, but dismissed it as too pessimistic:

"One gloomy prospect is that the salient environment might be unsystematic, idiosyncratic, or serendipitous events such as accidents, illnesses, or other traumas . . . Such capricious events, however, are likely to prove a dead end for research. More interesting heuristically are possible systematic sources of differences between families. (p. 8)"

The gloomy prospect is true. Nonshared environmental variability predominates not because of the systematic effects of environmental events that are not shared among siblings, but rather because of the unsystematic effects of all environmental events…
-- But indeed, it is Turkheimer's paper that has made the term famous.

2. The Gloomy Prospect is predominately about the unsystematic 'nonshared environment', rather than about missing heritability. In the section you quote, he extends this notion to include unknown genetic factors, but it's not the "classic" use ;)

Two interesting papers by Omri, at his web page:

Tal O, 2013. Two Complementary Perspectives on Inter-Individual Genetic Distance. BioSystems 111(1): 18–36.

Tal O, 2012. Towards an Information-Theoretic Approach to Population Structure. Proceedings of Turing-100: The Alan Turing Centenary, pp. 353–369.

Wednesday, February 13, 2013

Eric, why so gloomy?

Eric Turkheimer wrote a blog post reacting to my comments (On the verge) about some recent intelligence GWAS results.

I'm an admirer of Eric's work in behavior genetics, as you can tell from this 2008 post The joy of Turkheimer. Since then we've gotten to know each other via the internet and have even met at a conference.

Eric is famous for (among other things) his Gloomy Prospect:
The question is not whether there are correlations to be found between individual genes and complex behavior— of course there are — but instead whether there are domains of genetic causation in which the gloomy prospect does not prevail, allowing the little bits of correlational evidence to cohere into replicable and cumulative genetic models of development. My own prediction is that such domains will prove rare indeed, and that the likelihood of discovering them will be inversely related to the complexity of the behavior under study.
He is right to be cautious about whether discovery of individual gene-trait associations will cohere into a satisfactory explanatory or predictive framework. It is plausible to me that the workings of the DNA program that creates a human brain are incredibly complex and beyond our detailed understanding for some time to come.

However, I am optimistic about the prediction problem. There are good reasons to think that the linear (additive) term in the genetic model, in which the phenotype is written as a sum of effect sizes x_i over individual loci plus smaller nonlinear corrections, gives the dominant contribution to variation in cognitive ability:

The evidence comes from estimates of additive (linear) variance in twin and adoption studies, as well as from evolutionary theory itself. Fisher's Fundamental Theorem of Natural Selection identifies additive variance as the main driver of evolutionary change in the limit where selection timescales are much longer than recombination (e.g., due to sexual reproduction) timescales. Thus it is reasonable to expect that most of the change in intelligence in genus Homo over the last few million years is encoded in a linear genetic architecture.
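For reference (my gloss, not part of the original post), Fisher's theorem in its continuous-time form states that the rate of increase of mean Malthusian fitness under selection equals the additive genetic variance in fitness:

\[
\frac{d\bar{m}}{dt} \;=\; \sigma^2_A(m) \;\geq\; 0,
\]

so selection acts on the additive component of variance; dominance and epistatic contributions do not appear at this order.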

GWAS, which identify causal loci and their effect sizes, are in fact fitting the parameters of this linear model. (Most effect sizes x_i will be zero, with perhaps 10k non-zero entries distributed according to some kind of power law.) Once we have characterized loci accounting for most of the variance, we will be able to predict phenotypes based only on genotype information (i.e., without further information about the individual). This is the genomic prediction problem, which has already been partially solved for inbred lines of domesticated plants and animals. My guess is that it will be solved for humans once of order a million genotype-phenotype pairs are available for analysis. Understanding the nonlinear parts will probably take much more data, but these are likely to be subleading effects.
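As a toy illustration of the fitting problem (all numbers invented, at a scale thousands of times smaller than a real GWAS), one can simulate a sparse additive architecture and recover it with L1-penalized regression; here a bare-bones iterative soft-thresholding (ISTA) solver stands in for whatever pipeline an actual study would use:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 1000, 2000, 30   # individuals, SNPs, causal loci (toy scale)

# Genotypes coded as minor-allele counts 0/1/2, then standardized.
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
G = (G - G.mean(axis=0)) / G.std(axis=0)

# Sparse additive architecture: most effect sizes x_i are exactly zero.
beta = np.zeros(p)
causal = rng.choice(p, size=s, replace=False)
beta[causal] = rng.normal(0.0, 1.0, size=s)

# Phenotype = additive genetic value + environmental noise (h^2 ~ 0.5).
g = G @ beta
y = g + rng.normal(0.0, g.std(), size=n)

def lasso_ista(X, y, lam, iters=500):
    """L1-penalized least squares via iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink to zero
    return b

n_train = 800
beta_hat = lasso_ista(G[:n_train], y[:n_train], lam=0.3 * n_train)

# Out-of-sample polygenic prediction on held-out individuals.
r = np.corrcoef(G[n_train:] @ beta_hat, y[n_train:])[0, 1]
print(f"recovered {np.count_nonzero(beta_hat)} nonzero effects, "
      f"held-out correlation {r:.2f}")
```

The L1 penalty exploits exactly the sparsity assumed above: with most true effects zero, far fewer samples than p suffice to recover the large-effect loci, while small effects are shrunk away until sample size grows.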

Inside China’s Genome Factory

Another BGI profile, this time in MIT Technology Review.
Technology Review: ... In its scientific work, BGI often acts as the enabler of other people’s ideas. That is the case in a major project conceived by Steve Hsu, vice president for research at Michigan State University, to search for genes that influence intelligence. Under the guidance of Zhao Bowen, BGI is now sequencing the DNA of more than 2,000 people—mostly Americans—who have IQ scores of at least 160, or four standard deviations above the mean.

The DNA comes primarily from a collection of blood ­samples amassed by Robert Plomin, a psychologist at King’s College, London. The plan, to compare the genomes of geniuses and people of ordinary intelligence, is scientifically risky (it’s likely that thousands of genes are involved) and somewhat controversial. For those reasons it would be very hard to find the $15 or $20 million needed to carry out the project in the West. “Maybe it will work, maybe it won’t,” Plomin says. “But BGI is doing it basically for free.”

From Plomin’s perspective, BGI is so large that it appears to have more DNA sequencing capacity than it knows what to do with. It has “all those machines and people that have to be fed” with projects, he says. The IQ study isn’t the only mega-project under way. With a U.S. nonprofit, Autism Speaks, BGI is being paid to sequence the DNA of up to 10,000 people from families with autistic children. For researchers in Denmark, BGI is decoding the genomes of 3,000 obese people and 3,000 lean ones.

Beyond basic science, BGI has begun positioning itself as the engine of what’s expected to be a boom in the medical use of genome scans. In 2011, for instance, it agreed to install a DNA analysis center inside the Children’s Hospital of Philadelphia, a leading pediatric hospital. Ten bioinformatics experts were flown in from Shenzhen on temporary visas to create the center, which opened six months later with five sequencing machines.

As the technology enters clinical use, the number of genomes sequenced in their entirety could catapult into the millions per year. That is what both the Philadelphia hospital and BGI are preparing for. “They have the expertise, instruments, and economies of scale,” says Robert Doms, pathologist-in-chief of the children’s hospital. ...

Monday, February 11, 2013

On the verge

This paper is based on a combined sample of 18k individuals, taken from several European longitudinal studies of child development. Note that childhood intelligence is not as heritable as adult intelligence, according to classical (twin, adoption) methods.
Molecular Psychiatry (Nature Publishing Group), 29 January 2013 | doi:10.1038/mp.2012.184

Childhood intelligence is heritable, highly polygenic and associated with FNBP1L

Intelligence in childhood, as measured by psychometric cognitive tests, is a strong predictor of many important life outcomes, including educational attainment, income, health and lifespan. Results from twin, family and adoption studies are consistent with general intelligence being highly heritable and genetically stable throughout the life course. No robustly associated genetic loci or variants for childhood intelligence have been reported. Here, we report the first genome-wide association study (GWAS) on childhood intelligence (age range 6–18 years) from 17 989 individuals in six discovery and three replication samples. Although no individual single-nucleotide polymorphisms (SNPs) were detected with genome-wide significance, we show that the aggregate effects of common SNPs explain 22–46% of phenotypic variation in childhood intelligence in the three largest cohorts (P=3.9 × 10^−15, 0.014 and 0.028). FNBP1L, previously reported to be the most significantly associated gene for adult intelligence, was also significantly associated with childhood intelligence (P=0.003). Polygenic prediction analyses resulted in a significant correlation between predictor and outcome in all replication cohorts. The proportion of childhood intelligence explained by the predictor reached 1.2% (P=6 × 10^−5), 3.5% (P=10^−3) and 0.5% (P=6 × 10^−5) in three independent validation cohorts. Given the sample sizes, these genetic prediction results are consistent with expectations if the genetic architecture of childhood intelligence is like that of body mass index or height. Our study provides molecular support for the heritability and polygenic nature of childhood intelligence. Larger sample sizes will be required to detect individual variants with genome-wide significance.

Compare to this figure for height and BMI from an earlier post: Five years of GWAS discovery. It appears we may be close to the threshold required to find the first genome-wide significant hits.
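For a rough sense of the samples required (a standard back-of-envelope power calculation; the numbers are mine, not the paper's): a SNP explaining a fraction q of phenotypic variance yields a test statistic z of roughly sqrt(n q), so detection at genome-wide significance (p < 5 × 10^−8) needs n of order z*^2 / q:

```python
import math

def z_from_p(p_two_sided):
    """Invert the two-sided normal tail P(|Z| > z) = erfc(z / sqrt 2) by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p_two_sided:
            lo = mid      # tail still too fat: need a larger z
        else:
            hi = mid
    return lo

z_star = z_from_p(5e-8)   # genome-wide significance threshold, about 5.45

# A SNP explaining fraction q of phenotypic variance gives z ~ sqrt(n q);
# roughly 50% power once n q = z_star**2.
for q in (1e-3, 1e-4, 1e-5):
    print(f"q = {q:.0e}: n ~ {z_star**2 / q:,.0f}")
```

On this crude estimate, per-SNP effects at the 10^−4 level of variance explained become detectable around n of a few hundred thousand, which is why samples of ~18k find aggregate but not individual SNP signals.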

Sunday, February 10, 2013

Quantum correspondence on black holes

Some Q&A from correspondence about my recent paper on quantum mechanics of black holes. See also Lubos Motl's blog post.
Q: Put another way (loosely following Bousso), consider two observers, Alice and Bob. They steer their rocket toward the black hole and into "the zone" or "the atmosphere". Then, Bob takes a lifeboat and escapes to asymptotic infinity while Alice falls in. I hope you agree that Bob and Alice's observations should agree up to the point where their paths diverge. On the other hand, it seems that Bob, by escaping to asymptotic infinity can check whether the evolution is unitary (or at least close to unitary). I wonder which parts of this you disagree with.

A: If Bob has the measurement capability to determine whether Psi_final is a unitary evolution of Psi_initial, he has to be able to overcome decoherence ("see all the branches"). As such, he cannot have the same experience as Alice of "going to the edge of the hole" -- that experience is specific to a certain subset of branches. (Note, Bob not only has to be able to see all the semiclassical branches, he also has to be able to detect small amplitude branches where some of the information leaked out of the hole, perhaps through nonlocality.) To me, this is the essential content of complementarity: the distinction between a super-observer (who can overcome decoherence) and ordinary observers who cannot. Super-observers do not (in the BH context) record a semiclassical spacetime, but rather a superposition of such (plus even more wacky low amplitude branches).

In the paragraph of my paper that directly addresses AMPS, I simply note that the "B" used in logical steps (1) and (2) are totally different objects. One is defined on a specific branch, the other is defined over all branches. Perhaps an AMPS believer can reformulate my compressed version of the firewall argument to avoid this objection, but I think the original argument has a problem.

Q: I think all one needs for the AMPS argument is that the entanglement entropy of the radiation decreases with the newly emitted quanta. This is, of course, a very tough quantum computation, but I don't see the obstruction to it being run on disparate semiclassical branches to use your language. I was imagining doing a projective measurement of the position of the black hole (which should be effectively equivalent to decoherence of the black hole's position); this still leaves an enormous Hilbert space of states associated with the black hole unprojected/cohered. I am not sure whether you are disagreeing with that last statement or not, but let me proceed. Then it seems we are free to run the AMPS argument. There is only a relatively small amount of entanglement between the remaining degrees of freedom and the black hole's position. Thus, unitarity (and Page style arguments) suggest a particular result for the quantum computation mentioned above by Bob on whichever branch of the wave function we are discussing (which seems to be effectively the same branch that Alice is on).

A: An observer who is subject to the decoherence that spatially localizes the hole would see S_AB to be much larger than S_A, where A are the (early, far) radiation modes and B are the near-horizon modes. This is because it takes enormous resources to detect the AB entanglement, whereas A looks maximally mixed. I think this is discussed rather explicitly in arXiv:1211.7033 -- one of Nomura's papers that he made me aware of after I posted my paper. Measurements related to unitarity, purity or entanglement of, e.g., AB modes, can only be implemented by what I call super-observers: they would see multiple BH spacetimes. Since at least some A and B modes move on the light cone, these operations may require non-local actions by Bob.

Q: Do you think there is an in-principle obstruction that prevents observers from overcoming decoherence? Is there some strict delineation between what can be put in a superposition and what cannot?

A: This is an open question in quantum foundations: i.e., at what point are there not enough resources in the entire visible universe to defeat decoherence -- at which point you have de facto wavefunction collapse. Omnes wrote a whole book arguing that once you have decoherence due to Avogadro's number of environmental DOF, the universe does not contain sufficient resources to detect the other branch. It does seem true to me that if one wants to make the BH paradox sharp, which requires that the mass of the BH be taken to infinity, then, yes, there is an in-principle gap between the two regimes. The resources required grow exponentially with the BH entropy.
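[To put a number on that last claim: if defeating decoherence means coherently tracking an environment of Avogadro-scale size, the relevant Hilbert-space dimension is 2^N with N ~ 10^23, which dwarfs any conventional resource count (e.g., the rough estimate of 10^80 atoms in the visible universe). Toy arithmetic in Python; the numbers are illustrative assumptions, not from the exchange.]

```python
from math import log10

n_env = 6.0e23                    # Avogadro-scale count of environmental qubits (toy)
log10_dim = n_env * log10(2)      # the branching environment has dimension 2**n_env
atoms_in_universe_log10 = 80      # rough conventional order-of-magnitude estimate

# The exponent of the dimension (~1.8e23 decimal digits) is itself vastly
# larger than 80: no apparatus built from ~10^80 atoms can resolve that space.
```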

Thursday, February 07, 2013

"In the land of autistics, the aspie is king"

I've often thought this to myself, but was amused to hear it attributed to a famous theoretician at Princeton the other day.

Meeting of minds

Ron Unz meets Amy Chua at Yale Law School. Wish I could have been there :-)
The American Conservative: ... On Wednesday afternoon, I made an hour-long presentation at the Yale Law School, co-sponsored by the Asian-American Law Students Association and the Federalist Society, which drew a remarkable 100 students out of a total enrollment of around 600, filling one of the large lecture halls. In this instance, the research findings and proposals of my article were the central topic under examination, and the law students had many detailed and probing questions, producing a very useful discussion.

But for me, the true highlight of the visit came later that evening, when Amy Chua—of “Tiger Mom” fame—and her husband Jed Rubenfeld, both prominent law professors, hosted a small gathering at their home for faculty members and students, with the resulting discussion of the topics in my article and various other public policy issues continuing on for several hours. I’ve always greatly enjoyed such thoughtful discussions with extremely intelligent, knowledgeable people, and I afterward once again regretted my own late-1980s defection from the academic world. ...
Ron's (unpublished) essay on the evolution of Amy Chua. See also Tiger mothers and behavior genetics.

The figures of merit: [chart not reproduced here]

Sunday, February 03, 2013

Quantum mechanics of black holes

A paper from last summer by Almheiri, Marolf, Polchinski and Sully (AMPS) has stimulated a lot of new work on the black hole information problem. At the time I was only able to follow it superficially as I was busy with my new position at MSU. But finally I've had time to think about it more carefully -- see this paper.

Macroscopic superpositions and black hole unitarity

We discuss the black hole information problem, including the recent claim that unitarity requires a horizon firewall, emphasizing the role of decoherence and macroscopic superpositions. We consider the formation and evaporation of a large black hole as a quantum amplitude, and note that during intermediate stages (e.g., after the Page time), the amplitude is a superposition of macroscopically distinct (and decohered) spacetimes, with the black hole itself in different positions on different branches. Small but semiclassical observers (who are themselves part of the quantum amplitude) that fall into the hole on one branch will miss it entirely on other branches and instead reach future infinity. This observation can reconcile the subjective experience of an infalling observer with unitarity. We also discuss implications for the nice slice formulation of the information problem, and for complementarity.

Two good introductions to horizon firewalls and AMPS, by John Preskill and Joe Polchinski.

Earlier posts on this blog of related interest: here, here and here. From discussion at the third link (relevant, I claim, to AMPS):
Hawking claimed bh's could make a pure state evolve to a mixed state. But decoherence does this all the time, FAPP. To tell whether it is caused by the bh rather than decoherence, one needs to turn off (defeat) the latter. One has to go beyond FAPP!

FAPP = Bell's term = "For All Practical Purposes"
In the paper I cite a famous article by Bell, Against Measurement, which appeared in Physics World in 1990, and which emphasizes the distinction between actual pure-to-mixed state evolution and its apparent, or FAPP, counterpart (caused by decoherence). This distinction is central to an understanding of quantum foundations. The article can be a bit hard to find, so I am including the link above.
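Bell's distinction can be made concrete in a two-qubit toy model: a system qubit decohered by a single environment qubit is in a globally pure state, yet its reduced density matrix is maximally mixed, i.e., mixed FAPP. A sketch in Python with NumPy (the state and labels are illustrative, not from Bell's article):

```python
import numpy as np

# |psi> = (|0>_S |0>_E + |1>_S |1>_E) / sqrt(2): system S decohered by environment E.
psi = np.zeros(4, dtype=complex)
psi[0] = 1 / np.sqrt(2)   # |0>_S |0>_E
psi[3] = 1 / np.sqrt(2)   # |1>_S |1>_E

rho = np.outer(psi, psi.conj())                          # global state
rho_S = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out E

purity_global = np.trace(rho @ rho).real   # 1.0: the global state is exactly pure
purity_S = np.trace(rho_S @ rho_S).real    # 0.5: S alone looks maximally mixed
```

Only an observer with access to E (one who goes "beyond FAPP") can verify the global purity; an observer confined to S sees apparent pure-to-mixed evolution.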

Slides from an elementary lecture: black holes, entropy and information.
