Saturday, November 27, 2021

Social and Educational Mobility: Denmark vs USA (James Heckman)




Despite generous social programs such as free pre-K education, free college, and massive transfer payments, Denmark is similar to the US in key measures of inequality, such as educational outcomes and cognitive test scores. 

While transfer payments can equalize, to some degree, disposable income, they do not seem to be able to compensate for large family effects on individual differences in development. 

These observations raise the following questions: 

1. What is the best-case scenario for the US if all of the progressive government programs are implemented -- support for child development, free high-quality K-12 education, free college, etc.?

2. What is the causal mechanism for stubborn inequality of outcomes, transmitted from parent to child (i.e., within families)? 

Re #2: Heckman and collaborators focus on environmental factors, but do not (as far as I can tell) discuss genetic transmission. We already know that polygenic scores are correlated with the education and income levels of parents, and (from adoption studies) that children tend to resemble their biological parents much more strongly than their adoptive parents. These results suggest that genetic transmission of inequality may dominate environmental transmission.
  
See The Contribution of Cognitive and Noncognitive Skills to Intergenerational Social Mobility (McGue et al. 2020).


Note: Denmark is very homogeneous in ancestry, and the data presented in these studies (e.g., polygenic scores and social mobility) are also drawn from European-ancestry cohorts. The focus here is not on ethnicity or on group differences between ancestries. The focus is on social and educational mobility within European-ancestry populations, with or without generous government programs supporting free college education, daycare, pre-K, etc.

Lessons for Americans from Denmark about inequality and social mobility 
James Heckman and Rasmus Landersø 
Abstract: Many progressive American policy analysts point to Denmark as a model welfare state with low levels of income inequality and high levels of income mobility across generations. It has in place many social policies now advocated for adoption in the U.S. Despite generous Danish social policies, family influence on important child outcomes in Denmark is about as strong as it is in the United States. More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects. Powerful forces not easily mitigated by Danish-style welfare state programs operate in both countries.
The paper is also discussed in this episode of the EconTalk podcast. Russ Roberts does not ask the obvious question about disentangling family environment from genetic transmission of inequality.
 

The figure below appears in Game Over: Genomic Prediction of Social Mobility. It shows SNP-based polygenic score versus life outcome (socioeconomic index, on the vertical axis) in four longitudinal cohorts, one from New Zealand (Dunedin) and three from the US. Each cohort (they vary somewhat in size) has thousands of individuals, ~20k in total, all of European ancestry. The points displayed are averages over bins containing 10-50 individuals. Within each cohort, the individuals have been grouped by childhood (family) socioeconomic status. Social mobility can be predicted from polygenic score. Note that higher-SES families tend to have higher polygenic scores on average -- which is what one might expect from a society that is at least somewhat meritocratic. The cohorts were not used in training -- this is true out-of-sample validation. Furthermore, the four cohorts represent different geographic regions (even different continents) and individuals born in different decades.
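For readers who want to see concretely what goes into such a binned plot, here is a minimal sketch (not the authors' actual code; the dataframe and the column names 'pgs', 'child_ses', and 'attained_ses' are hypothetical) of averaging adult outcomes within polygenic-score bins, separately for each childhood-SES group:

# Minimal sketch of the binned analysis behind figures like this one.
# Assumes a hypothetical dataframe with columns:
#   'pgs'          -- SNP-based polygenic score, standardized
#   'child_ses'    -- childhood (family) SES group, e.g. 'low'/'mid'/'high'
#   'attained_ses' -- adult socioeconomic index
import pandas as pd
import matplotlib.pyplot as plt

def binned_means(df, n_bins=20):
    """Average attained SES within polygenic-score bins, per childhood-SES group."""
    out = []
    for group, g in df.groupby("child_ses"):
        g = g.copy()
        g["pgs_bin"] = pd.qcut(g["pgs"], q=n_bins, duplicates="drop")
        means = g.groupby("pgs_bin", observed=True).agg(
            pgs=("pgs", "mean"), attained_ses=("attained_ses", "mean")
        )
        means["child_ses"] = group
        out.append(means)
    return pd.concat(out, ignore_index=True)

def plot_binned(df):
    binned = binned_means(df)
    for group, g in binned.groupby("child_ses"):
        plt.scatter(g["pgs"], g["attained_ses"], label=f"childhood SES: {group}")
    plt.xlabel("polygenic score")
    plt.ylabel("attained socioeconomic index")
    plt.legend()
    plt.show()

Each plotted point then corresponds to a bin of individuals, as in the figure.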




The figure below appears in More on SES and IQ.

Where is the evidence for environmental effects described above in Heckman's abstract: "More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects"? Do parents not seek these advantages for their adopted children as well as for their biological children? Or is there an entirely different causal mechanism based on shared DNA?

 


 

Sunday, November 14, 2021

Has Hawking's Black Hole Information Paradox Been Resolved?



In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
https://arxiv.org/abs/hep-th/9209058 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form  ψ[g,M]  where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in the location of the BH is of order the initial mass squared (in Planck units), which is much larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.
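The scaling quoted here can be checked with a crude estimate (in Planck units; this is only an order-of-magnitude sketch, not a calculation taken from any specific paper):

\begin{align*}
T_H &\sim 1/M, \qquad N_{\rm quanta} \sim M/T_H \sim M^2, \\
\Delta p_{\rm total} &\sim \sqrt{N_{\rm quanta}}\; T_H \sim 1, \qquad v_{\rm recoil} \sim \Delta p_{\rm total}/M \sim 1/M, \\
\Delta x &\sim v_{\rm recoil}\, t_{\rm evap} \sim (1/M)\, M^3 \sim M^2 \;\gg\; R_s \sim M .
\end{align*}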

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states ψ in a large Hilbert space Y, the density matrix

ρ(X) = Tr_{Y∖X} |ψ⟩⟨ψ|

describing the (small) subspace X (i.e., the trace of |ψ⟩⟨ψ| over the complement of X in Y) will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.
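The concentration of measure statement is easy to check numerically in a toy model. The following sketch (a generic illustration, not tied to any specific BH calculation) draws a random pure state on a small factor tensored with a large one and measures how far the reduced density matrix is from maximally mixed:

# Toy numerical check: for a random pure state in a large Hilbert space, the
# reduced density matrix on a small factor X is very close to maximally mixed.
import numpy as np

rng = np.random.default_rng(0)

def reduced_density_matrix(d_small, d_large):
    """Random pure state on C^d_small (x) C^d_large, traced over the large factor."""
    psi = rng.normal(size=(d_small, d_large)) + 1j * rng.normal(size=(d_small, d_large))
    psi /= np.linalg.norm(psi)
    return psi @ psi.conj().T          # rho(X) = Tr_large |psi><psi|

d_small = 4
for d_large in [4, 64, 1024, 16384]:
    rho = reduced_density_matrix(d_small, d_large)
    dist = 0.5 * np.abs(np.linalg.eigvalsh(rho - np.eye(d_small) / d_small)).sum()
    print(f"d_large={d_large:6d}  trace distance from maximally mixed: {dist:.4f}")

The trace distance falls off roughly like sqrt(d_small / d_large), i.e., it is exponentially small if the large factor has dimension ~ exp(S).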


These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 


2. BH evaporation leads to macroscopic superposition states; why this invalidates the No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.


3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention, it was understood that the approximation of a fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically, the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces representing distinct geometries must be considered. In the "quantum hair" results described at the bottom of this post, it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 


This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
https://arxiv.org/abs/2012.05770 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 

The discussion below suggests that each internal BH state described by the coefficients { c_n } results in a different final radiation state -- i.e., the process can be unitary.





Note Added

In the comments, David asks about the results described in the 2020 Quanta article "The Most Famous Paradox in Physics Nears Its End."

I thought about discussing those results in the post, but (1) it was already long, and (2) they use a very different AdS approach. 

However, Raju does discuss these papers in his review. 

Most of the theorists in the Quanta article accept the basic formulation of the information paradox, so they are surprised to find indications of unitary black hole evaporation. As I mentioned in the post, I don't think the paradox itself is well established, so I am not surprised. 

I think that the quantum hair results are important because they show explicitly that the internal state of the hole affects the quantum state of the graviton field, which then influences the Hawking radiation production. 

It was pointed out by Papadodimas and Raju, and also in my 2013 paper arXiv:1308.5686, that tiny correlations in the radiation density matrix could purify it. That is, the Hawking density matrix plus exp(-S) corrections (which everyone expects are there) could result from a pure state in the large Hilbert space Y, which has dimensionality ~ exp(+S). This is related to what I wrote in the post: start with a pure state in Y and trace over the complement of X. The resulting ρ(X) is exponentially close to thermal (maximally mixed) even though it came from a pure state.
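A standard toy example (an illustration only, not taken from the papers above) shows why corrections of this size suffice. With N = exp(S) orthonormal radiation states |n⟩ and arbitrary phases θ_n,

\begin{align*}
|\psi\rangle &= \frac{1}{\sqrt{N}}\sum_{n=1}^{N} e^{i\theta_n}\,|n\rangle, \\
|\psi\rangle\langle\psi| &= \frac{1}{N}\sum_{n}|n\rangle\langle n|
\;+\; \frac{1}{N}\sum_{m\neq n} e^{i(\theta_m-\theta_n)}\,|m\rangle\langle n| .
\end{align*}

The first term is the thermal-looking maximally mixed density matrix; each off-diagonal correction has magnitude 1/N = exp(-S), yet the full state is exactly pure.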

Wednesday, November 10, 2021

Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity (published version)

This is the published version of our recent paper. See previous discussion of the arXiv preprint: Finitism and Physics.
Physics Letters B Volume 823, 10 December 2021, 136763 
Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity 
Xavier Calmet and Stephen D.H. Hsu 
https://doi.org/10.1016/j.physletb.2021.136763 
Abstract 
We show that the precision of an angular measurement or rotation (e.g., on the orientation of a qubit or spin state) is limited by fundamental constraints arising from quantum mechanics and general relativity (gravitational collapse). The limiting precision is 1/r in Planck units, where r is the physical extent of the (possibly macroscopic) device used to manipulate the spin state. This fundamental limitation means that spin states cannot be experimentally distinguished from each other if they differ by a sufficiently small rotation. Experiments cannot exclude the possibility that the space of quantum state vectors (i.e., Hilbert space) is fundamentally discrete, rather than continuous. We discuss the implications for finitism: does physics require infinity or a continuum?
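To get a feel for the numbers in the bound (δθ ≳ 1/r in Planck units, i.e., δθ ≳ l_Planck/r in ordinary units), here is a quick calculation with illustrative device sizes; the sizes are my own choices, not values from the paper:

# Quick numerical reading of the bound delta_theta >~ l_Planck / r.
# Device sizes below are illustrative, not taken from the paper.
l_planck = 1.616e-35  # meters

for label, r in [("1 m device", 1.0),
                 ("1 km device", 1.0e3),
                 ("Earth-sized device", 1.3e7),
                 ("observable-universe-sized device", 8.8e26)]:
    print(f"{label:35s} r = {r:9.2e} m   delta_theta >~ {l_planck / r:.2e} rad")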

In the revision we edited the second paragraph below to clarify the history regarding Hilbert's program, Gödel, and the status of the continuum in analysis. The continuum was quite controversial at the time and was one of the primary motivations for Hilbert's axiomatization. There is a kind of modern middle-brow view that epsilon and delta proofs are sufficient to resolve the question of rigor in analysis, but this ignores far more fundamental problems that forced Hilbert, von Neumann, Weyl, etc. to resort to logic and set theory.

In the early 20th century little was known about neuroscience (i.e., our finite brains made of atoms), and it had not been appreciated that the laws of physics themselves might contain internal constraints that prevent any experimental test of infinitely continuous structures. Hence we can understand Weyl's appeal to human intuition as a basis for the mathematical continuum (Platonism informed by Nature; Gödel was also a kind of Platonist), even if today it appears implausible. Now we suspect that our minds are simply finite machines and nothing more, and that Nature itself does not require a continuum -- i.e., it can be simulated perfectly well with finitary processes.

It may come as a surprise to physicists that infinity and the continuum are even today the subject of debate in mathematics and the philosophy of mathematics. Some mathematicians, called finitists, accept only finite mathematical objects and procedures [30]. The fact that physics does not require infinity or a continuum is an important empirical input to the debate over finitism. For example, a finitist might assert (contra the Platonist perspective adopted by many mathematicians) that human brains built from finite arrangements of atoms, and operating under natural laws (physics) that are finitistic, are unlikely to have trustworthy intuitions concerning abstract concepts such as the continuum. These facts about the brain and about physical laws stand in contrast to intuitive assumptions adopted by many mathematicians. For example, Weyl (Das Kontinuum [26], [27]) argues that our intuitions concerning the continuum originate in the mind's perception of the continuity of space-time.
There was a concerted effort beginning in the 20th century to place infinity and the continuum on a rigorous foundation using logic and set theory. As demonstrated by Gödel, Hilbert's program of axiomatization using finitary methods (originally motivated, in part, by the continuum in analysis) could not succeed. Opinions are divided on modern approaches which are non-finitary. For example, the standard axioms of Zermelo-Fraenkel (ZFC) set theory applied to infinite sets lead to many counterintuitive results such as the Banach-Tarski Paradox: given any two solid objects, the cut pieces of either one can be reassembled into the other [28]. When examined closely all of the axioms of ZFC (e.g., Axiom of Choice) are intuitively obvious if applied to finite sets, with the exception of the Axiom of Infinity, which admits infinite sets. (Infinite sets are inexhaustible, so application of the Axiom of Choice leads to pathological results.) The Continuum Hypothesis, which proposes that there is no cardinality strictly between that of the integers and reals, has been shown to be independent (neither provable nor disprovable) in ZFC [29]. Finitists assert that this illustrates how little control rigorous mathematics has on even the most fundamental properties of the continuum.
Weyl was never satisfied that the continuum and classical analysis had been placed on a solid foundation.
Das Kontinuum (Stanford Encyclopedia of Philosophy)
Another mathematical “possible” to which Weyl gave a great deal of thought is the continuum. During the period 1918–1921 he wrestled with the problem of providing the mathematical continuum—the real number line—with a logically sound formulation. Weyl had become increasingly critical of the principles underlying the set-theoretic construction of the mathematical continuum. He had come to believe that the whole set-theoretical approach involved vicious circles[11] to such an extent that, as he says, “every cell (so to speak) of this mighty organism is permeated by contradiction.” In Das Kontinuum he tries to overcome this by providing analysis with a predicative formulation—not, as Russell and Whitehead had attempted, by introducing a hierarchy of logically ramified types, which Weyl seems to have regarded as excessively complicated—but rather by confining the comprehension principle to formulas whose bound variables range over just the initial given entities (numbers). Accordingly he restricts analysis to what can be done in terms of natural numbers with the aid of three basic logical operations, together with the operation of substitution and the process of “iteration”, i.e., primitive recursion. Weyl recognized that the effect of this restriction would be to render unprovable many of the central results of classical analysis—e.g., Dirichlet’s principle that any bounded set of real numbers has a least upper bound[12]—but he was prepared to accept this as part of the price that must be paid for the security of mathematics.
As Weyl saw it, there is an unbridgeable gap between intuitively given continua (e.g. those of space, time and motion) on the one hand, and the “discrete” exact concepts of mathematics (e.g. that of natural number[13]) on the other. The presence of this chasm meant that the construction of the mathematical continuum could not simply be “read off” from intuition. It followed, in Weyl’s view, that the mathematical continuum must be treated as if it were an element of the transcendent realm, and so, in the end, justified in the same way as a physical theory. It was not enough that the mathematical theory be consistent; it must also be reasonable.
Das Kontinuum embodies Weyl’s attempt at formulating a theory of the continuum which satisfies the first, and, as far as possible, the second, of these requirements. In the following passages from this work he acknowledges the difficulty of the task:
… the conceptual world of mathematics is so foreign to what the intuitive continuum presents to us that the demand for coincidence between the two must be dismissed as absurd. (Weyl 1987, 108)
… the continuity given to us immediately by intuition (in the flow of time and of motion) has yet to be grasped mathematically as a totality of discrete “stages” in accordance with that part of its content which can be conceptualized in an exact way. 
See also The History of the Planck Length and the Madness of Crowds.

Tuesday, November 09, 2021

The Balance of Power in the Western Pacific and the Death of the Naval Surface Ship

Recent satellite photos suggest that the PLARF (People's Liberation Army Rocket Force) has been testing against realistic moving ship targets in the deserts of northwest China. Note that the ship model is on rails in the second photo below. Apparently there are over 30 km of rail lines, allowing the simulation of evasive maneuvers by an aircraft carrier (third figure below).


Large surface ships such as aircraft carriers are easy to detect (e.g., satellite imaging via radar sensors), and missiles (especially those with maneuver capability) are very difficult to stop. Advances in AI / machine learning tend to favor missile targeting, not defense of carriers. 

The key capability is autonomous final target acquisition by the missile at a range of tens of km -- i.e., the distance the ship can move during missile flight time after launch. State of the art air-to-air missiles already do this in BVR (Beyond Visual Range) combat. Note that they are much smaller than anti-ship missiles, with presumably much smaller radar seekers, yet they pursue a smaller, faster, more maneuverable target (an enemy aircraft). 

It seems highly likely that the technical problem of autonomous targeting of a large surface ship during final missile approach was solved by the PLARF some time ago. 

With this capability in place one only has to localize the carrier to within a few tens of km for the initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, etc. Obviously, if a satellite retains coverage of the ship during the entire attack, and can communicate with the missile, even this smart final targeting is not required.
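A back-of-envelope check of the "few tens of km" requirement, using assumed (not sourced) values for carrier speed, missile speed, and launch range:

# Back-of-envelope check of the localization requirement.  Speeds, ranges and
# the launch distance below are illustrative assumptions, not sourced figures.
carrier_speed = 15.0          # m/s (~30 knots)
missile_speed = 1700.0        # m/s (~Mach 5 average, assumed)
launch_range  = 1500e3        # m, assumed launch distance

flight_time = launch_range / missile_speed          # seconds of missile flight
ship_displacement = carrier_speed * flight_time     # how far the ship can move

print(f"missile flight time : {flight_time/60:.1f} min")
print(f"ship displacement   : {ship_displacement/1e3:.1f} km")
# Even for a launch from well over 1000 km away, the ship moves only ~10-15 km
# during the missile's flight, so an initial fix good to a few tens of km plus
# autonomous terminal seeking over that basket is sufficient.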

This is what a ship looks like to Synthetic Aperture Radar (SAR) from Low Earth Orbit (LEO). The PRC has had a sophisticated system (Yaogan) in place for almost a decade, and continues to launch new satellites for this purpose.



See LEO SAR, hypersonics, and the death of the naval surface ship:

In an earlier post we described how sea blockade (e.g., against Japan or Taiwan) can be implemented using satellite imaging and missiles, drones, AI/ML. Blue water naval dominance is not required. 
PLAN/PLARF can track every container ship and oil tanker as they approach Kaohsiung or Nagoya. All are in missile range -- sitting ducks. Naval convoys will be just as vulnerable. 
Sink one tanker or cargo ship, or just issue a strong warning, and no shipping company in the world will be stupid enough to try to run the blockade. 

But, But, But, !?! ...
USN guy: We'll just hide the carrier from the satellite and missile seekers using, you know, countermeasures! [Aside: don't cut my carrier budget!] 
USAF guy: Uh, the much smaller AESA/IR seeker on their AAM can easily detect an aircraft from much longer ranges. How will you hide a huge ship? 
USN guy: We'll just shoot down the maneuvering hypersonic missile using, you know, methods. [Aside: don't cut my carrier budget!] 
Missile defense guy: Can you explain to us how to do that? If the incoming missile maneuvers we have to adapt the interceptor trajectory (in real time) to where we project the missile to be after some delay. But we can't know its trajectory ahead of time, unlike for a ballistic (non-maneuvering) warhead.
More photos and maps in this 2017 post.

Monday, November 01, 2021

Preimplantation Genetic Testing for Aneuploidy: New Methods and Higher Pregnancy Rates


[ NOTE ADDED NOVEMBER 12 2021: Research seminar videos from ASRM are embargoed until 12/31. So this video will not be available until then. ]

This talk describes a study of PGT-A (Preimplantation Genetic Testing for Aneuploidy, i.e., testing for chromosomal normality) using two different methods: NGS vs. the new SNP-array platform (LifeView) developed by my startup Genomic Prediction. 

The SNP array platform allows very accurate genotyping of each embryo at ~1 million locations in the genome, and the subsequent bioinformatic analysis produces a much more accurate prediction of chromosomal normality than the older methods. 
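To give a sense of the kind of signal involved, here is a generic sketch of whole-chromosome copy-number detection from array data. This is not the LifeView pipeline, and the dataframe and column names are hypothetical:

# Generic sketch of whole-chromosome copy-number calling from SNP-array data
# (normalized log R ratio per probe).  This is NOT the LifeView pipeline,
# just an illustration of the kind of signal involved.
import numpy as np
import pandas as pd

def call_chromosomes(probes: pd.DataFrame, gain=0.3, loss=-0.3):
    """probes: hypothetical dataframe with columns 'chrom' and 'log_r_ratio'.
    A median log R ratio near 0 suggests two copies; strongly positive or
    negative medians suggest whole-chromosome gain (trisomy) or loss (monosomy)."""
    med = probes.groupby("chrom")["log_r_ratio"].median()
    calls = pd.cut(med, bins=[-np.inf, loss, gain, np.inf],
                   labels=["loss", "normal", "gain"])
    return pd.DataFrame({"median_lrr": med, "call": calls})

Real pipelines also use B-allele frequencies, per-probe noise models, mosaicism estimates, and so on; this only illustrates why whole-chromosome gains and losses stand out in dense SNP-array data.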

Millions of embryos are screened each year using PGT-A, about 60% of all IVF embryos in the US. 

Klaus Wiemer is the laboratory director at POMA Fertility near Seattle. He conducted this study independently, without informing Genomic Prediction. There are ~3000 embryos in the dataset, all biopsied at POMA, with samples allocated to three testing labs (A, B, C) using the two different methods. The family demographics (e.g., maternal age) were similar in all three groups. Lab B is Genomic Prediction; A and C are two of the largest IVF testing labs in the world, both using NGS.

The results imply lower false-positive rates, lower false-negative rates, and higher accuracy overall from our methods. These lead to a significantly higher pregnancy success rate.

The new technology has the potential to help millions of families all over the world.
 
Comparison of Outcomes from Concurrent Use of 3 Different PGT-A Laboratories 
Oct 18 2021 annual meeting of the American Society for Reproductive Medicine (ASRM) 
Klaus Wiemer, PhD

While the population incidence of Down syndrome (i.e., among babies born) is only ~1 percent, the incidence of aneuploidy in embryos is much higher. Aneuploidy is more likely to result in a failed pregnancy than in the birth of a Down syndrome baby -- e.g., because the embryo fails to implant, or does not develop properly during the pregnancy. 

False positives mean fewer healthy embryos are available for transfer, while false negatives mean that problematic embryos are transferred. Both kinds of error affect the overall pregnancy success rate.
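A toy model makes the point concrete. The error rates and embryo counts below are purely illustrative assumptions, not numbers from the study:

# Toy model (illustrative numbers only) of how screening error rates feed into
# expected outcomes per embryo cohort.  Rates below are assumptions, not data.
def expected_outcomes(n_embryos=6, euploid_frac=0.5,
                      false_neg=0.05,   # aneuploid embryo called normal
                      false_pos=0.10):  # euploid embryo called abnormal
    euploid = n_embryos * euploid_frac
    aneuploid = n_embryos - euploid
    transferable   = euploid * (1 - false_pos)    # healthy embryos passing the screen
    discarded_good = euploid * false_pos          # healthy embryos lost to false positives
    bad_transfers  = aneuploid * false_neg        # problematic embryos passing the screen
    return transferable, discarded_good, bad_transfers

for fp, fn in [(0.10, 0.05), (0.25, 0.15)]:
    ok, lost, bad = expected_outcomes(false_pos=fp, false_neg=fn)
    print(f"FP={fp:.2f} FN={fn:.2f} -> usable euploid: {ok:.2f}, "
          f"euploid discarded: {lost:.2f}, aneuploid passed: {bad:.2f}")

Lower false-positive and false-negative rates leave more genuinely healthy embryos available and fewer problematic transfers, which is how screening accuracy feeds into the pregnancy success rate.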