Saturday, November 27, 2021

Social and Educational Mobility: Denmark vs USA (James Heckman)

Despite generous social programs such as free pre-K education, free college, and massive transfer payments, Denmark is similar to the US in key measures of inequality, such as educational outcomes and cognitive test scores. 

While transfer payments can equalize, to some degree, disposable income, they do not seem to be able to compensate for large family effects on individual differences in development. 

These observations raise the following questions: 

1. What is the best-case scenario for the US if all progressive government programs are implemented: support for child development, free high-quality K-12 education, free college, etc.?

2. What is the causal mechanism behind stubborn inequality of outcomes that is transmitted within families? 

Re #2: Heckman and collaborators focus on environmental factors, but do not (as far as I can tell) discuss genetic transmission. We already know that polygenic scores are correlated with the education and income levels of parents, and (from adoption studies) that children tend to resemble their biological parents much more strongly than their adoptive parents. These results suggest that genetic transmission of inequality may dominate environmental transmission.

Note: Denmark is very homogeneous in ancestry, and the data presented in these studies (e.g., polygenic scores and social mobility) are also drawn from European-ancestry cohorts. The focus here is not on ethnicity or group differences between ancestry groups. The focus is on social and educational mobility within European-ancestry populations, with or without generous government programs supporting free college education, daycare, pre-K, etc.

Lessons for Americans from Denmark about inequality and social mobility 
James Heckman and Rasmus Landersø 
Abstract Many progressive American policy analysts point to Denmark as a model welfare state with low levels of income inequality and high levels of income mobility across generations. It has in place many social policies now advocated for adoption in the U.S. Despite generous Danish social policies, family influence on important child outcomes in Denmark is about as strong as it is in the United States. More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects. Powerful forces not easily mitigated by Danish-style welfare state programs operate in both countries.
Also discussed in this episode of EconTalk podcast. Russ does not ask the obvious question about disentangling family environment from genetic transmission of inequality.

The figure below appears in Game Over: Genomic Prediction of Social Mobility. It shows SNP-based polygenic score and life outcome (socioeconomic index, on vertical axis) in four longitudinal cohorts, one from New Zealand (Dunedin) and three from the US. Each cohort (varying somewhat in size) has thousands of individuals, ~20k in total (all of European ancestry). The points displayed are averages over bins containing 10-50 individuals. For each cohort, the individuals have been grouped by childhood (family) socioeconomic status. Social mobility can be predicted from polygenic score. Note that higher SES families tend to have higher polygenic scores on average -- which is what one might expect from a society that is at least somewhat meritocratic. The cohorts have not been used in training -- this is true out-of-sample validation. Furthermore, the four cohorts represent different geographic regions (even different continents) and individuals born in different decades.
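The analysis behind such figures is easy to reproduce on synthetic data. The sketch below (all effect sizes invented for illustration, not taken from the paper) groups individuals by childhood SES and, rather than plotting binned averages, simply checks that outcome rises with polygenic score within each SES group:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic cohort (hypothetical effect sizes): the polygenic score (PGS)
# is correlated with childhood family SES, and adult outcome depends on both.
ses = rng.normal(size=n)                               # childhood family SES
pgs = 0.3 * ses + rng.normal(size=n)                   # polygenic score
outcome = 0.4 * pgs + 0.3 * ses + rng.normal(size=n)   # adult socioeconomic index

# Group individuals by childhood SES tercile, as in the figure.
lo, hi = np.quantile(ses, [1/3, 2/3])
groups = {"low SES": ses < lo,
          "mid SES": (ses >= lo) & (ses < hi),
          "high SES": ses >= hi}

for label, mask in groups.items():
    # Within each SES group, outcome still rises with PGS,
    # and mean PGS itself rises with family SES.
    r = np.corrcoef(pgs[mask], outcome[mask])[0, 1]
    print(f"{label}: mean PGS = {pgs[mask].mean():+.2f}, corr(PGS, outcome) = {r:.2f}")
```

In this toy setup the positive within-group correlation is the analog of the upward-sloping binned curves, and the rising group-mean PGS is the analog of the SES-score correlation noted above.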

The figure below appears in More on SES and IQ.

Where is the evidence for environmental effects described above in Heckman's abstract: "More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects"? Do parents not seek these advantages for their adopted children as well as for their biological children? Or is there an entirely different causal mechanism based on shared DNA?



Sunday, November 14, 2021

Has Hawking's Black Hole Information Paradox Been Resolved?

In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore, the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form ψ[g,M], where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in location of the BH is of order the initial mass squared, far larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.
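The M² spread follows from a quick random-walk estimate in Planck units (ħ = c = G = 1); this is a sketch of the scaling only, not a careful derivation:

```latex
\begin{aligned}
T &\sim 1/M, \qquad N \sim M/T \sim M^2 
  && \text{(Hawking temperature; number of emitted quanta)} \\
\Delta p &\sim \sqrt{N}\, T \sim 1, \qquad v \sim \Delta p / M \sim 1/M 
  && \text{(random-walk recoil momentum and velocity)} \\
t_{\rm evap} &\sim M^3 \;\;\Rightarrow\;\; \Delta x \sim v\, t_{\rm evap} \sim M^2 \gg r_s \sim M 
  && \text{(final spread in BH position)}
\end{aligned}
```

Since the spread Δx ~ M² vastly exceeds the Schwarzschild radius r_s ~ M, the final state necessarily superposes macroscopically distinct geometries.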

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states in a large Hilbert space Y, the density matrix 

 ρ(X) = Tr_{Y\X} |ψ⟩⟨ψ| 

describing (small) region X will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.
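The near-thermal character of ρ(X) is easy to verify numerically: take a random pure state in a large Hilbert space and trace over the complement of a small subsystem, and the reduced density matrix comes out exponentially close to maximally mixed. A minimal sketch (dimensions chosen small enough to run quickly):

```python
import numpy as np

rng = np.random.default_rng(1)

dX, dC = 16, 1024   # dimension of small subsystem X and of its complement
# Random pure state in Y = X (x) complement (Haar-like: complex Gaussian, normalized),
# stored as a dX x dC matrix so the partial trace is a simple matrix product.
psi = rng.normal(size=(dX, dC)) + 1j * rng.normal(size=(dX, dC))
psi /= np.linalg.norm(psi)

# rho(X): trace |psi><psi| over the complement of X.
rho = psi @ psi.conj().T

# Entanglement entropy of X, compared to its maximum ln(dX).
evals = np.linalg.eigvalsh(rho)
S = -np.sum(evals * np.log(np.clip(evals, 1e-30, None)))
print(f"S(X) = {S:.4f}, maximum ln(dX) = {np.log(dX):.4f}")
```

The entropy deficit from maximal is of order dX/(2 dC) (Page's result), i.e., the reduced state is thermal up to corrections exponentially small in the number of traced-out degrees of freedom -- exactly the situation described in the text.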

These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 

2. BH evaporation leads to macroscopic superposition states; why this invalidates No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.

3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention it was understood that the approximation of fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces which represent distinct geometries must be considered. In the "quantum hair" results described at bottom, it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 

This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 

The discussion below suggests that each internal BH state described by the coefficients { c_n } results in a different final radiation state -- i.e., the process can be unitary.

Note Added

In the comments David asks about the results described in this 2020 Quanta article, The Most Famous Paradox in Physics Nears Its End.

I thought about discussing those results in the post, but 1. it was already long, and 2. they are using a very different AdS approach. 

However, Raju does discuss these papers in his review. 

Most of the theorists in the Quanta article accept the basic formulation of the information paradox, so they are surprised to see indications of unitary black hole evaporation. As I mentioned in the post, I don't think the paradox itself is well-established, so I am not surprised. 

I think that the quantum hair results are important because they show explicitly that the internal state of the hole affects the quantum state of the graviton field, which then influences the Hawking radiation production. 

It was pointed out by Papadodimas and Raju, and also in my 2013 paper arXiv:1308.5686, that tiny correlations in the radiation density matrix could purify it. That is, the Hawking density matrix plus exp(-S) corrections (which everyone expects are there) could result from a pure state in the large Hilbert space Y, which has dimensionality ~ exp(+S). This is related to what I wrote in the post: start with a pure state in Y and trace over the complement of X. The resulting ρ(X) is exponentially close to thermal (maximally mixed) even though it came from a pure state.

Wednesday, November 10, 2021

Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity (published version)

This is the published version of our recent paper. See previous discussion of the arXiv preprint: Finitism and Physics.
Physics Letters B Volume 823, 10 December 2021, 136763 
Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity 
Xavier Calmet and Stephen D.H. Hsu 
We show that the precision of an angular measurement or rotation (e.g., on the orientation of a qubit or spin state) is limited by fundamental constraints arising from quantum mechanics and general relativity (gravitational collapse). The limiting precision is 1/r in Planck units, where r is the physical extent of the (possibly macroscopic) device used to manipulate the spin state. This fundamental limitation means that spin states cannot be experimentally distinguished from each other if they differ by a sufficiently small rotation. Experiments cannot exclude the possibility that the space of quantum state vectors (i.e., Hilbert space) is fundamentally discrete, rather than continuous. We discuss the implications for finitism: does physics require infinity or a continuum?
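Plugging in numbers gives a feel for the scale of the limit. A back-of-the-envelope check, taking the bound as δθ ~ l_P / r per the abstract (the 1-meter device size is an illustrative choice, not from the paper):

```python
import math

# CODATA values of the fundamental constants (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

r = 1.0                        # physical extent of the device: 1 m (assumed)
dtheta = l_planck / r          # limiting angular precision, radians
n_states = 2 * math.pi / dtheta  # distinguishable orientations in a full turn

print(f"delta-theta ~ {dtheta:.1e} rad; ~{n_states:.1e} distinguishable angles")
```

A finite (if astronomically large) number of experimentally distinguishable orientations is exactly what leaves room for a fundamentally discrete Hilbert space.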

In the revision we edited the second paragraph below to clarify the history regarding Hilbert's program, Gödel, and the status of the continuum in analysis. The continuum was quite controversial at the time and was one of the primary motivations for Hilbert's axiomatization. There is a kind of modern middle-brow view that epsilon and delta proofs are sufficient to resolve the question of rigor in analysis, but this ignores far more fundamental problems that forced Hilbert, von Neumann, Weyl, etc. to resort to logic and set theory.

In the early 20th century little was known about neuroscience (i.e., our finite brains made of atoms), and it had not been appreciated that the laws of physics themselves might contain internal constraints that prevent any experimental test of infinitely continuous structures. Hence we can understand Weyl's appeal to human intuition as a basis for the mathematical continuum (Platonism informed by Nature; Gödel was also a kind of Platonist), even if today it appears implausible. Now we suspect that our minds are simply finite machines and nothing more, and that Nature itself does not require a continuum -- i.e., it can be simulated perfectly well with finitary processes.

It may come as a surprise to physicists that infinity and the continuum are even today the subject of debate in mathematics and the philosophy of mathematics. Some mathematicians, called finitists, accept only finite mathematical objects and procedures [30]. The fact that physics does not require infinity or a continuum is an important empirical input to the debate over finitism. For example, a finitist might assert (contra the Platonist perspective adopted by many mathematicians) that human brains built from finite arrangements of atoms, and operating under natural laws (physics) that are finitistic, are unlikely to have trustworthy intuitions concerning abstract concepts such as the continuum. These facts about the brain and about physical laws stand in contrast to intuitive assumptions adopted by many mathematicians. For example, Weyl (Das Kontinuum [26], [27]) argues that our intuitions concerning the continuum originate in the mind's perception of the continuity of space-time.
There was a concerted effort beginning in the 20th century to place infinity and the continuum on a rigorous foundation using logic and set theory. As demonstrated by Gödel, Hilbert's program of axiomatization using finitary methods (originally motivated, in part, by the continuum in analysis) could not succeed. Opinions are divided on modern approaches which are non-finitary. For example, the standard axioms of Zermelo-Fraenkel (ZFC) set theory applied to infinite sets lead to many counterintuitive results such as the Banach-Tarski Paradox: given any two solid objects, the cut pieces of either one can be reassembled into the other [28]. When examined closely all of the axioms of ZFC (e.g., Axiom of Choice) are intuitively obvious if applied to finite sets, with the exception of the Axiom of Infinity, which admits infinite sets. (Infinite sets are inexhaustible, so application of the Axiom of Choice leads to pathological results.) The Continuum Hypothesis, which proposes that there is no cardinality strictly between that of the integers and reals, has been shown to be independent (neither provable nor disprovable) in ZFC [29]. Finitists assert that this illustrates how little control rigorous mathematics has on even the most fundamental properties of the continuum.
Weyl was never satisfied that the continuum and classical analysis had been placed on a solid foundation.
Das Kontinuum (Stanford Encyclopedia of Philosophy)
Another mathematical “possible” to which Weyl gave a great deal of thought is the continuum. During the period 1918–1921 he wrestled with the problem of providing the mathematical continuum—the real number line—with a logically sound formulation. Weyl had become increasingly critical of the principles underlying the set-theoretic construction of the mathematical continuum. He had come to believe that the whole set-theoretical approach involved vicious circles[11] to such an extent that, as he says, “every cell (so to speak) of this mighty organism is permeated by contradiction.” In Das Kontinuum he tries to overcome this by providing analysis with a predicative formulation—not, as Russell and Whitehead had attempted, by introducing a hierarchy of logically ramified types, which Weyl seems to have regarded as excessively complicated—but rather by confining the comprehension principle to formulas whose bound variables range over just the initial given entities (numbers). Accordingly he restricts analysis to what can be done in terms of natural numbers with the aid of three basic logical operations, together with the operation of substitution and the process of “iteration”, i.e., primitive recursion. Weyl recognized that the effect of this restriction would be to render unprovable many of the central results of classical analysis—e.g., Dirichlet’s principle that any bounded set of real numbers has a least upper bound[12]—but he was prepared to accept this as part of the price that must be paid for the security of mathematics.
As Weyl saw it, there is an unbridgeable gap between intuitively given continua (e.g. those of space, time and motion) on the one hand, and the “discrete” exact concepts of mathematics (e.g. that of natural number[13]) on the other. The presence of this chasm meant that the construction of the mathematical continuum could not simply be “read off” from intuition. It followed, in Weyl’s view, that the mathematical continuum must be treated as if it were an element of the transcendent realm, and so, in the end, justified in the same way as a physical theory. It was not enough that the mathematical theory be consistent; it must also be reasonable.
Das Kontinuum embodies Weyl’s attempt at formulating a theory of the continuum which satisfies the first, and, as far as possible, the second, of these requirements. In the following passages from this work he acknowledges the difficulty of the task:
… the conceptual world of mathematics is so foreign to what the intuitive continuum presents to us that the demand for coincidence between the two must be dismissed as absurd. (Weyl 1987, 108)
… the continuity given to us immediately by intuition (in the flow of time and of motion) has yet to be grasped mathematically as a totality of discrete “stages” in accordance with that part of its content which can be conceptualized in an exact way. 
See also The History of the Planck Length and the Madness of Crowds.

Tuesday, November 09, 2021

The Balance of Power in the Western Pacific and the Death of the Naval Surface Ship

Recent satellite photos suggest that PLARF (People's Liberation Army Rocket Forces) have been testing against realistic moving ship targets in the deserts of the northwest. Note the ship model is on rails in the second photo below. Apparently there are over 30km of rail lines, allowing the simulation of evasive maneuvers by an aircraft carrier (third figure below).

Large surface ships such as aircraft carriers are easy to detect (e.g., satellite imaging via radar sensors), and missiles (especially those with maneuver capability) are very difficult to stop. Advances in AI / machine learning tend to favor missile targeting, not defense of carriers. 

The key capability is autonomous final target acquisition by the missile at a range of tens of km -- i.e., the distance the ship can move during missile flight time after launch. State of the art air to air missiles already do this in BVR (Beyond Visual Range) combat. Note, they are much smaller than anti-ship missiles, with presumably much smaller radar seekers, yet are pursuing a smaller, faster, more maneuverable target (enemy aircraft). 

It seems highly likely that the technical problem of autonomous targeting of a large surface ship during final missile approach was solved some time ago by the PLARF. 

With this capability in place one only has to localize the carrier to within a few tens of km for initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, etc. Obviously, if a satellite retains coverage of the ship during the entire attack and can communicate with the missile, even this smart final targeting is not required.
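The "tens of km" figure follows from simple kinematics. The speeds and launch range below are illustrative assumptions, not sourced numbers:

```python
# Illustrative, assumed values -- not from any cited source.
missile_speed = 5 * 343.0    # ~Mach 5 referenced to sea-level sound speed, m/s
missile_range = 1_500e3      # launch distance, m
ship_speed = 30 * 0.514      # carrier at 30 knots, in m/s

flight_time = missile_range / missile_speed   # time from launch to arrival
max_drift = ship_speed * flight_time          # how far the ship can move meanwhile

print(f"flight time ~ {flight_time/60:.0f} min, ship drift ~ {max_drift/1e3:.0f} km")
```

Under these assumptions the ship moves on the order of 10-15 km during the missile's flight, so a launch-time fix good to a few tens of km plus a seeker with tens-of-km acquisition range closes the loop.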

This is what a ship looks like to Synthetic Aperture Radar (SAR) from Low Earth Orbit (LEO).  PRC has had a sophisticated system (Yaogan) in place for almost a decade, and continues to launch new satellites for this purpose.

See LEO SAR, hypersonics, and the death of the naval surface ship:

In an earlier post we described how sea blockade (e.g., against Japan or Taiwan) can be implemented using satellite imaging and missiles, drones, AI/ML. Blue water naval dominance is not required. 
PLAN/PLARF can track every container ship and oil tanker as they approach Kaohsiung or Nagoya. All are in missile range -- sitting ducks. Naval convoys will be just as vulnerable. 
Sink one tanker or cargo ship, or just issue a strong warning, and no shipping company in the world will be stupid enough to try to run the blockade. 

But, But, But, !?! ...
USN guy: We'll just hide the carrier from the satellite and missile seekers using, you know, countermeasures! [Aside: don't cut my carrier budget!] 
USAF guy: Uh, the much smaller AESA/IR seeker on their AAM can easily detect an aircraft from much longer ranges. How will you hide a huge ship? 
USN guy: We'll just shoot down the maneuvering hypersonic missile using, you know, methods. [Aside: don't cut my carrier budget!] 
Missile defense guy: Can you explain to us how to do that? If the incoming missile maneuvers we have to adapt the interceptor trajectory (in real time) to where we project the missile to be after some delay. But we can't know its trajectory ahead of time, unlike for a ballistic (non-maneuvering) warhead.
More photos and maps in this 2017 post.

Monday, November 01, 2021

Preimplantation Genetic Testing for Aneuploidy: New Methods and Higher Pregnancy Rates

[ NOTE ADDED NOVEMBER 12 2021: Research seminar videos from ASRM are embargoed until 12/31. So this video will not be available until then. ]

This talk describes a study of PGT-A (Preimplantation Genetic Testing - Aneuploidy, i.e., testing for chromosomal normality) using 2 different methods: NGS vs the new SNP array platform (LifeView) developed by my startup Genomic Prediction. 

The SNP array platform allows very accurate genotyping of each embryo at ~1 million locations in the genome, and the subsequent bioinformatic analysis produces a much more accurate prediction of chromosomal normality than the older methods. 

Millions of embryos are screened each year using PGT-A, about 60% of all IVF embryos in the US. 

Klaus Wiemer is the laboratory director for POMA Fertility near Seattle. He conducted this study independently, without informing Genomic Prediction. There are ~3000 embryos in the dataset, all biopsied at POMA, with samples allocated to three testing labs A, B, C using the two different methods. The family demographics (e.g., maternal age) were similar in all three groups. Lab B is Genomic Prediction; A and C are two of the largest IVF testing labs in the world, using NGS.

The results imply lower false-positive rates, lower false-negative rates, and higher accuracy overall from our methods. These lead to a significantly higher pregnancy success rate.

The new technology has the potential to help millions of families all over the world.
Comparison of Outcomes from Concurrent Use of 3 Different PGT-A Laboratories 
Oct 18 2021 annual meeting of the American Society for Reproductive Medicine (ASRM) 
Klaus Wiemer, PhD

While the population incidence of Down syndrome (i.e., among babies born) is only ~1 percent, the incidence of aneuploidy in embryos is much higher. Aneuploidy is more likely to result in a failed pregnancy than in the birth of a Down syndrome baby -- e.g., because the embryo fails to implant, or does not develop properly during the pregnancy. 

False positives mean fewer healthy embryos available for transfer, while false negatives mean that problematic embryos are transferred. Both types of error reduce the overall pregnancy success rate.
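The effect of screening error on outcomes can be made concrete with a toy calculation. All rates below are hypothetical, invented for illustration -- they are not the study's numbers:

```python
def transfer_pool(n, euploid_frac, fp_rate, fn_rate):
    """Expected composition of the transfer pool after screening.

    fp_rate: P(euploid embryo wrongly flagged abnormal)  -> discarded
    fn_rate: P(aneuploid embryo wrongly passed as normal) -> transferred
    """
    euploid = n * euploid_frac
    aneuploid = n - euploid
    good = euploid * (1 - fp_rate)   # healthy embryos available for transfer
    bad = aneuploid * fn_rate        # problematic embryos slipping through
    return good, bad

# Two hypothetical platforms with different error rates (illustrative only).
for name, fp, fn in [("platform A", 0.15, 0.10), ("platform B", 0.05, 0.03)]:
    good, bad = transfer_pool(n=10, euploid_frac=0.5, fp_rate=fp, fn_rate=fn)
    frac_good = good / (good + bad)  # chance a transferred embryo is euploid
    print(f"{name}: {good:.2f} healthy embryos available, "
          f"{frac_good:.0%} of transfer pool euploid")
```

The more accurate platform wins on both margins at once: more healthy embryos survive screening, and a larger fraction of what gets transferred is actually euploid, which is the mechanism behind a higher pregnancy success rate.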

Sunday, October 31, 2021

Demis Hassabis: Using AI to accelerate scientific discovery (protein folding) + Bonus: Bruno Pontecorvo


Recent talk (October 2021) by Demis Hassabis on the use of AI in scientific research. Second half of the talk is focused on protein folding. 

Below is part 2, by the AlphaFold research lead, which has more technical details.

Bonus: My former Oregon colleague David Strom recommended a CERN lecture by Frank Close on his biography of physicist (and atomic spy?) Bruno Pontecorvo.  David knew that The Battle of Algiers, which I blogged about recently, was directed by Gillo Pontecorvo, Bruno's brother.

Below is the closest thing I could find on YouTube -- it has better audio and video quality than the CERN talk. 

The amazing story of Bruno Pontecorvo involves topics such as the first nuclear reactions and reactors (work with Enrico Fermi), the Manhattan Project, neutrino flavors and oscillations, supernovae, atomic espionage, the KGB, Kim Philby, and the quote: 
I want to be remembered as a great physicist, not as your fucking spy!

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)

Sadly, the hypothesis described below is very plausible. 

One exception: new tools or technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by real insight or capability to produce real results.
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research -- with probability distributions for the accuracy of experimental results, mechanisms for belief updating by individual scientists, crowd behavior, bounded cognition, etc. -- can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
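A minimal sketch of such a toy model (all mechanisms and parameter values are hypothetical, chosen only to illustrate that strong conformity can lock a field into a false consensus):

```python
import random

def simulate(n_agents=200, steps=4000, noise=0.45, conformity=0.9, seed=0):
    """Toy model of a field studying a binary question whose true answer
    is True. Each step, one agent runs a noisy experiment (wrong with
    probability `noise`) but, with probability `conformity`, adopts the
    current majority view instead (bounded cognition / crowd behavior).
    Returns the final fraction of agents holding the true belief."""
    rng = random.Random(seed)
    beliefs = [rng.random() < 0.5 for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        evidence = rng.random() > noise          # True = correct result
        majority = sum(beliefs) > n_agents / 2   # current consensus
        beliefs[i] = majority if rng.random() < conformity else evidence
    return sum(beliefs) / n_agents

# Independent, low-noise scientists converge on the truth ...
print(simulate(noise=0.1, conformity=0.0))
# ... but with strong conformity the initial (random) majority tends to
# lock in, so the field can settle on a false consensus.
print(simulate(noise=0.45, conformity=0.95))
```

Varying `noise` and `conformity` maps out the regimes described above: low-noise, low-conformity fields self-correct, while high-conformity fields can sustain majority-false equilibria indefinitely.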
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.

Thursday, October 28, 2021

The Night Porter (1974)

I first watched The Night Porter while in graduate school, and came across it again last weekend as a byproduct of ordering HBOMax to see the new Dune movie. There are quite a few film classics buried below the top level HBOMax recommendation engine -- you just have to search a bit. See also here on Kanopy. 

Opinions of the film vary widely. In my view it's a masterpiece: the performances by Charlotte Rampling and Dirk Bogarde are incredible, although I must say that I find the film very difficult to watch. 

Rampling portrays the Bene Gesserit Reverend Mother Gaius Helen Mohiam in the new Dune -- see director Denis Villeneuve's analysis of her Gom Jabbar scene.  

I've always wondered about the origins of The Night Porter and how it got made. The material is sensationalistic, even borders on exploitation, but the treatment has psychological and cinematic depth. 

This video contains remarkable interviews with the director Liliana Cavani, writer Italo Moscati, and Rampling. Short clips are interspersed with the interviews so you can get a sense of the film if you've never seen it. Unfortunately, these clips caused the video to be age restricted on YouTube so you have to click through and log in to your Google user account to view it.

Bogarde is not interviewed in the video, but his Wikipedia bio notes that
Bogarde was one of the first Allied officers in April 1945 to reach the Bergen-Belsen concentration camp in Germany, an experience that had the most profound effect on him and about which he found it difficult to speak for many years afterward.[6] [Video of the interview below.]
"I think it was on the 13th of April—I'm not quite sure what the date was" ... "when we opened up Belsen Camp, which was the first concentration camp any of us had seen, we didn't even know what they were, we'd heard vague rumours that they were. I mean nothing could be worse than that. The gates were opened and then I realised that I was looking at Dante's Inferno, I mean ... I ... I still haven't seen anything as dreadful. And never will. And a girl came up who spoke English, because she recognised one of the badges, and she ... her breasts were like, sort of, empty purses, she had no top on, and a pair of man's pyjamas, you know, the prison pyjamas, and no hair. But I knew she was a girl because of her breasts, which were empty. She was I suppose, oh I don't know, twenty four, twenty five, and we talked, and she was, you know, so excited and thrilled, and all around us there were mountains of dead people, I mean mountains of them, and they were slushy, and they were slimy, so when you walked through them ... or walked—you tried not to, but it was like .... well you just walked through them. 
... there was a very nice British MP [Royal Military Police], and he said 'Don't have any more, come away, come away sir, if you don't mind, because they've all got typhoid and you'll get it, you shouldn't be here swanning-around' and she saw in the back of the jeep, the unexpired portion of the daily ration, wrapped in a piece of the Daily Mirror, and she said could she have it, and he" [the Military Police] "said 'Don't give her food, because they eat it immediately and they die, within ten minutes', but she didn't want the food, she wanted the piece of Daily Mirror—she hadn't seen newsprint for about eight years or five years, whatever it was she had been in the camp for. ... she was Estonian. ... that's all she wanted. She gave me a big kiss, which was very moving. The corporal" [Military Police] "was out of his mind and I was just dragged off. I never saw her again, of course she died. I mean, I gather they all did. But, I can't really describe it very well, I don't really want to. I went through some of the huts and there were tiers and tiers of rotting people, but some of them who were alive underneath the rot, and were lifting their heads and trying .... trying to do the victory thing. That was the worst."[4]
In her interview Rampling notes that it was Bogarde who insisted that she be given the role of Lucia.


Friday, October 22, 2021

The Principles of Deep Learning Theory - Dan Roberts IAS talk


This is a nice talk that discusses, among other things, subleading 1/width corrections to the infinite-width limit of neural networks. I was expecting someone to work out these corrections when I wrote the post on NTK and the large-width limit at the link below. Apparently, the infinite-width limit does not capture the behavior of realistic neural nets; the desired properties emerge only at the first nontrivial order in the expansion. Roberts claims that when the depth-to-width ratio r is small but nonzero one can characterize network dynamics in a controlled expansion, whereas for r > 1 it becomes a problem of strong dynamics. 

The talk is based on the book
The Principles of Deep Learning Theory 
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
Dan Roberts web page

This essay looks interesting:
Why is AI hard and Physics simple? 
We discuss why AI is hard and why physics is simple. We discuss how physical intuition and the approach of theoretical physics can be brought to bear on the field of artificial intelligence and specifically machine learning. We suggest that the underlying project of machine learning and the underlying project of physics are strongly coupled through the principle of sparsity, and we call upon theoretical physicists to work on AI as physicists. As a first step in that direction, we discuss an upcoming book on the principles of deep learning theory that attempts to realize this approach.

May 2021 post: Neural Tangent Kernels and Theoretical Foundations of Deep Learning
Large width seems to provide a limiting case (analogous to the large-N limit in gauge theory) in which rigorous results about deep learning can be proved. ... 
The overparametrized (width ~ w^2) network starts in a random state and by concentration of measure this initial kernel K is just the expectation, which is the NTK. Because of the large number of parameters the effect of training (i.e., gradient descent) on any individual parameter is 1/w, and the change in the eigenvalue spectrum of K is also 1/w. It can be shown that the eigenvalue spectrum is positive and bounded away from zero, and this property does not change under training. Also, the evolution of f is linear in K up to corrections which are suppressed by 1/w. Hence evolution follows a convex trajectory and can achieve global minimum loss in a finite (polynomial) time. 
The parametric 1/w expansion may depend on quantities such as the smallest NTK eigenvalue k: the proof might require k >> 1/w or wk large. 
In the large w limit the function space has such high dimensionality that any typical initial f is close (within a ball of radius 1/w?) to an optimal f. These properties depend on specific choice of loss function.
See related remarks: ICML notes (2018).
It may turn out that the problems on which DL works well are precisely those in which the training data (and underlying generative processes) have a hierarchical structure which is sparse, level by level. Layered networks perform a kind of coarse graining (renormalization group flow): first layers filter by feature, subsequent layers by combinations of features, etc. But the whole thing can be understood as products of sparse filters, and the performance under training is described by sparse performance guarantees (ReLU = thresholded penalization?). Given the inherent locality of physics (atoms, molecules, cells, tissue; atoms, words, sentences, ...) it is not surprising that natural phenomena generate data with this kind of hierarchical structure.
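The concentration-of-measure behavior described in the excerpt above (the initial kernel is close to its expectation, with fluctuations suppressed as width grows) can be checked numerically. Below is a minimal sketch for a one-hidden-layer ReLU network with analytic gradients; this is my own toy setup, not the exact construction of the NTK papers:

```python
import numpy as np

def ntk_entry(x1, x2, width, rng):
    """Empirical NTK entry K(x1, x2) at a random initialization of the
    one-hidden-layer ReLU net f(x) = (1/sqrt(width)) * a . relu(W x).
    Computed from analytic gradients with respect to a and W."""
    d = x1.shape[0]
    W = rng.standard_normal((width, d))
    a = rng.standard_normal(width)
    u1, u2 = W @ x1, W @ x2
    # sum over hidden units of (df/da_i)(x1)(df/da_i)(x2) + (df/dW_i terms)
    term_a = np.maximum(u1, 0) @ np.maximum(u2, 0)
    term_W = (a**2 * (u1 > 0) * (u2 > 0)).sum() * (x1 @ x2)
    return (term_a + term_W) / width

rng = np.random.default_rng(0)
x1, x2 = np.array([1.0, 0.0]), np.array([0.6, 0.8])

stds = {}
for w in (64, 1024, 16384):
    samples = [ntk_entry(x1, x2, w, rng) for _ in range(200)]
    stds[w] = float(np.std(samples))
    print(w, stds[w])  # init fluctuations shrink roughly like 1/sqrt(width)
```

The mean of K is width-independent, while its scatter across initializations falls off with width, consistent with the kernel concentrating on the NTK in the large-width limit.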

Thursday, October 21, 2021

PRC Hypersonic Missiles, FOBS, and Qian Xuesen

There is a deep connection between the images above and below. Qian Xuesen proposed the boost glide trajectory while still at Caltech.

Background on recent PRC test of FOBS/glider hypersonic missile/vehicle. More from Air Force Secretary Frank Kendall. Detailed report on PRC hypersonic systems development. Reuters: Rocket failure mars U.S. hypersonic weapon test (10/21/21)

The situation today is radically different from when Qian first returned to China. In a decade or two China may have ~10x as many highly able scientists and engineers as the US, comparable to the entire world (ex-China) combined [1]. Already the depth of human capital in PRC is apparent to anyone closely watching their rate of progress (first derivative) in space (Mars/lunar lander, space station, LEO), advanced weapons systems (stealth jets, radar, missiles, jet engines), AI/ML, alternative energy, materials science, nuclear energy, fundamental and applied physics, consumer electronics, drones, advanced manufacturing, robotics, etc. etc. The development of a broad infrastructure base for advanced manufacturing and R&D also contributes to this progress, of course.

[1] It is trivial to obtain this ~10x estimate: PRC population is ~4x US population, a larger fraction of PRC students pursue STEM degrees, and a larger proportion of PRC students reach elite levels of math proficiency, e.g., PISA Level 6.
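For concreteness, a back-of-envelope version of the footnote's arithmetic. The population ratio is from the footnote; the other two relative rates are illustrative assumptions, not measured values:

```python
# Hypothetical inputs chosen only to show how the factors multiply.
pop_ratio   = 4.0   # PRC / US population (~4x, per the footnote)
stem_ratio  = 1.5   # assumed: relative rate of pursuing STEM degrees
elite_ratio = 1.7   # assumed: relative rate of reaching e.g. PISA Level 6

print(pop_ratio * stem_ratio * elite_ratio)  # -> 10.2, i.e. ~10x
```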

"It was the stupidest thing this country ever did," former Navy Secretary Dan Kimball later said, according to Aviation Week. "He was no more a Communist than I was, and we forced him to go." ... 
Qian Xuesen, a former Caltech rocket scientist who helped establish the Jet Propulsion Laboratory before being deported in 1955 on suspicion of being a Communist and who became known as the father of China's space and missile programs, has died. He was 98. ... 
Qian, a Chinese-born aeronautical engineer educated at Caltech and the Massachusetts Institute of Technology, was a protege of Caltech's eminent professor Theodore von Karman, who recognized him as an outstanding mathematician and "undisputed genius."

Below, a documentary on Qian and a movie-length biopic (English subtitles).

Tuesday, October 19, 2021

Quantum Hair from Gravity

New paper!
Quantum Hair from Gravity 
Xavier Calmet, Roberto Casadio, Stephen D. H. Hsu, and Folkert Kuipers 
We explore the relationship between the quantum state of a compact matter source and of its asymptotic graviton field. For a matter source in an energy eigenstate, the graviton state is determined at leading order by the energy eigenvalue. Insofar as there are no accidental energy degeneracies there is a one to one map between graviton states on the boundary of spacetime and the matter source states. A typical semiclassical matter source results in an entangled asymptotic graviton state. We exhibit a purely quantum gravitational effect which causes the subleading asymptotic behavior of the graviton state to depend on the internal structure of the source. These observations establish the existence of ubiquitous quantum hair due to gravitational effects.
From the introduction:
Classical no-hair theorems limit the information that can be obtained about the internal state of a black hole by outside observers [1]. External features ("hair") of black hole solutions in general relativity are determined by specific conserved quantities such as mass, angular momentum, and charge. In this letter we investigate how the situation changes when both the matter source (black hole interior state) and the gravitational field itself are quantized. 
We begin by showing that the graviton state associated with an energy eigenstate source is determined, at leading order, by the energy eigenvalue of the source. These graviton states can be expressed as coherent states of non-propagating graviton modes, with explicit dependence on the source energy eigenvalue. Semiclassical matter sources (e.g., a star or black hole) are superpositions of energy eigenstates with support in some band of energies, and produce graviton states that are superpositions of the coherent states. ... We discuss implications for black hole information and holography in the conclusions.
General relativity relates the spacetime metric to the energy-momentum distribution of matter, but only applies when both the metric (equivalently, the gravitational field) and matter sources are semiclassical. A theory of quantum gravity is necessary to relate the quantum state of the gravitational field to the quantum state of the matter source. However, as we show in section 2 one can deduce this relationship either from a simple gedanken construction or from careful study of how the Gauss law affects quantization. It turns out the latter is common to both ordinary gauge theory (cf Coulomb potential) and gravity. 

Our results have important consequences for black hole information: they allow us to examine deviations from the semiclassical approximation used to calculate Hawking radiation and they show explicitly that the quantum spacetime of black hole evaporation is a complex superposition state.
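Schematically, the structure described above can be written as follows (a sketch based on the abstract, not the paper's notation). An energy eigenstate source fixes its asymptotic graviton state at leading order, so a semiclassical superposition of energy eigenstates yields an entangled matter-graviton state:

```latex
% source in an energy eigenstate -> graviton coherent state fixed by E_n
|E_n\rangle \otimes |0\rangle_{\rm grav}
  \;\longrightarrow\;
|E_n\rangle \otimes |g(E_n)\rangle

% semiclassical source (superposition over an energy band)
%   -> entangled asymptotic graviton state
\sum_n c_n\, |E_n\rangle \otimes |0\rangle_{\rm grav}
  \;\longrightarrow\;
\sum_n c_n\, |E_n\rangle \otimes |g(E_n)\rangle
```

Absent accidental degeneracies, the map E_n -> |g(E_n)> is one to one, which is the sense in which boundary graviton states carry information about the source.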

See also 

Monday, October 18, 2021

Embryo Screening and Risk Calculus

Over the weekend The Guardian and The Times (UK) both ran articles on embryo selection. 

I recommend the first article. Philip Ball is an accomplished science writer and former scientist. He touches on many of the most important aspects of the topic, not easy given the length restriction he was working with. 

However, I'd like to cover an aspect of embryo selection that is often missed, for example by the bioethicists quoted in Ball's article.

Several independent labs have published results on risk reduction from embryo selection, and all find that the technique is effective. But some people who are not following the field closely (or are not quantitative) still characterize the benefits -- incorrectly, in my view -- as modest. I suspect they simply have not grasped the actual numbers.

Some examples:
Carmi et al. find a ~50% risk reduction for schizophrenia from selecting the lowest risk embryo from a set of 5. For a selection among 2 embryos the risk reduction is ~30%. (We obtain a very similar result using empirical data: real adult siblings with known phenotype.) 
Visscher et al. find the following results, see Table 1 and Figure 2 in their paper. To their credit they compute results for a range of ancestries (European, E. Asian, African). We have performed similar calculations using siblings but have not yet published the results for all ancestries.  
Relative Risk Reduction (RRR)
Hypertension: 9-18% (ranges depend on specific ancestry) 
Type 2 Diabetes: 7-16% 
Coronary Artery Disease: 8-17% 
Absolute Risk Reduction (ARR)
Hypertension: 4-8.5% (ranges depend on specific ancestry) 
Type 2 Diabetes: 2.6-5.5% 
Coronary Artery Disease: 0.55-1.1%
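The numbers above can be connected by a toy liability-threshold simulation. The sketch below is my own illustrative model with assumed parameters (it ignores the correlation between sibling embryos' scores, so it overstates the gain relative to Carmi et al.); it shows how selecting the lowest-PRS embryo reduces both relative and absolute risk:

```python
from statistics import NormalDist
import random

rng = random.Random(0)

def disease_rate(n_embryos, r2=0.1, prevalence=0.01, families=100_000):
    """Toy liability-threshold model: liability = prs + residual, where the
    measured PRS explains fraction r2 of liability variance.  Implant the
    lowest-PRS embryo of n_embryos; disease occurs if liability > threshold
    set so that unselected prevalence equals `prevalence`."""
    thresh = NormalDist().inv_cdf(1 - prevalence)
    cases = 0
    for _ in range(families):
        prs = min(rng.gauss(0, r2 ** 0.5) for _ in range(n_embryos))
        liability = prs + rng.gauss(0, (1 - r2) ** 0.5)
        cases += liability > thresh
    return cases / families

base, selected = disease_rate(1), disease_rate(5)
print("RRR:", 1 - selected / base)   # relative risk reduction, 1-of-5 selection
print("ARR:", base - selected)       # absolute risk reduction
```

Note how a large relative risk reduction translates into a small absolute reduction when the baseline prevalence is low -- exactly the RRR/ARR gap visible in the Visscher et al. figures above.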
I don't view these risk reductions as modest. Given that an IVF family is already going to make a selection they clearly benefit from the additional information that comes with genotyping each embryo. The cost is a small fraction of the overall cost of an IVF cycle.

But here is the important mathematical point which many people miss: We buy risk insurance even when the expected return is negative, in order to ameliorate the worst possible outcomes. 

Consider the example of home insurance. A typical family will spend tens of thousands of dollars over the years on home insurance, which protects against risks like fire or earthquake. However, very few homeowners (e.g., ~1 percent) ever suffer a really large loss! At the end of their lives, looking back, most families might conclude that the insurance was "a waste of money"!

So why buy the insurance? To avoid ruin in the event you are unlucky and your house does burn down. It is tail risk insurance.

Now consider an "unlucky" IVF family. At, say, the 1 percent level of "bad luck" they might have some embryos which are true outliers (e.g., at 10 times normal risk, which could mean over 50% absolute risk) for a serious condition like schizophrenia or breast cancer. This is especially likely if they have a family history. 

What is the benefit to this specific subgroup of families? It is enormous -- using the embryo risk score they can avoid having a child with a very high likelihood of a serious health condition. This benefit is many, many times (> 100x!) larger than the cost of the genetic screening, and it is not captured by the average risk reductions given above.
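To make the insurance analogy concrete, here is a toy expected-value calculation. Every number is hypothetical, chosen only to illustrate the tail-risk logic:

```python
# Hypothetical numbers, for illustration only.
p_outlier = 0.01        # chance a family has a true high-risk outlier embryo
cost      = 1_000.0     # assumed screening cost per IVF cycle
benefit   = 500_000.0   # assumed value of avoiding a severe condition

# Averaged over all families, 99% "waste" the fee ...
print("expected net gain:", p_outlier * benefit - cost)
# ... but conditional on being in the unlucky 1%, the payoff dwarfs the cost.
print("benefit/cost for the unlucky subgroup:", benefit / cost)
```

As with home insurance, the average-return calculation misses the point: the value is concentrated in the tail, among families who cannot be identified in advance without the screen.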

The situation is very similar to that of aneuploidy testing (screening against Down syndrome), which is widespread, not just in IVF. The prevalence of trisomy 21 (extra copy of chromosome 21) is only ~1 percent, so almost all families doing aneuploidy screening are "wasting their money" if one uses faulty logic! Nevertheless, the families in the affected category are typically very happy to have paid for the test, and even families with no trisomy warning understand that it was worthwhile.

The point is that no one knows ahead of time whether their house will burn down, or that one or more of their embryos has an important genetic risk. The calculus of average return is misleading -- i.e., it says that home insurance is a "rip off" when in fact it serves an important social purpose of pooling risk and helping the unfortunate. 

The same can be said for embryo screening in IVF -- one should focus on the benefit to "unlucky" families to determine the value. We can't identify the "unlucky" in advance, unless we do genetic screening!

Saturday, October 16, 2021

Dune 2021

I have high hopes for this new version of Dune.


Below, a re-post with two Frank Herbert interviews. Highly recommended to fans of the novel. 

The interviewer is Willis E. McNelly, a professor of English specializing in science fiction. Herbert discusses artistic as well as conceptual decisions made in the writing and background world-building for Dune.

See also Dune and The Butlerian Jihad and Darwin Among the Machines.
The Bene Gesserit program had as its target the breeding of a person they labeled "Kwisatz Haderach," a term signifying "one who can be many places at once." In simpler terms, what they sought was a human with mental powers permitting him to understand and use higher order dimensions.

They were breeding for a super-Mentat, a human computer with some of the prescient abilities found in Guild navigators. Now, attend these facts carefully:

Muad'Dib, born Paul Atreides, was the son of the Duke Leto, a man whose bloodline had been watched carefully for more than a thousand years. The Prophet's mother, Lady Jessica, was a natural daughter of the Baron Vladimir Harkonnen and carried gene-markers whose supreme importance to the breeding program was known for almost two thousand years. She was a Bene Gesserit bred and trained, and should have been a willing tool of the project.

The Lady Jessica was ordered to produce an Atreides daughter. The plan was to inbreed this daughter with Feyd-Rautha Harkonnen, a nephew of the Baron Vladimir, with the high probability of a Kwisatz Haderach from that union. Instead, for reasons she confesses have never been completely clear to her, the concubine Lady Jessica defied her orders and bore a son. This alone should have alerted the Bene Gesserit to the possibility that a wild variable had entered their scheme. But there were other far more important indications that they virtually ignored ...
"Kwisatz Haderach" is similar to the Hebrew "Kefitzat Haderech", which literally means "contracting the path"; Herbert defines Kwisatz Haderach as "the Shortening of the Way" (Dune: Appendix IV).

Another good recording of Herbert, but much later in his life.

Saturday, October 09, 2021

Leo Szilard, the Intellectual Bumblebee (lecture by William Lanouette)


This is a nice lecture on Leo Szilard by his biographer William Lanouette. See also ‘An Intellectual Bumblebee’ by Max Perutz.
Wikipedia: Leo Szilard was a Hungarian-American physicist and inventor. He conceived the nuclear chain reaction in 1933, patented the idea of a nuclear fission reactor in 1934, and in late 1939 wrote the letter for Albert Einstein's signature that resulted in the Manhattan Project that built the atomic bomb.
How Alexander Sachs, acting on behalf of Szilard and Einstein, narrowly convinced FDR to initiate the atomic bomb project: Contingency, History, and the Atomic Bomb

Szilard wrote children's stories and science fiction. His short story My Trial as a War Criminal begins after the USSR has defeated the US using biological weapons.
I was just about to lock the door of my hotel room and go to bed when there was a knock on the door and there stood a Russian officer and a young Russian civilian. I had expected something of this sort ever since the President signed the terms of unconditional surrender and the Russians landed a token occupation force in New York. The officer handed me something that looked like a warrant and said that I was under arrest as a war criminal on the basis of my activities during the Second World War in connection with the atomic bomb. There was a car waiting outside and they told me that they were going to take me to the Brookhaven National Laboratory on Long Island. Apparently, they were rounding up all the scientists who had ever worked in the field of atomic energy ...
This story was translated into Russian and it had a large impact on Andrei Sakharov, who showed it to his colleague Victor Adamsky:
A number of us discussed it. It was about a war between the USSR and the USA, a very devastating one, which brought victory to the USSR. Szilard and a number of other physicists are put under arrest and then face the court as war criminals for having created weapons of mass destruction. Neither they nor their lawyers could make up a cogent proof of their innocence. We were amazed by this paradox. You can’t get away from the fact that we were developing weapons of mass destruction. We thought it was necessary. Such was our inner conviction. But still the moral aspect of it would not let Andrei Dmitrievich and some of us live in peace.

See also The Many Worlds of Leo Szilard (APS symposium). Slides for Richard Garwin's excellent summary of Szilard's work, including nuclear physics, refrigeration, and Maxwell's Demon. One of Garwin's anecdotes:
Ted Puck was a distinguished biologist, originally trained in physics. ‘With the greatest possible reluctance I have come to the conclusion that it is not possible for me personally to work with you scientifically,’ he wrote Szilard. ‘Your mind is so much more powerful than mine that I find it impossible when I am with you to resist the tremendous polarizing forces of your ideas and outlook.’ Puck feared his ‘own flow of ideas would slow up & productivity suffer if we were to become continuously associated working in the same place and the same general kind of field.’ Puck said, ‘There is no living scientist whose intellect I respect more. But your tremendous intellectual force is a strain on a limited person like myself.’
Puck was a pioneer in single cell cloning, aided in part by Szilard:
When Szilard saw in 1954 that biologists Philip Marcus and Theodore Puck were having trouble growing individual cells into colonies, he concluded that “since cells grow with high efficiency when they have many neighbors, you should not let a single cell know it’s alone”. This was no flippant excursion into psychobiology. Rather, Szilard’s idea to use a layered feeder dish worked, while the open dish had not (Lanouette, 1992: 396–397).
After the war Szilard worked in molecular biology. This photo of Jacques Monod and Szilard is in the seminar room at Cold Spring Harbor Lab. Monod credits Szilard for the negative-feedback idea behind his 1965 Nobel prize.
“I have … recorded” in my Nobel lecture, said Monod, “how it was Szilard who decisively reconciled me with the idea (repulsive to me, until then) that enzyme induction reflected an anti-repressive effect, rather than the reverse, as I tried, unduly, to stick to.”


Friday, October 01, 2021

DNA forensics, genetic genealogy, and large databases (Veritasium video)


This is a good overview of DNA forensics, genetic genealogy, and existing databases like GEDmatch (Verogen).
@15:35 "Multiple law enforcement agencies have said that this is the most revolutionary tool they've had since the adoption of the fingerprint."
See Othram: the future of DNA forensics (2019):
The existing FBI standard (CODIS) for DNA identification uses only 20 markers (STRs -- previously only 13 loci were used!). By contrast, genome wide sequencing can reliably call millions of genetic variants. 
For the first time, the cost curves for these two methods have crossed: modern sequencing costs no more than extracting CODIS markers using the now ~30 year old technology. 
What can you do with millions of genetic markers? 
1. Determine relatedness of two individuals with high precision. This allows detectives to immediately identify a relative (ranging from distant cousin to sibling or parent) of the source of the DNA sample, simply by scanning through large DNA databases. 
More Othram posts.
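The relatedness point (item 1 in the excerpt above) can be illustrated with a toy simulation. This is my own sketch -- independent SNPs at allele frequency 0.5, which is not how real genomes are structured -- but it shows why dense markers pin down relatedness so precisely:

```python
import random

rng = random.Random(0)
N = 20_000  # toy: independent biallelic SNPs, allele frequency 0.5

def haplopair():
    """A parent: two alleles (0/1) at each SNP."""
    return [(rng.random() < 0.5, rng.random() < 0.5) for _ in range(N)]

def child(mom, dad):
    """Genotype (0/1/2): one allele drawn from each parent at each SNP."""
    return [rng.choice(m) + rng.choice(d) for m, d in zip(mom, dad)]

mom, dad = haplopair(), haplopair()
sib1, sib2 = child(mom, dad), child(mom, dad)
stranger = child(haplopair(), haplopair())

def corr(g, h):
    """Pearson correlation between two genotype vectors."""
    n = len(g)
    mg, mh = sum(g) / n, sum(h) / n
    cov = sum((a - mg) * (b - mh) for a, b in zip(g, h))
    vg = sum((a - mg) ** 2 for a in g)
    vh = sum((b - mh) ** 2 for b in h)
    return cov / (vg * vh) ** 0.5

print(round(corr(sib1, sib2), 2))      # ~0.5 for full siblings
print(round(corr(sib1, stranger), 2))  # ~0.0 for unrelated individuals
```

With millions of real markers instead of 20 CODIS STRs, the standard error on such estimates becomes tiny, which is what lets investigators place an unknown sample precisely on a family tree, out to distant cousins.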

Sunday, September 26, 2021

Picking Embryos With Best Health Odds Sparks New DNA Debate (Bloomberg Technology)

Bloomberg Technology covers polygenic embryo screening. Note, baby Aurea is well over a year old now. 

I am informed by Genomic Prediction's CEO that the company does genetic testing for ~200 IVF clinics on 6 continents. The overall scale of activity is increasing rapidly and also covers more traditional testing such as PGT-A (testing for aneuploidy or chromosomal normality) and testing for monogenic conditions, PGT-M. Here, PGT = Preimplantation Genetic Testing (standard terminology in IVF). 

I believe that polygenic screening, or PGT-P, will become very common in the near future. It is natural for parents to want as much information as possible to select the embryo that will become their child, and all of these types of testing can be performed simultaneously by GP using the same standard cell biopsy. Currently ~60% of all IVF embryos produced in the US (millions per year, worldwide) undergo some kind of genetic testing.
Picking Embryos With Best Health Odds Sparks New DNA Debate
By Carey Goldberg
Rafal Smigrodzki won’t make a big deal of it, but someday, when his toddler daughter Aurea is old enough to understand, he plans to explain that she likely made medical history at the moment of her birth.
Aurea appears to be the first child born after a new type of DNA testing that gave her a “polygenic risk score.” It’s based on multiple common gene variations that could each have tiny effects; together, they create higher or lower odds for many common diseases.
Her parents underwent fertility treatment in 2019 and had to choose which of four IVF embryos to implant. They turned to a young company called Genomic Prediction and picked the embryo given the best genetic odds of avoiding heart disease, diabetes and cancer in adulthood.
Smigrodzki, a North Carolina neurologist with a doctorate in human genetics, argues that parents have a duty to give a child the healthiest possible start in life, and most do their best. “Part of that duty is to make sure to prevent disease -- that’s why we give vaccinations,” he said. “And the polygenic testing is no different. It’s just another way of preventing disease.”
The choice was simple for him, but recent dramatic advances in the science of polygenic risk scoring raise issues so complex that The New England Journal of Medicine in July published a special report on the problems with using it for embryo selection.
‘Urgent’ Debate
The paper points to a handful of companies in the U.S. and Europe that already are offering embryo risk scores for conditions including schizophrenia, breast cancer and diabetes. It calls for an “urgent society-wide conversation.”
“We need to talk about what sort of regulation we want to have in this space,” said co-author Daniel Benjamin, an economist specializing in genetics -- or “genoeconomist” -- at UCLA.
Unlike the distant prospect of CRISPR-edited designer babies,“this is happening, and it is now,” he said. Many claims by companies that offer DNA-based eating or fitness advice are “basically bunk,” he added, “but this is real. The benefits are real, and the risks are real.”
Among the problems the journal article highlights: Most genetic data is heavily Eurocentric at this point, so parents with other ancestry can’t benefit nearly as much. The science is so new that huge unknowns remain. And selection could exacerbate health disparities among races and classes.
The article also raises concerns that companies marketing embryo selection over-promise, using enticements of “healthy babies” when the scores are only probabilities, not guarantees -- and when most differences among embryos are likely to be very small.
The issues are so complicated and new that the New England Journal article’s 13 authors held differing views on how polygenic embryo scoring should be regulated, said co-first author Patrick Turley, a University of Southern California economist. But all agreed that “potential consumers need to understand what they’re signing up for,” he said. 
I have thought this outcome inevitable since laboratory methods became advanced enough to obtain an accurate and inexpensive human genotype from a sample equivalent to the DNA in a few cells (2012 blog post). The information obtained can now be used to predict characteristics of the individual, with applications in assisted reproduction, health science, and even criminal forensics (Othram, Inc.).
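At its core, a polygenic risk score is just a weighted sum: for each measured variant, the number of risk alleles (0, 1, or 2) is multiplied by an effect size estimated from genome-wide association studies, and the products are added up. The sketch below illustrates that arithmetic; the variant IDs, effect sizes, and genotype are made up for illustration, and real scores aggregate thousands to millions of variants.

```python
# Minimal sketch of polygenic risk score (PRS) computation.
# Variant IDs, effect sizes, and genotypes below are hypothetical.

def polygenic_score(effect_sizes, genotype):
    """Sum over variants of (effect size) x (risk-allele count 0/1/2)."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

# Hypothetical per-allele effect sizes (e.g., log-odds per risk allele).
effects = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# Hypothetical genotype: risk-allele counts for one individual.
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_score(effects, person)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.08 = 0.19
```

In practice the raw sum is then standardized against a reference population, so an individual's score is reported as a percentile or a relative risk rather than an absolute probability.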


Polygenic Embryo Screening: comments on Carmi et al. and Visscher et al. (discussion of the NEJM paper described in the Bloomberg article). 

Embryo Screening for Polygenic Disease Risk: Recent Advances and Ethical Considerations (Genes 2021 Special Issue)

Carey Goldberg is Boston bureau chief for Bloomberg. She appears in this recent WBUR On Point episode with Kathryn Paige Harden:


Compare to this 2013 "Genius Babies" episode of On Point in which I appeared.

Saturday, September 18, 2021

War Nerd on US-China-Taiwan

Highly recommended. Read this article, which will enable you to ignore 99% of mass media and 90% of "expert" commentary on this topic.
... The US/NATO command may be woofing just to get more ships and planes funded, but woofing can go badly wrong. The people you’re woofing at may think you really mean it. That’s what came very close to happening in the 1983 Able Archer NATO exercises. The woofing by Reagan and Thatcher in the leadup to those exercises was so convincing to the Soviet woof-ees that even the moribund USSR came close to responding in real—like nuclear—ways.
That’s how contingency plans, domestic political theatrics, and funding scams can feed into each other and lead to real wars.
Military forces develop contingency plans. That’s part of their job. Some of the plans to fight China are crazy, but some are just plausible enough to be worrying, because somebody might start thinking they could work. 
... What you do with a place like Xinjiang, if you’re a CIA/DoD planner, is file it under “promote insurgency” — meaning “start as many small fires as possible,” rather than “invade and begin a conventional war.”
And in the meantime, you keep working on the real complaints of the Uyghur and other non-Han ethnic groups, so that if you do need to start a conventional war in the Formosa Straits, you can use the Uyghur as a diversion, a sacrifice, by getting them to rise up and be massacred. Since there’s a big Han-Chinese population in Xinjiang, as the map shows, you can hope to stir up the sort of massacre/counter-massacre whipsaw that leaves evil memories for centuries, leading to a permanent weakening of the Chinese state.
This is a nasty strategy, but it’s a standard imperial practice, low-cost — for the empire, not the local population, of course. It costs those people everything, but empires are not sentimental about such things. 
... The Uyghur in Xinjiang would serve the same purpose as the Iraqi Kurds: “straw dogs destined for sacrifice.” If you want to get really cynical, consider that the reprisals they’d face from an enraged Chinese military would be even more useful to the US/NATO side than their doomed insurgency itself.
Atrocity propaganda is very important in 21st c warfare. At the moment, there’s no evidence of real, mass slaughter in Xinjiang, yet we’re already getting propaganda claims about it. Imagine what US/NATO could make out of the bloody aftermath of a doomed insurgency. Well, assuming that US/NATO survived a war with China, a pretty dicey assumption. More likely, CNN, BBC, and NYT would be the first to welcome our new overlords, Kent Brockman style. Those mainstream-media whores aren’t too bright but Lord, they’re agile. 
... Xinjiang, by contrast, can easily be imagined as One Giant Concentration Camp. After all, our leading “expert” on the province has never been there, and neither have his readers.
... The era of naval war based on carrier groups is over. They know that, even if they won’t say it.
If there’s a real war with China, the carriers will wait it out in San Diego harbor. I don’t say Honolulu, because even that wouldn’t be safe enough.
I’m not denigrating the courage or dedication of the crews and officers of USN vessels. At any level below JCOS, most of them are believers. But their belief is increasingly besieged and difficult to sustain, like an Episcopalian at Easter. You just can’t think too long about how cheap and effective antiship missiles are and still be a believer in aircraft carriers. As platforms of gunboat diplomacy against weak powers, they’re OK. 
... The thing is, and it’s weird you even have to say this: China is a big strong country coming out of an era of deep national humiliation and suffering, proud of its new prosperity. China’s success in lifting a desperately poor population into something like prosperity will likely be the biggest story from this era, when the canonical histories get distilled.
A nation hitting this stage is likely to include a lot of people, especially young men, who are itching to show what their country can do. Their patriotic eagerness is no doubt as gullible as most, but it’s real, and if you pay any attention in the online world, you can’t help seeing it.
People who mouth off about China never seem to imagine that anyone in China might hear, because as we are told over and over again, China-is-an-authoritarian-state. The implication is that nobody in China has any of the nationalistic fervor that we take for granted in our own Anglo states.
... Given the history of US/China relations, from the pogroms against Chinese immigrants to the Chinese Exclusion Act of 1882, through the demonization of Chinese mainlanders in the Cold War (which I remember distinctly from elementary school scare movies), the endless attempts to start insurgencies in Tibet, Xinjiang, and Fujian, to the nonstop violence and abuse of Asians in America, you don’t need to find reasons for Chinese people to want a war.
The odd thing is that most of them don’t seem to. That’s a remarkable testimony to the discipline and good sense of the Chinese public…so far. And it’s also, if you’re thinking clearly, a good reason not to keep provoking China in such gross, pointless ways. A population with that level of discipline and unity, matched with zooming prosperity, technical expertise, and pride on emerging from a long nightmare, is not one to woof at.
Of course the plan in the Pentagon is not real war. The plan is to slow China down, trip it up, “wrong-foot it” as they say in the Commonwealth. 
... So what will China do about Taiwan? China could take it right now, if it wanted to pay the price. Everyone knows that, though many fake-news sites have responded with childish, ridiculous gung-ho stories about how “Taiwan Could Win.” 
But will China invade? No. Not right now anyway. It doesn’t need to. The Chinese elite has its own constituencies, like all other polities (including “totalitarian” ones), and has to answer to them as circumstances change. 
So far China has been extraordinarily patient, a lot more patient than we’d be if China was promising to fight to the death for, say, Long Island. But that can change. Because, as I never tire of repeating, the enemy of the moment has constituencies too. And has to answer to them. 
So what happens if the US succeeds in hamstringing China’s economy? Welp, what’s the most reliable distraction a gov’t can find when it wants to unite a hard-pressed population against some distant enemy? 
That’s when China might actually do something about Taiwan. ...
See also Strategic Calculus of a Taiwan Invasion.

Note Added: Some readers may be alarmed that the War Nerd does not seem to accept the (Western) mass media propaganda about Xinjiang. Those readers may have poor memories, or may be too young to remember, e.g., fake Iraq WMD or "babies taken out of incubators" or the countless other manufactured human rights abuses we read about in reliable journals like the New York Times or Washington Post.

Take these recent examples of US journalism on Afghanistan: 

The fake drone strike that killed 10 innocent family members was one of our last acts as we abandoned Afghanistan. (Fake because we probably did it just to show we could "strike back" at the bad guys.) Non-Western media reported it as a catastrophic failure almost immediately, but very few people in the US knew about it until the Pentagon issued an apology in a late Friday afternoon briefing just recently.

The drone strike was in retaliation for the suicide bombing at Kabul airport, in which (as reported by the Afghan government) ~200 people died. But evidence suggests that only a small fraction of these people were killed by the bomb -- most of the 200 may have been shot by US and "coalition" (Turkish?) soldiers who might have panicked after the bombing. This has been covered widely outside the US, but not here.

If you want to understand the incredibly thin and suspicious sourcing of the "Uighur genocide" story, see here or just search for Adrian Zenz. 

Just a few years ago there were plenty of Western travelers passing through Xinjiang, even by bicycle, vlogging and posting their videos on YouTube. I followed these YouTubers at the time because of my own travel interest in western and south-western China, not for any political reason.

If you watch just a few of these you'll get an entirely different impression of the situation on the ground than you would get from Western media. For more, see this comment thread:
I want to be clear that because PRC is an authoritarian state, their reaction to the Islamic terror attacks in Xinjiang circa 2015 was probably heavy-handed, and I am sure some of the sad stories told about people being arrested, held without trial, etc. are true. But I am also sure that if you visit Xinjiang and ask (non-Han) taxi drivers, restaurant owners, etc. about the level of tension you will get a very different impression than what is conveyed by Western media. 
No nation competing in geopolitics is without sin. One aspect of that sin (both in US and PRC): use of mass media propaganda to influence domestic public opinion. 
If you want to be "reality based" you need to look at the strongest evidence from both sides. 
Note to the credulous: The CIA venture fund In-Q-Tel was an investor in my first startup, which worked in crypto technology. We worked with the CIA, VOA, and NED ("National Endowment for Democracy" HA HA HA) on defeating the PRC firewall in the early internet era. I know a fair bit about how this all works -- NGO cutouts, fake journalists, policy grifters in DC, etc. etc. Civilians have no idea. 
At the time I felt (and still sort of feel) that keeping the internet free and open is a noble cause. But do I know FOR SURE that state security works DIRECTLY with media and NGOs to distort the truth (i.e., lies to the American people, Iraq WMD yada yada)? Yes, I know for sure, and it's easy to detect the pattern just by doing a tiny bit of research on people like Cockerell or Zenz. 
Keep in mind I'm not a "dove" -- MIC / intel services / deep state *has to* protect against worst case outcomes and assume the worst about other states. 
They have to do nasty stuff. I'm not making moral judgements here. But a *consequence* of this is that you have to be really careful about information sources in order to stay reality based...
