Friday, December 31, 2021

Happy New Year 2022!

Best wishes to everyone :-)

I posted this video some years ago, but it was eventually taken down by YouTube. I came across it today and thought I would share it again. 

The documentary includes interviews with Rabi, Ulam, Bethe, Frank Oppenheimer, Robert Wilson, and Dyson.


 


Some other recommendations below. I recently re-listened to these podcasts and quite enjoyed them. The interview with Bobby covers brain mapping, neuroscience, careers in science, biology vs physics. With Ted we go deep into free will, parallel universes, science fiction, and genetic engineering. Bruno shares his insights on geopolitics -- the emerging multipolar world of competition and cooperation between the US, Russia, Europe, and China.









A hopeful note for 2022 and the pandemic:

I followed COVID closely at the beginning (early 2020; search on blog if interested). I called the pandemic well before most people, and even provided some useful advice to a few big portfolio managers as well as to Dom and his team in the UK government. But once I realized that 

the average level among political leaders and "public intellectuals" is too low for serious cost-benefit analysis,

I got bored of COVID and stopped thinking about it.

However with Omicron (thanks to a ping from Dom) I started to follow events again. Preliminary data suggest we may be following the evolutionary path of increased transmissibility but reduced lethality. 

The data from the UK and South Africa already seem to strongly support this conclusion, although both populations have high vaccination levels, resistance from the spread of earlier variants, or both. Whether Omicron is "intrinsically" less lethal (i.e., to a population such as the unvaccinated ~40% of the PRC population that has never been exposed to COVID) remains to be seen, but we should know within a month or so.

If, e.g., Omicron results in hospitalization / death at ~1/3 the rate of earlier variants, then we will already be in the flu-like range of severity (whereas original COVID was at most like a ~10x more severe flu). In this scenario rational leaders should just go for herd immunity (perhaps with some cocooning of vulnerable sub-populations) and get it over with.

I'll be watching some of the more functional countries like S. Korea, PRC, etc. to see when/if they relax their strict lockdown and quarantine policies. Perhaps there are some smaller EU countries to keep an eye on as well.

Friday, December 24, 2021

Peace on Earth, Good Will to Men 2021



When asked what I want for Christmas, I reply: Peace On Earth, Good Will To Men :-)

No one ever seems to recognize that this comes from the Bible (Luke 2:14).

Linus said it best in A Charlie Brown Christmas:
And there were in the same country shepherds abiding in the field, keeping watch over their flock by night.

And, lo, the angel of the Lord came upon them, and the glory of the Lord shone round about them: and they were sore afraid.

And the angel said unto them, Fear not: for, behold, I bring you good tidings of great joy, which shall be to all people.

For unto you is born this day in the city of David a Saviour, which is Christ the Lord.

And this shall be a sign unto you; Ye shall find the babe wrapped in swaddling clothes, lying in a manger.

And suddenly there was with the angel a multitude of the heavenly host praising God, and saying,

Glory to God in the highest, and on earth peace, good will toward men.

Merry Christmas!

Please accept my best wishes and hopes for a wonderful 2022. Be of good cheer, for we shall prevail! :-) 


The first baby conceived from an embryo screened with Genomic Prediction's preimplantation genetic testing for polygenic risk scores (PGT-P) was born in mid-2020. 

First Baby Born from a Polygenically Screened Embryo (video panel)




Embryo Screening for Polygenic Disease Risk: Recent Advances and Ethical Considerations (Genes 2021 Special Issue)
It is a great honor to co-author a paper with Simon Fishel, the last surviving member of the team that produced the first IVF baby (Louise Brown) in 1978. His mentors and collaborators were Robert Edwards (Nobel Prize 2010) and Patrick Steptoe (who died before the 2010 prize was awarded). ... 
Today millions of babies are produced through IVF. In most developed countries roughly 3-5 percent of all births are through IVF, and in Denmark the fraction is about 10 percent! But when the technology was first introduced with the birth of Louise Brown in 1978, the pioneering scientists had to overcome significant resistance. 
There may be an alternate universe in which IVF was not allowed to develop, and those millions of children were never born. 
Wikipedia: ...During these controversial early years of IVF, Fishel and his colleagues received extensive opposition from critics both outside of and within the medical and scientific communities, including a civil writ for murder.[16] Fishel has since stated that "the whole establishment was outraged" by their early work and that people thought that he was "potentially a mad scientist".[17] 
I predict that within 5 years the use of polygenic risk scores will become common in health systems (i.e., for adults) and in IVF. Reasonable people will wonder why the technology was ever controversial at all, just as in the case of IVF.

Genomic Prediction has now performed embryo genetic tests for  ~200 IVF clinics on six continents. Millions of embryos are screened each year, worldwide.


Six years ago on Christmas day I shared the Nativity 2050 story below. 


And the angel said unto them, Fear not: for, behold, I bring you good tidings of great joy, which shall be to all people.
Mary was born in the twenties, when the tests were new and still primitive. Her mother had frozen a dozen eggs, from which came Mary and her sister Elizabeth. Mary had her father's long frame, brown eyes, and friendly demeanor. She was clever, but Elizabeth was the really brainy one. Both were healthy and strong and free from inherited disease. All this her parents knew from the tests -- performed on DNA taken from a few cells of each embryo. The reports came via email, from GP Inc., by way of the fertility doctor. Dad used to joke that Mary and Elizabeth were the pick of the litter, but never mentioned what happened to the other fertilized eggs.

Now Mary and Joe were ready for their first child. The choices were dizzying. Fortunately, Elizabeth had been through the same process just the year before, and referred them to her genetic engineer, a friend from Harvard. Joe was a bit reluctant about bleeding edge edits, but Mary had a feeling the GP engineer was right -- their son had the potential to be truly special, with just the right tweaks ...

Friday, December 17, 2021

Macroscopic Superposition States: entanglement of a macroscopic living organism (tardigrade) with a superconducting qubit


I have waited for this development since 2009 (see old post below). 
 
The fact that a macroscopic, living organism can be placed in a superposition state may come as a shock to many people, including a number of physicists. 

If a tardigrade can exist in a superposition state, why can't you? 

Are you in a superposition state right now? 

Is there some special class of objects that "collapse wavefunctions"? (Copenhagen) ... It's ridiculous, absurd. In any case we now know that tardigrades are not in that class.

Entanglement between superconducting qubits and a tardigrade 
https://arxiv.org/pdf/2112.07978.pdf 
K. S. Lee et al. 
Quantum and biological systems are seldom discussed together as they seemingly demand opposing conditions. Life is complex, "hot and wet" whereas quantum objects are small, cold and well controlled. Here, we overcome this barrier with a tardigrade -- a microscopic multicellular organism known to tolerate extreme physiochemical conditions via a latent state of life known as cryptobiosis. We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub 10 mK temperatures and pressure of 6×10−6 mbar, setting a new record for the conditions that a complex form of life can survive.

From the paper: 

In our experiments, we use specimens of a Danish population of Ramazzottius varieornatus Bertolani and Kinchin, 1993 (Eutardigrada, Ramazzottiidae). The species belongs to phylum Tardigrada comprising of microscopic invertebrate animals with an adult length of 50-1200 µm [12]. Importantly, many tardigrades show extraordinary survival capabilities [13] and selected species have previously been exposed to extremely low temperatures of 50 mK [14] and low Earth orbit pressures of 10−19 mbar [15]. Their survival in these extreme conditions is possible thanks to a latent state of life known as cryptobiosis [2, 13]. Cryptobiosis can be induced by various extreme physicochemical conditions, including freezing and desiccation. Specifically, during desiccation, tardigrades reduce volume and contract into an ametabolic state, known as a “tun”. Revival is achieved by reintroducing the tardigrade into liquid water at atmospheric pressure. In the current experiments, we used dessicated R. varieornatus tuns with a length of 100-150 µm. Active adult specimens have a length of 200-450 µm. The revival process typically takes several minutes. 
We place a tardigrade tun on a superconducting transmon qubit and observe coupling between the qubit and the tardigrade tun via a shift in the resonance frequency of the new qubit-tardigrade system. This joint qubit-tardigrade system is then entangled with a second superconducting qubit. We reconstruct the density matrix of this coupled system experimentally via quantum state tomography. Finally, the tardigrade is removed from the superconducting qubit and reintroduced to atmospheric pressure and room temperature. We observe the resumption of its active metabolic state in water.
****

Note Added: I wrote this post the day after getting a Covid booster and a shingles vaccine, so I was a little zonked out and was not able to look at the details at the time. 

The authors claim that the B qubit states B0 and B1 are entangled with two different internal states of the tardigrade T, i.e., the joint state is a superposition of B0 T0 and B1 T1. They then further entangle B with the other qubit A to make more complex states. 

In the supplement they analyze the density matrix for this d=8 Hilbert space, and claim to have measured quantities which imply tripartite entanglement. The results seem to depend on theoretical modeling -- I don't think they made any direct measurements on T. 

They do not present any uncertainty analysis of the tripartite entanglement measure π.

The line in the main body of the paper that sounds convincing is "We reconstruct the density matrix of this coupled system experimentally via quantum state tomography" (see Fig 3), but the devil is in the details:
... a microscopic model where the charges inside the tardigrade are represented as effective harmonic oscillators that couple to the electric field of the qubit via the dipole mechanism... [This theoretical analysis results in the B0 T0 , B1 T1 system where T0 T1 are effective qubits formed of tardigrade internal degrees of freedom.] 
... We applied 16 different combinations of one-qubit gates on qubit A and dressed states of the joint qubit B-tardigrade system. We then jointly readout the state of both qubits using the cavity ...
      
Some commentary online is very skeptical of their claims; see here for example.

More (12/23/2021): One of the co-authors is Vlatko Vedral, a well-known theorist who works in this area. His recent blog post Entangled Tardigrades is worth a look. 

After thinking a bit more, the B0 T0 , B1 T1 description of the system seems plausible to me. So, although they don't make direct measurements on T (only on the combined B-T system), it does seem reasonable to assert that the tardigrade (or at least some collective degree of freedom related to its internal charges) has been placed into a superposition state. 
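To make that statement concrete, here is a toy numpy sketch (my own illustration, not the authors' reconstruction): treat A, B, and the tardigrade's effective internal degree of freedom T each as a single qubit, build the GHZ-like state suggested by the B0 T0 / B1 T1 description, and check that the reduced state of T is mixed, which is the operational sense in which T is entangled with (in a superposition relative to) the rest.

```python
# Toy model only: A, B, T each treated as a qubit; the joint state is
# (|0>_A |B0> |T0> + |1>_A |B1> |T1>) / sqrt(2), tensor order (A, B, T).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

psi = (kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)

# Density matrix with indices (a, b, t, a', b', t')
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)

# Reduced state of T: trace over A and B
rho_T = np.einsum('abtabs->ts', rho)

evals = np.linalg.eigvalsh(rho_T)
S = -sum(p * np.log2(p) for p in evals if p > 1e-12)
print("reduced density matrix of T:\n", rho_T.round(3))
print("entanglement entropy of T:", round(S, 3), "bits")  # 1.0 bit: T is maximally entangled with A+B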

****


See this 2009 post: Schrodinger's virus
If the creature above (a tardigrade arthropod) can be placed in a superposition state, will you accept that you probably can be as well? And once you admit this, will you accept that you probably actually DO exist in a superposition state already?
It may be disturbing to learn that we live in a huge quantum multiverse, but was it not also disturbing for Galileo's contemporaries to learn that we live on a giant rotating sphere, hurtling through space at 30 kilometers per second? E pur si muove!
Related posts:  Gork revisited 2018  ,  Feynman and Everett




Of course there is no wavefunction collapse, only unitary evolution. 

Many people are confused about this -- they have not recovered from what they were taught as beginning students. They still believe in the Tooth Fairy ;-) 

You are Gork! 


Gork is a robot name made up by Sidney Coleman for his talk Quantum Mechanics, In Your Face! (video, Gork @40m or so). Before the word entanglement became fashionable, Sidney summarized this talk to me in his office as "Quantum Mechanics is just a theory of correlations, and we live in this tangle of correlations." He may not have said "tangle" -- I am not sure. But he was describing the Everett formulation, trying not to scare a young postdoc :-)


Macroscopic Superpositions in Isolated Systems 
R. Buniy and S. Hsu 
arXiv:2011.11661, to appear in Foundations of Physics 
For any choice of initial state and weak assumptions about the Hamiltonian, large isolated quantum systems undergoing Schrodinger evolution spend most of their time in macroscopic superposition states. The result follows from von Neumann's 1929 Quantum Ergodic Theorem. As a specific example, we consider a box containing a solid ball and some gas molecules. Regardless of the initial state, the system will evolve into a quantum superposition of states with the ball in macroscopically different positions. Thus, despite their seeming fragility, macroscopic superposition states are ubiquitous consequences of quantum evolution. We discuss the connection to many worlds quantum mechanics.
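Here is a toy numerical illustration of the flavor of this claim (my own simplified sketch, not the calculation in the paper): a "generic" random Hermitian Hamiltonian on a small Hilbert space split into two macro-sectors (think: ball on the left vs. ball on the right). An initial state confined to one sector quickly evolves into a superposition with comparable weight in both sectors. The dimension, seed, and sector split are arbitrary choices.

```python
# Toy demo: Schrodinger evolution under a generic Hamiltonian spreads an
# initially "macroscopically definite" state across macro-sectors.
import numpy as np

rng = np.random.default_rng(0)
d = 40                      # total Hilbert-space dimension (toy)
d_sector = d // 2           # first half of the basis = "left", rest = "right"

A = rng.normal(size=(d, d))
H = (A + A.T) / 2           # random symmetric (GOE-like) Hamiltonian

E, V = np.linalg.eigh(H)
psi0 = np.zeros(d); psi0[0] = 1.0   # start entirely in the "left" sector

for t in [0.0, 0.5, 2.0, 10.0]:
    psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    p_left = np.sum(np.abs(psi_t[:d_sector]) ** 2)
    print(f"t = {t:5.1f}   weight in left sector = {p_left:.3f}")
```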

2021 witnessed other demonstrations of macroscopic entanglement: Quantum entanglement of two macroscopic objects is the Physics World 2021 Breakthrough of the Year.
... Quantum technology has made great strides over the past two decades and physicists are now able to construct and manipulate systems that were once in the realm of thought experiments. One particularly fascinating avenue of inquiry is the fuzzy border between quantum and classical physics. In the past, a clear delineation could be made in terms of size: tiny objects such as photons and electrons inhabit the quantum world whereas large objects such as billiard balls obey classical physics.
Over the past decade, physicists have been pushing the limits of what is quantum using drum-like mechanical resonators measuring around 10 microns across. Unlike electrons or photons, these drumheads are macroscopic objects that are manufactured using standard micromachining techniques and appear as solid as billiard balls in electron microscope images (see figure). Yet despite the resonators’ tangible nature, researchers have been able to observe their quantum properties, for example, by putting a device into its quantum ground state as Teufel and colleagues did in 2017.
This year, teams led by Teufel and Kotler and independently by Sillanpää went a step further, becoming the first to quantum-mechanically entangle two such drumheads. The two groups generated their entanglement in different ways. While the Aalto/Canberra team used a specially chosen resonant frequency to eliminate noise in the system that could have disturbed the entangled state, the NIST group’s entanglement resembled a two-qubit gate in which the form of the entangled state depends on the initial states of the drumheads. ...

Monday, December 13, 2021

Quantum Hair and Black Hole Information

Our follow up paper on quantum hair is now on arXiv:
Quantum Hair and Black Hole Information 
https://arxiv.org/abs/2112.05171 
Xavier Calmet, Stephen D.H. Hsu 
It has been shown that the quantum state of the graviton field outside a black hole horizon carries information about the internal state of the hole. We explain how this allows unitary evaporation: the final radiation state is a complex superposition which depends linearly on the initial black hole state. Under time reversal, the radiation state evolves back to the original black hole quantum state. Formulations of the information paradox on a fixed semiclassical geometry describe only a small subset of the evaporation Hilbert space, and do not exclude overall unitarity.
This is the sequel to our earlier paper Quantum Hair from Gravity in which we first showed that the quantum state of the graviton field outside the black hole is determined by the quantum state of the interior.
Our results have important consequences for black hole information: they allow us to examine deviations from the semiclassical approximation used to calculate Hawking radiation and they show explicitly that the quantum spacetime of black hole evaporation is a complex superposition state.
The new paper describes Hawking evaporation of a black hole taking into account the quantum state of the exterior geometry.




After the first quantum hair paper appeared, I wrote a long post (November 14 2021) describing Hawking's black hole information paradox, which I excerpt from below. 



In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
https://arxiv.org/abs/hep-th/9209058 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form  ψ[g,M]  where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in location of the BH is of order the initial mass squared (in Planck units), hence much larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.
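Spelled out, the standard recoil heuristic behind the M² estimate is the following (Planck units; a rough sketch, not a quotation from any of the papers). The hole emits roughly N quanta of typical momentum ~ T_H in random directions, so the recoil momentum random-walks, and the resulting velocity acting over the evaporation time spreads the hole's position:

$$
T_H \sim \frac{1}{M}, \qquad
N \sim \frac{M}{T_H} \sim M^2, \qquad
\Delta p \sim \sqrt{N}\, T_H \sim 1,
$$
$$
\Delta v \sim \frac{\Delta p}{M} \sim \frac{1}{M}, \qquad
\Delta x \sim \Delta v \; t_{\rm evap} \sim \frac{1}{M} \cdot M^3 \sim M^2 \;\gg\; R_s \sim M .
$$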

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states in a large Hilbert space Y, the density matrix 

 ρ(X) = Tr_{X̄} |ψ⟩⟨ψ|    (X̄ denotes the complement of X in Y) 

describing (small) region X will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.
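The concentration of measure statement above is easy to check numerically. Here is a small sketch (my own illustration; the dimensions are arbitrary): draw a Haar-random pure state on Y = X ⊗ X̄ with dim(X̄) much larger than dim(X), trace out X̄, and verify that ρ(X) is nearly maximally mixed ("thermal" in the sense used above).

```python
# Haar-random pure state on a bipartite space -> reduced state of the small
# factor X is exponentially close to maximally mixed (illustrative dimensions).
import numpy as np

rng = np.random.default_rng(1)
dX, dXbar = 4, 4096          # small subsystem X, large complement Xbar

# Random complex Gaussian amplitudes, normalized: Haar-random pure state
psi = rng.normal(size=(dX, dXbar)) + 1j * rng.normal(size=(dX, dXbar))
psi /= np.linalg.norm(psi)

# rho(X) = Tr_{Xbar} |psi><psi|
rho_X = psi @ psi.conj().T

evals = np.linalg.eigvalsh(rho_X)
S = -np.sum(evals * np.log2(np.clip(evals, 1e-16, None)))
dist = 0.5 * np.sum(np.abs(evals - 1.0 / dX))   # trace distance to the maximally mixed state

print("eigenvalues of rho(X):", np.round(evals, 4))
print(f"entropy = {S:.4f} bits (max possible = {np.log2(dX):.4f})")
print(f"trace distance to maximally mixed = {dist:.4f}")
```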


These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 


2. BH evaporation leads to macroscopic superposition states; why this invalidates No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.


3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention it was understood that the approximation of fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically, the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces which represent distinct geometries must be considered. In the "quantum hair" results it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 


This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
https://arxiv.org/abs/2012.05770 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 


##########################


Below is a very nice talk by Raju given at the IAS workshop on Quantum Information and Spacetime just a week ago. Raju emphasizes that the external and internal BH states do not factorize; factorization is a key assumption in formulations of the information paradox. Quantum hair prevents factorization: it entangles the interior and exterior of the BH. 


Friday, December 10, 2021

Elizabeth Carr: First US IVF baby and Genomic Prediction patient advocate (The Sunday Times podcast)


I don't have an embed link so click here to listen to the podcast.
Genomic Prediction’s Elizabeth Carr: “Scoring embryos”  
The Sunday Times’ tech correspondent Danny Fortson brings on Elizabeth Carr, America’s first baby conceived by in-vitro fertilization and patient advocate at Genomic Prediction, to talk about the new era of pre-natal screening (5:45), the dawn of in-vitro fertilization (8:40), the technology’s acceptance (12:10), what Genomic Prediction does (13:40), scoring embryos (16:30), the slippery slope (19:20), selecting for smarts (24:15), the cost (25:00), and the future of conception (28:30). PLUS Dan Benjamin, bio economist at UCLA, comes on to talk about why he and others raised the alarm about polygenic scoring (30:20), drawing the line between prevention and enhancement (34:15), limits of the tech (37:15), what else we can select for (40:00), and unexpected consequences (42:00). DEC 3, 2021 

This is an earlier podcast I did with Elizabeth and IVF physician Serena Chen (IRMS and Rutgers University Medical School).

See also

First Baby Born from a Polygenically Screened Embryo (video panel)




Embryo Screening for Polygenic Disease Risk: Recent Advances and Ethical Considerations (Genes 2021 Special Issue)
It is a great honor to co-author a paper with Simon Fishel, the last surviving member of the team that produced the first IVF baby (Louise Brown) in 1978. His mentors and collaborators were Robert Edwards (Nobel Prize 2010) and Patrick Steptoe (who died before the 2010 prize was awarded). ... 
Today millions of babies are produced through IVF. In most developed countries roughly 3-5 percent of all births are through IVF, and in Denmark the fraction is about 10 percent! But when the technology was first introduced with the birth of Louise Brown in 1978, the pioneering scientists had to overcome significant resistance. 
There may be an alternate universe in which IVF was not allowed to develop, and those millions of children were never born. 
Wikipedia: ...During these controversial early years of IVF, Fishel and his colleagues received extensive opposition from critics both outside of and within the medical and scientific communities, including a civil writ for murder.[16] Fishel has since stated that "the whole establishment was outraged" by their early work and that people thought that he was "potentially a mad scientist".[17] 
I predict that within 5 years the use of polygenic risk scores will become common in some health systems (i.e., for adults) and in IVF. Reasonable people will wonder why the technology was ever controversial at all, just as in the case of IVF.

Friday, December 03, 2021

Adventures of a Mathematician: Ulam, von Karman, Wiener, and the Golem

 

Ulam's Adventures of a Mathematician was recently made into a motion picture -- see trailer above. 

I have an old copy purchased from the Caltech bookstore. When I flip through the book it never fails to reward with a wonderful anecdote from an era of giants.
[Ulam] ... In Israel many years later, while I was visiting the town of Safed with von Kárman, an old Orthodox Jewish guide with earlocks showed me the tomb of Caro in an old graveyard. When I told him that I was related to a Caro, he fell on his knees... Aunt Caro was directly related to the famous Rabbi Loew of sixteenth-century Prague, who, the legend says, made the Golem — the earthen giant who was protector of the Jews. (Once, when I mentioned this connection with the Golem to Norbert Wiener, he said, alluding to my involvement with Los Alamos and with the H-bomb, "It is still in the family!")



See also von Neumann: "If only people could keep pace with what they create"
One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari: 
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country. The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..." 
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills. 
In his obituary for John von Neumann, Ulam recalled a conversation with vN about the 
"... ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." 
This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)
More Ulam from this blog, including:
[p.107] I told Banach about an expression Johnny had used with me in Princeton before stating some non-Jewish mathematician's result, "Die Goim haben den folgenden satz beweisen" (The goys have proved the following theorem). Banach, who was pure goy, thought it was one of the funniest sayings he had ever heard. He was enchanted by its implication that if the goys could do it, then Johnny and I ought to be able to do it better. Johnny did not invent this joke, but he liked it and we started using it.

Saturday, November 27, 2021

Social and Educational Mobility: Denmark vs USA (James Heckman)




Despite generous social programs such as free pre-K education, free college, and massive transfer payments, Denmark is similar to the US in key measures of inequality of outcomes, such as inequality in educational attainment and cognitive test scores. 

While transfer payments can equalize, to some degree, disposable income, they do not seem to be able to compensate for large family effects on individual differences in development. 

These observations raise the following questions: 

1. What is the best case scenario for the US if all progressive government programs are implemented with respect to child development, free high quality K12 education, free college, etc.?

2. What is the causal mechanism for stubborn inequality of outcomes, transmitted from parent to child (i.e., within families)? 

Re #2: Heckman and collaborators focus on environmental factors, but do not (as far as I can tell) discuss genetic transmission. We already know that polygenic scores are correlated with the education and income levels of parents, and (from adoption studies) that children tend to resemble their biological parents much more strongly than their adoptive parents. These results suggest that genetic transmission of inequality may dominate environmental transmission.
  
See 



The Contribution of Cognitive and Noncognitive Skills to Intergenerational Social Mobility (McGue et al. 2020)


Note: Denmark is very homogeneous in ancestry, and the data presented in these studies (e.g., polygenic scores and social mobility) are also drawn from European-ancestry cohorts. The focus here is not on ethnicity or group differences between ancestry groups. The focus is on social and educational mobility within European-ancestry populations, with or without generous government programs supporting free college education, daycare, pre-K, etc.

Lessons for Americans from Denmark about inequality and social mobility 
James Heckman and Rasmus Landersø 
Abstract Many progressive American policy analysts point to Denmark as a model welfare state with low levels of income inequality and high levels of income mobility across generations. It has in place many social policies now advocated for adoption in the U.S. Despite generous Danish social policies, family influence on important child outcomes in Denmark is about as strong as it is in the United States. More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects. Powerful forces not easily mitigated by Danish-style welfare state programs operate in both countries.
Also discussed in this episode of EconTalk podcast. Russ does not ask the obvious question about disentangling family environment from genetic transmission of inequality.
 

The figure below appears in Game Over: Genomic Prediction of Social Mobility. It shows SNP-based polygenic score and life outcome (socioeconomic index, on vertical axis) in four longitudinal cohorts, one from New Zealand (Dunedin) and three from the US. Each cohort (varying somewhat in size) has thousands of individuals, ~20k in total (all of European ancestry). The points displayed are averages over bins containing 10-50 individuals. For each cohort, the individuals have been grouped by childhood (family) socioeconomic status. Social mobility can be predicted from polygenic score. Note that higher SES families tend to have higher polygenic scores on average -- which is what one might expect from a society that is at least somewhat meritocratic. The cohorts have not been used in training -- this is true out-of-sample validation. Furthermore, the four cohorts represent different geographic regions (even, different continents) and individuals born in different decades.
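For readers unfamiliar with this kind of plot, here is a toy simulation (entirely synthetic numbers of my own choosing, not the actual cohort data or scores) that mimics the binned-average presentation described above: individuals get a polygenic score correlated with family SES, and an adult outcome that depends on both; within each childhood-SES group, individuals are binned by score and the mean outcome per bin is reported.

```python
# Synthetic illustration of "bin by polygenic score within childhood-SES group".
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

family_ses = rng.normal(size=n)                                      # childhood family SES
pgs = 0.4 * family_ses + np.sqrt(1 - 0.4**2) * rng.normal(size=n)    # score, correlated with SES (assumed)
outcome = 0.3 * pgs + 0.3 * family_ses + rng.normal(size=n)          # adult socioeconomic index (assumed)

# Split into low / middle / high childhood SES groups (terciles)
groups = np.digitize(family_ses, np.quantile(family_ses, [1/3, 2/3]))

for g, label in enumerate(["low SES", "mid SES", "high SES"]):
    idx = np.where(groups == g)[0]
    order = idx[np.argsort(pgs[idx])]
    bins = np.array_split(order, 20)                                 # ~20 score bins per group
    lo, hi = outcome[bins[0]].mean(), outcome[bins[-1]].mean()
    print(f"{label}: mean outcome rises from {lo:+.2f} (lowest-score bin) "
          f"to {hi:+.2f} (highest-score bin)")
```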




The figure below appears in More on SES and IQ.

Where is the evidence for environmental effects described above in Heckman's abstract: "More advantaged families are better able to access, utilize, and influence universally available programs. Purposive sorting by levels of family advantage create neighborhood effects"? Do parents not seek these advantages for their adopted children as well as for their biological children? Or is there an entirely different causal mechanism based on shared DNA?

 


 

Sunday, November 14, 2021

Has Hawking's Black Hole Information Paradox Been Resolved?



In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
https://arxiv.org/abs/hep-th/9209058 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form  ψ[g,M]  where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in location of the BH is of order the initial mass squared (in Planck units), hence much larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states in a large Hilbert space Y, the density matrix 

 ρ(X) = Tr_{X̄} |ψ⟩⟨ψ|    (X̄ denotes the complement of X in Y) 

describing (small) region X will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.


These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 


2. BH evaporation leads to macroscopic superposition states; why this invalidates No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.


3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention it was understood that the approximation of fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically, the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces which represent distinct geometries must be considered. In the "quantum hair" results described at bottom, it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 


This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
https://arxiv.org/abs/2012.05770 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 

The discussion below suggests that each internal BH state described by the coefficients { c_n } results in a different final radiation state -- i.e., the process can be unitary.





Note Added

In the comments David asks about the results described in this 2020 Quanta article, The Most Famous Paradox in Physics Nears Its End.

I thought about discussing those results in the post, but 1. it was already long, and 2. they are using a very different AdS approach. 

However, Raju does discuss these papers in his review. 

Most of the theorists in the Quanta article accept the basic formulation of the information paradox, so it's surprising to them that they see indications of unitary black hole evaporation. As I mentioned in the post I don't think the paradox itself is well-established, so I am not surprised. 

I think that the quantum hair results are important because they show explicitly that the internal state of the hole affects the quantum state of the graviton field, which then influences the Hawking radiation production. 

It was pointed out by Papadodimas and Raju, and also in my 2013 paper arXiv:1308.5686, that tiny correlations in the radiation density matrix could purify it. That is, the Hawking density matrix plus exp(-S) corrections (which everyone expects are there) could result from a pure state in the large Hilbert space Y, which has dimensionality ~ exp(+S). This is related to what I wrote in the post: start with a pure state in Y and trace over the complement of X. The resulting ρ(X) is exponentially close to thermal (maximally mixed) even though it came from a pure state.

Wednesday, November 10, 2021

Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity (published version)

This is the published version of our recent paper. See previous discussion of the arXiv preprint: Finitism and Physics.
Physics Letters B Volume 823, 10 December 2021, 136763 
Fundamental limit on angular measurements and rotations from quantum mechanics and general relativity 
Xavier Calmet and Stephen D.H. Hsu 
https://doi.org/10.1016/j.physletb.2021.136763 
Abstract 
We show that the precision of an angular measurement or rotation (e.g., on the orientation of a qubit or spin state) is limited by fundamental constraints arising from quantum mechanics and general relativity (gravitational collapse). The limiting precision is 1/r in Planck units, where r is the physical extent of the (possibly macroscopic) device used to manipulate the spin state. This fundamental limitation means that spin states cannot be experimentally distinguished from each other if they differ by a sufficiently small rotation. Experiments cannot exclude the possibility that the space of quantum state vectors (i.e., Hilbert space) is fundamentally discrete, rather than continuous. We discuss the implications for finitism: does physics require infinity or a continuum?
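For a sense of scale, here is a quick numerical illustration of the 1/r bound quoted in the abstract (the device sizes below are arbitrary examples, not cases worked out in the paper):

```python
# Limiting angular precision ~ 1/r in Planck units: convert the device size r
# to Planck lengths and invert. Illustrative numbers only.
import math

l_planck = 1.616e-35                         # Planck length in meters (approximate)

for r_meters in [1e-3, 1.0, 1e3]:            # mm-scale, m-scale, km-scale device
    r_planck_units = r_meters / l_planck     # device size in Planck units
    dtheta = 1.0 / r_planck_units            # limiting angular precision (radians)
    print(f"r = {r_meters:8.0e} m  ->  delta_theta ~ {dtheta:.1e} rad")
```

For a meter-scale device this gives an angular precision floor of order 10^-35 radians, far beyond any conceivable experiment, which is why the continuum question cannot be settled empirically.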

In the revision we edited the second paragraph below to clarify the history regarding Hilbert's program, Gödel, and the status of the continuum in analysis. The continuum was quite controversial at the time and was one of the primary motivations for Hilbert's axiomatization. There is a kind of modern middle-brow view that epsilon and delta proofs are sufficient to resolve the question of rigor in analysis, but this ignores far more fundamental problems that forced Hilbert, von Neumann, Weyl, etc. to resort to logic and set theory.

In the early 20th century little was known about neuroscience (i.e., our finite brains made of atoms), and it had not been appreciated that the laws of physics themselves might contain internal constraints that prevent any experimental test of infinitely continuous structures. Hence we can understand Weyl's appeal to human intuition as a basis for the mathematical continuum (Platonism informed by Nature; Gödel was also a kind of Platonist), even if today it appears implausible. Now we suspect that our minds are simply finite machines and nothing more, and that Nature itself does not require a continuum -- i.e., it can be simulated perfectly well with finitary processes.

It may come as a surprise to physicists that infinity and the continuum are even today the subject of debate in mathematics and the philosophy of mathematics. Some mathematicians, called finitists, accept only finite mathematical objects and procedures [30]. The fact that physics does not require infinity or a continuum is an important empirical input to the debate over finitism. For example, a finitist might assert (contra the Platonist perspective adopted by many mathematicians) that human brains built from finite arrangements of atoms, and operating under natural laws (physics) that are finitistic, are unlikely to have trustworthy intuitions concerning abstract concepts such as the continuum. These facts about the brain and about physical laws stand in contrast to intuitive assumptions adopted by many mathematicians. For example, Weyl (Das Kontinuum [26], [27]) argues that our intuitions concerning the continuum originate in the mind's perception of the continuity of space-time.
There was a concerted effort beginning in the 20th century to place infinity and the continuum on a rigorous foundation using logic and set theory. As demonstrated by Gödel, Hilbert's program of axiomatization using finitary methods (originally motivated, in part, by the continuum in analysis) could not succeed. Opinions are divided on modern approaches which are non-finitary. For example, the standard axioms of Zermelo-Fraenkel (ZFC) set theory applied to infinite sets lead to many counterintuitive results such as the Banach-Tarski Paradox: given any two solid objects, the cut pieces of either one can be reassembled into the other [28]. When examined closely all of the axioms of ZFC (e.g., Axiom of Choice) are intuitively obvious if applied to finite sets, with the exception of the Axiom of Infinity, which admits infinite sets. (Infinite sets are inexhaustible, so application of the Axiom of Choice leads to pathological results.) The Continuum Hypothesis, which proposes that there is no cardinality strictly between that of the integers and reals, has been shown to be independent (neither provable nor disprovable) in ZFC [29]. Finitists assert that this illustrates how little control rigorous mathematics has on even the most fundamental properties of the continuum.
Weyl was never satisfied that the continuum and classical analysis had been placed on a solid foundation.
Das Kontinuum (Stanford Encyclopedia of Philosophy)
Another mathematical “possible” to which Weyl gave a great deal of thought is the continuum. During the period 1918–1921 he wrestled with the problem of providing the mathematical continuum—the real number line—with a logically sound formulation. Weyl had become increasingly critical of the principles underlying the set-theoretic construction of the mathematical continuum. He had come to believe that the whole set-theoretical approach involved vicious circles[11] to such an extent that, as he says, “every cell (so to speak) of this mighty organism is permeated by contradiction.” In Das Kontinuum he tries to overcome this by providing analysis with a predicative formulation—not, as Russell and Whitehead had attempted, by introducing a hierarchy of logically ramified types, which Weyl seems to have regarded as excessively complicated—but rather by confining the comprehension principle to formulas whose bound variables range over just the initial given entities (numbers). Accordingly he restricts analysis to what can be done in terms of natural numbers with the aid of three basic logical operations, together with the operation of substitution and the process of “iteration”, i.e., primitive recursion. Weyl recognized that the effect of this restriction would be to render unprovable many of the central results of classical analysis—e.g., Dirichlet’s principle that any bounded set of real numbers has a least upper bound[12]—but he was prepared to accept this as part of the price that must be paid for the security of mathematics.
As Weyl saw it, there is an unbridgeable gap between intuitively given continua (e.g. those of space, time and motion) on the one hand, and the “discrete” exact concepts of mathematics (e.g. that of natural number[13]) on the other. The presence of this chasm meant that the construction of the mathematical continuum could not simply be “read off” from intuition. It followed, in Weyl’s view, that the mathematical continuum must be treated as if it were an element of the transcendent realm, and so, in the end, justified in the same way as a physical theory. It was not enough that the mathematical theory be consistent; it must also be reasonable.
Das Kontinuum embodies Weyl’s attempt at formulating a theory of the continuum which satisfies the first, and, as far as possible, the second, of these requirements. In the following passages from this work he acknowledges the difficulty of the task:
… the conceptual world of mathematics is so foreign to what the intuitive continuum presents to us that the demand for coincidence between the two must be dismissed as absurd. (Weyl 1987, 108)
… the continuity given to us immediately by intuition (in the flow of time and of motion) has yet to be grasped mathematically as a totality of discrete “stages” in accordance with that part of its content which can be conceptualized in an exact way. 
See also The History of the Planck Length and the Madness of Crowds.

Tuesday, November 09, 2021

The Balance of Power in the Western Pacific and the Death of the Naval Surface Ship

Recent satellite photos suggest that PLARF (People's Liberation Army Rocket Forces) have been testing against realistic moving ship targets in the deserts of the northwest. Note the ship model is on rails in the second photo below. Apparently there are over 30km of rail lines, allowing the simulation of evasive maneuvers by an aircraft carrier (third figure below).


Large surface ships such as aircraft carriers are easy to detect (e.g., satellite imaging via radar sensors), and missiles (especially those with maneuver capability) are very difficult to stop. Advances in AI / machine learning tend to favor missile targeting, not defense of carriers. 

The key capability is autonomous final target acquisition by the missile at a range of tens of km -- i.e., the distance the ship can move during missile flight time after launch. State of the art air to air missiles already do this in BVR (Beyond Visual Range) combat. Note, they are much smaller than anti-ship missiles, with presumably much smaller radar seekers, yet are pursuing a smaller, faster, more maneuverable target (enemy aircraft). 

It seems highly likely that the technical problem of autonomous targeting of a large surface ship during final missile approach was solved some time ago by the PLARF.

With this capability in place, one only has to localize the carrier to within a few tens of km for the initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, etc. Of course, if the satellite retains coverage of the ship during the entire attack and can communicate with the missile, even this smart final targeting is not required.
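
A quick back-of-envelope check on the "few tens of km" figure. The numbers below (launch range, average missile speed, ship speed) are illustrative assumptions, not claimed specifications of any particular system:

```python
# How far can a carrier move between missile launch and arrival?
# All numbers are illustrative assumptions.
launch_range_km = 1500          # assumed launch distance
missile_speed_kmh = 6 * 1225    # ~Mach 6 average (sea-level sound speed ~1225 km/h)
ship_speed_kmh = 55             # ~30 knots

flight_time_h = launch_range_km / missile_speed_kmh
ship_displacement_km = ship_speed_kmh * flight_time_h

print(f"flight time ~ {flight_time_h * 60:.0f} min")         # ~12 min
print(f"ship displacement ~ {ship_displacement_km:.0f} km")  # ~11 km
```

So even a fast-moving carrier opens up only on the order of 10 km of positional uncertainty during the missile's flight, which is exactly the regime a seeker with tens-of-km acquisition range is built for.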

This is what a ship looks like to Synthetic Aperture Radar (SAR) from Low Earth Orbit (LEO). The PRC has had a sophisticated system (Yaogan) in place for almost a decade, and continues to launch new satellites for this purpose.



See LEO SAR, hypersonics, and the death of the naval surface ship:

In an earlier post we described how sea blockade (e.g., against Japan or Taiwan) can be implemented using satellite imaging and missiles, drones, AI/ML. Blue water naval dominance is not required. 
PLAN/PLARF can track every container ship and oil tanker as they approach Kaohsiung or Nagoya. All are in missile range -- sitting ducks. Naval convoys will be just as vulnerable. 
Sink one tanker or cargo ship, or just issue a strong warning, and no shipping company in the world will be stupid enough to try to run the blockade. 

But, But, But, !?! ...
USN guy: We'll just hide the carrier from the satellite and missile seekers using, you know, countermeasures! [Aside: don't cut my carrier budget!] 
USAF guy: Uh, the much smaller AESA/IR seeker on their AAM can easily detect an aircraft from much longer ranges. How will you hide a huge ship? 
USN guy: We'll just shoot down the maneuvering hypersonic missile using, you know, methods. [Aside: don't cut my carrier budget!] 
Missile defense guy: Can you explain to us how to do that? If the incoming missile maneuvers, we have to adapt the interceptor trajectory (in real time) to where we project the missile to be after some delay. But we can't know its trajectory ahead of time, unlike for a ballistic (non-maneuvering) warhead.
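
To put rough numbers on the missile defense guy's point (illustrative values only, not the performance of any real interceptor or glider): if the incoming missile can pull lateral acceleration a, and the defender's projection of its future position is effectively tau seconds stale, an unanticipated turn displaces it by roughly (1/2) a tau^2 from the predicted intercept point.

```python
# Displacement of a maneuvering target from its predicted position after a
# projection delay tau, assuming constant lateral acceleration over the delay.
# Accelerations and delays below are illustrative assumptions.
G = 9.81  # m/s^2

def displacement_m(accel_g: float, tau_s: float) -> float:
    return 0.5 * accel_g * G * tau_s ** 2

for tau in (2, 5, 10):        # projection / reaction delay in seconds
    for accel in (5, 15):     # target lateral acceleration in g
        print(f"tau={tau:2d}s, a={accel:2d}g -> miss ~ {displacement_m(accel, tau):6.0f} m")
```

Even a few seconds of delay against a target pulling double-digit g's opens up hundreds of meters to kilometers of uncertainty, which the interceptor must then close with its own limited maneuvering authority.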
More photos and maps in this 2017 post.

Monday, November 01, 2021

Preimplantation Genetic Testing for Aneuploidy: New Methods and Higher Pregnancy Rates


[ NOTE ADDED NOVEMBER 12 2021: Research seminar videos from ASRM are embargoed until 12/31. So this video will not be available until then. ]

This talk describes a study of PGT-A (Preimplantation Genetic Testing for Aneuploidy, i.e., testing for chromosomal normality) using two different methods: NGS (next-generation sequencing) vs. the new SNP array platform (LifeView) developed by my startup Genomic Prediction.

The SNP array platform allows very accurate genotyping of each embryo at ~1 million locations in the genome, and the subsequent bioinformatic analysis produces a much more accurate prediction of chromosomal normality than the older methods. 
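
As a toy illustration of one standard signal used in SNP-array aneuploidy detection -- the B-allele frequency (BAF) at heterozygous sites -- and emphatically not a description of the actual LifeView pipeline: on a normal (disomic) chromosome, heterozygous SNPs cluster near a BAF of 0.5, while on a trisomic chromosome they split toward roughly 1/3 and 2/3.

```python
import numpy as np

def looks_trisomic(baf_het_sites, threshold=0.08):
    """Flag a chromosome as possibly trisomic from heterozygous-site BAFs.

    baf_het_sites: B-allele frequencies at heterozygous SNPs on one chromosome.
    threshold: how far the het cluster must sit from 0.5 (illustrative value).
    """
    deviation = np.median(np.abs(np.asarray(baf_het_sites) - 0.5))
    return deviation > threshold

# Simulated example: 500 heterozygous SNPs on a disomic vs. a trisomic chromosome
rng = np.random.default_rng(0)
disomic  = rng.normal(0.5, 0.03, 500)
trisomic = np.concatenate([rng.normal(1/3, 0.03, 250), rng.normal(2/3, 0.03, 250)])
print(looks_trisomic(disomic))   # False
print(looks_trisomic(trisomic))  # True
```

Real pipelines are far more sophisticated (intensity signals, noise modeling, mosaicism estimates), but the basic point stands: dense, accurate genotypes make copy-number deviations much easier to see.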

Millions of embryos are screened each year using PGT-A, about 60% of all IVF embryos in the US. 

Klaus Wiemer is the laboratory director for POMA Fertility near Seattle. He conducted this study independently, without informing Genomic Prediction. There are ~3000 embryos in the dataset, all biopsied at POMA, with samples allocated to three testing labs (A, B, C) using the two different methods. The family demographics (e.g., maternal age) were similar in all three groups. Lab B is Genomic Prediction; labs A and C, which use NGS, are two of the largest IVF testing labs in the world.

The results imply lower false-positive rates, lower false-negative rates, and higher accuracy overall from our methods. These lead to a significantly higher pregnancy success rate.

The new technology has the potential to help millions of families all over the world.
 
Comparison of Outcomes from Concurrent Use of 3 Different PGT-A Laboratories 
Oct 18 2021 annual meeting of the American Society for Reproductive Medicine (ASRM) 
Klaus Wiemer, PhD

While the population incidence of Down syndrome (i.e., among babies born) is low -- roughly 1 in 700 births -- the incidence of aneuploidy in embryos is much higher. An aneuploidy is more likely to result in a failed pregnancy than in the birth of a Down syndrome baby -- e.g., because the embryo fails to implant, or does not develop properly during the pregnancy.

False positives mean fewer healthy embryos available for transfer, while false negatives mean that problematic embryos are transferred. Both types of error reduce the overall pregnancy success rate.
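
A toy calculation (purely hypothetical numbers, not data from the study) shows how the sensitivity and specificity of the screen translate into healthy embryos lost and abnormal embryos transferred:

```python
def screen_outcomes(n_embryos, aneuploid_frac, sensitivity, specificity):
    """Hypothetical cohort: how test accuracy shapes the transferable pool."""
    aneuploid = n_embryos * aneuploid_frac
    euploid = n_embryos - aneuploid
    false_neg = aneuploid * (1 - sensitivity)   # abnormal embryos called normal
    false_pos = euploid * (1 - specificity)     # healthy embryos discarded
    pool = (euploid - false_pos) + false_neg    # embryos available for transfer
    frac_euploid = (euploid - false_pos) / pool
    return false_pos, false_neg, frac_euploid

# 100 embryos, 50% aneuploid, comparing a less and a more accurate test
for sens, spec in [(0.95, 0.90), (0.99, 0.98)]:
    fp, fn, frac = screen_outcomes(100, 0.5, sens, spec)
    print(f"sens={sens}, spec={spec}: {fp:.1f} healthy embryos discarded, "
          f"{fn:.1f} abnormal embryos in the transfer pool, "
          f"{100 * frac:.0f}% of the pool euploid")
```

The two effects compound over an IVF cycle: fewer healthy embryos to work with, and a lower success rate per transfer.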




Sunday, October 31, 2021

Demis Hassabis: Using AI to accelerate scientific discovery (protein folding) + Bonus: Bruno Pontecorvo

 


Recent talk (October 2021) by Demis Hassabis on the use of AI in scientific research. Second half of the talk is focused on protein folding. 

Below is part 2, by the AlphaFold research lead, which has more technical details.




Bonus: My former Oregon colleague David Strom recommended a CERN lecture by Frank Close on his biography of physicist (and atomic spy?) Bruno Pontecorvo.  David knew that The Battle of Algiers, which I blogged about recently, was directed by Gillo Pontecorvo, Bruno's brother.

Below is the closest thing I could find on YouTube -- it has better audio and video quality than the CERN talk. 

The amazing story of Bruno Pontecorvo involves topics such as the first nuclear reactions and reactors (work with Enrico Fermi), the Manhattan Project, neutrino flavors and oscillations, supernovae, atomic espionage, the KGB, Kim Philby, and the quote: 
I want to be remembered as a great physicist, not as your fucking spy!

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)




Sadly, the hypothesis described below is very plausible. 

The exception is that new tools or technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
 
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by real insight or the ability to produce real results.
 
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance: The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract: In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.
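
Here is a minimal sketch (my construction, not anything from the original post) of the kind of toy model described above: scientists hold a binary belief, experiments are noisy, and each scientist updates either from their own experiment or by deferring to the field's majority view. With strong enough conformity, an initially false consensus persists indefinitely.

```python
import random

def simulate(n_scientists=200, steps=200, p_correct_exp=0.7, conformity=0.9, seed=0):
    """Fraction of scientists holding the true belief after `steps` rounds."""
    rng = random.Random(seed)
    truth = True
    # Start with 80% of the field holding the false belief
    n_false = int(0.8 * n_scientists)
    beliefs = [False] * n_false + [True] * (n_scientists - n_false)
    for _ in range(steps):
        majority = sum(beliefs) > n_scientists / 2
        for i in range(n_scientists):
            if rng.random() < conformity:
                beliefs[i] = majority  # defer to the current consensus
            else:
                # noisy experiment that points to the truth with prob p_correct_exp
                beliefs[i] = truth if rng.random() < p_correct_exp else (not truth)
    return sum(beliefs) / n_scientists

print(simulate(conformity=0.1))   # weak conformity: the majority converges to the truth
print(simulate(conformity=0.95))  # strong conformity: the false consensus persists
```

This leaves out most of the structure mentioned above (distributions over experimental accuracy, bounded cognition, field size), but even this crude version has parameter regions in which most beliefs held by individual scientists remain false.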
