Showing posts with label history of science. Show all posts

Thursday, October 26, 2023

Paradise Lost - Migdal, Polyakov, and Landau

This is a placeholder for a longer post I hope to expand on in the future, based on this essay: 


Migdal and Polyakov were two of the great Soviet physicists of their generation. Polyakov is on the upper left and Migdal the lower right.




Wikipedia: Migdal, Polyakov

The essay describes their education as young physicists. They were examined by Landau himself at age 15, and by age 19 had written a paper anticipating the Higgs Mechanism and the role of spontaneous symmetry breaking in gauge theory.

Migdal: Khalat was a genius of political intrigue. Being married into Inner Circle of the Soviet System (his wife Valya is the daughter of a legendary Revolution hero), he used all his connections and all the means to achieve his secret goal — assemble the best brains and let them Think Freely. 
On the surface, his pitch to the Party went as follows. “The West is attacking us for anti-Semitism. The best way to counter this slander is to create an Institute, where Jews are accepted, allowed to travel abroad and generally look happy. This can be a very small Institute, by standards of Atomic Project, it will have no secret military research, it will cost you very little, but it will help “Rasryadka” (Détente). These Jews will be so happy, they will tell all their Jewish friends in the West how well they live. And if they won’t –it is after all, us who decide which one goes abroad and which one stays home. They are smart kids, they will figure out which side of the toast is buttered.” 
As I put it, Khalat sold half of his soul to Devil and used the money to save another half. I truly respect him for that, now once I learned what it takes to create a startup and try to protect it against hostile world. 
As many crazy plans before it, this plan really worked. Best brains were assembled in Landau Institute, they were given a chance to happily solve problems without being forced to eat political shit like the whole country and – yes, they sometimes traveled abroad and made friends in the West. 
In a way the plan worked too well — we became so worldly and so free that we could no longer be controlled. And, needless to say, our friends in the West became closer to us than our curators in KGB.
I was in the 1990s generation of American physicists who had to contend on the job market with a stream of great theorists from the former Soviet Union. Both Migdal and Polyakov ended up at Princeton, and there were many others in their wake, closer to my age.

Thursday, October 05, 2023

Yasheng Huang: China's Examination System and its impact on Politics, Economy, Innovation — Manifold #45

 

Yasheng Huang is the Epoch Foundation Professor of Global Economics and Management at the MIT Sloan School of Management. His new book is The Rise and Fall of the EAST: How Exams, Autocracy, Stability, and Technology Brought China Success, and Why They Might Lead to Its Decline. 

Steve and Yasheng discuss: 

0:00 Introduction 
1:11 From Beijing to Harvard in the 1980s 
15:29 Civil service exams and Huang's new book, "The Rise and Fall of the EAST" 
37:14 Two goals: Developing human capital and indoctrination 
48:33 Impact of the exam system 
57:04 China's innovation peak and decline 
1:12:23 Collaboration and relationship with the West 
1:21:31 How will the U.S.-China relationship evolve? 

Audio-only version, and transcript: 

Yasheng Huang at MIT 

Web site: 

Thursday, December 15, 2022

Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — Manifold #26

 

Geoffrey Miller is an American evolutionary psychologist, author, and a professor of psychology at the University of New Mexico. He is known for his research on sexual selection in human evolution. 


Miller's Wikipedia page.

Steve and Geoffrey discuss: 

0:00 Geoffrey Miller's background, childhood, and how he became interested in psychology 
14:44 How evolutionary psychology is perceived and where the field is going 
38:23 The value of higher education: sobering facts about retention 
49:00 Dating, pickup artists, and relationships 
1:11:27 Polyamory 
1:24:56 FTX, poly, and effective altruism 
1:34:31 AI alignment

Thursday, October 20, 2022

Discovering the Multiverse: Quantum Mechanics and Hugh Everett III, with Peter Byrne — Manifold #22

 

Peter Byrne is an investigative reporter and science writer based in Northern California. His popular biography, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family (Oxford University Press, 2010), was followed by The Everett Interpretation of Quantum Mechanics, Collected Works 1957-1980 (Princeton University Press, 2012), co-edited with philosopher of science Jeffrey A. Barrett of UC Irvine. 

Everett's formulation of quantum mechanics, which implies the existence of a quantum multiverse, is favored by a significant (and growing) fraction of working physicists. 

Steve and Peter discuss: 

0:00 How Peter Byrne came to write a biography of Hugh Everett 
18:09 Everett’s personal life and groundbreaking thesis as a catalyst for the book 
24:00 Everett and Decoherence 
31:25 Reaction of other physicists to Everett’s many worlds theory 
40:46 Steve’s take on Everett’s many worlds theory 
43:41 Peter on the bifurcation of science and philosophy 
49:21 Everett’s post-academic life 
52:58 How Hugh Everett is remembered now 


References: 


Wednesday, September 28, 2022

The Future of Human Evolution -- excerpts from podcast interview with Brian Chau



1. The prospect of predicting cognitive ability from DNA, and the consequences. Why the main motivation has nothing to do with group differences. This segment begins at roughly 47 minutes. 

2. Anti-scientific resistance to research on the genetics of cognitive ability. My experience with the Jasons. Blank Slate-ism as a sacralized, cherished belief of social progressives. This segment begins at roughly 1 hour 7 minutes. 


1. Starts at roughly 47 minutes. 

Okay, let's just say hypothetically my billionaire friend is buddies with the CEO of 23andMe and let's say on the down low we collected some SAT scores of 1M or 2M people. I think there are about 10M people that have done 23andMe, let's suppose I manage to collect 1-2M scores for those people. I get them to opt in and agree to the study and da da da da and then Steve runs his algos and you get this nice predictor. 

But you’ve got to do it on the down low. Because if it leaks out that you're doing it, people are going to come for you. The New York Times is going to come for you, everybody's going to come for you. They're going to try to trash the reputation of 23andMe. They're going to trash the reputation of the billionaire. They're going to trash the reputation of the scientists who are involved in this. But suppose you get it done. And getting it done, as you know very well, is a simple run on AWS, and you end up with this predictor which, wow, is really complicated -- it depends on 20k SNPs in the genome ... 

Anybody with an ounce of intellectual integrity would look back at their copy of The Mismeasure of Man, which has sat magisterially on their bookshelf since they were forced to buy it as a freshman at Harvard. They would say, “WOW! I guess I can just throw that in the trash, right? I can just throw that in the trash.” 

But the set of people who have intellectual integrity and can process new information and then reformulate the opinion that they absorbed through social convention – i.e., that Gould is a good person and a good scientist and wise -- is tiny. The set of people who can actually do that is like 1% of the population. So you know maybe none of this matters, but in the long run it does matter. … 

Everything else about that hypothetical -- the social scientist running the longitudinal study, getting the predictor into his grubby little hands and publishing the validation, the people trying to force you to studiously ignore the results -- all of that has actually already happened. We already have something which correlates ~0.4 with IQ. Everything else I said has already been done, but it's just being studiously ignored by the right-thinking people. 

 … 

Some people could misunderstand our discussion as being racist. I'm not saying that any of this has anything to do with group differences between ancestry groups. I'm just saying, e.g., within the white population of America, it is possible to predict from embryo DNA which of 2 brothers raised in the same family will be the smart one and which one will struggle in school. Which one will be the tall one and which one will be not so tall. 



2. Starts at roughly 1 hour 7 minutes. 

I've been in enough places where this kind of research is presented in seminar rooms and conferences and seen very negative attacks on the individuals presenting the results. 

I'll give you a very good example. There used to be a thing called the Jasons. During the cold war there was a group of super smart scientists called the Jasons. They were paid by the government to get together in the summers and think about technological issues that might be useful for defense and things like war fighting. … 

I had a meeting with the (current) Jasons. I was invited to a place near Stanford to address them about genetic engineering, genomics, and all this stuff. I thought okay, these are serious scientists and I'll give them a very nice overview of the progress in this field. This anecdote took place just a few years ago. 

One of the Jasons present was a biochemist, but not an expert on genomics or machine learning. This biochemist asked me a few sharp questions, which were easy to answer. But then at some point he just couldn't take it anymore -- he grabbed all his stuff and ran out of the room. ...

Monday, September 05, 2022

Lunar Society (Dwarkesh Patel) Interview

 

Dwarkesh did a fantastic job with this interview. He read the scientific papers on genomic prediction and his questions are very insightful. Consequently we covered the important material that people are most confused about. 

Don't let the sensationalistic image above deter you -- I highly recommend this podcast!

0:00:00 Intro 
0:00:49 Feynman’s advice on picking up women 
0:12:21 Embryo selection 
0:24:54 Why hasn't natural selection already optimized humans? 
0:34:48 Aging 
0:43:53 First Mover Advantage 
0:54:24 Genomics in dating 
1:01:06 Ancestral populations 
1:08:33 Is this eugenics? 
1:16:34 Tradeoffs to intelligence 
1:25:36 Consumer preferences 
1:30:49 Gwern 
1:35:10 Will parents matter? 
1:46:00 Wordcels and shape rotators 
1:58:04 Bezos and brilliant physicists 
2:10:58 Elite education 

If you prefer audio-only click here.

Sunday, June 12, 2022

Von Neumann: The Interaction of Mathematics and Computing, Stan Ulam 1976 talk (video)

 

Von Neumann: The Interaction of Mathematics and Computing, by Stan Ulam. 

See A History of Computing in the Twentieth Century, edited by N. Metropolis, J. Howlett, and Gian-Carlo Rota.
 
More videos from the conference here. (Konrad Zuse!)

See the segment at about 50 minutes for an interesting story about von Neumann's role in the implosion mechanism for atomic bombs. vN apparently solved the geometrical problem for the shape of the explosive lens overnight after hearing a seminar on the topic. Still classified?
To solve this problem, the Los Alamos team planned to produce an “explosive lens”, a combination of different explosives with different shock wave speeds. When molded into the proper shape and dimensions, the high-speed and low-speed shock waves would combine with each other to produce a uniform concave pressure wave with no gaps. This inwardly-moving concave wave, when it reached the plutonium sphere at the center of the design, would instantly squeeze the metal to at least twice the density, producing a compressed ball of plutonium that contained about 5 times the necessary critical mass. A nuclear explosion would then result.
More here.

Thursday, February 03, 2022

ManifoldOne podcast Episode#2: Steve Hsu Q&A

 

Steve answers questions about recent progress in AI/ML prediction of complex traits from DNA, and applications in embryo selection. 

Highlights: 

1. Overview of recent advances in trait prediction 
2. Would cost savings from breast cancer early detection pay for genotyping of all women? 
3. How does IVF work? Economics of embryo selection 
4. Whole embryo genotyping increases IVF success rates (pregnancy per transfer) significantly 
5. Future predictions 


Some relevant scientific papers: 

Preimplantation Genetic Testing for Aneuploidy: New Methods and Higher Pregnancy Rates 

2021 review article on complex trait prediction 

Accurate Genomic Prediction of Human Height 

Genomic Prediction of 16 Complex Disease Risks Including Heart Attack, Diabetes, Breast and Prostate Cancer 

Genetic architecture of complex traits and disease risk predictors 

Sibling validation of polygenic risk scores and complex trait prediction 

Friday, December 31, 2021

Happy New Year 2022!

Best wishes to everyone :-)

I posted this video some years ago, but it was eventually taken down by YouTube. I came across it today and thought I would share it again. 

The documentary includes interviews with Rabi, Ulam, Bethe, Frank Oppenheimer, Robert Wilson, and Dyson.


 


Some other recommendations below. I recently re-listened to these podcasts and quite enjoyed them. The interview with Bobby covers brain mapping, neuroscience, careers in science, biology vs physics. With Ted we go deep into free will, parallel universes, science fiction, and genetic engineering. Bruno shares his insights on geopolitics -- the emerging multipolar world of competition and cooperation between the US, Russia, Europe, and China.









A hopeful note for 2022 and the pandemic:

I followed COVID closely at the beginning (early 2020; search on blog if interested). I called the pandemic well before most people, and even provided some useful advice to a few big portfolio managers as well as to Dom and his team in the UK government. But once I realized that 

the average level among political leaders and "public intellectuals" is too low for serious cost-benefit analysis,

I got bored of COVID and stopped thinking about it.

However with Omicron (thanks to a ping from Dom) I started to follow events again. Preliminary data suggest we may be following the evolutionary path of increased transmissibility but reduced lethality. 

The data from the UK and South Africa already seem to strongly support this conclusion, although both populations have high vaccination levels and/or resistance from the spread of earlier variants. Whether Omicron is "intrinsically" less lethal (i.e., to a population such as the unvaccinated ~40% of the PRC population that has never been exposed to COVID) remains to be seen, but we should know within a month or so.

If, e.g., Omicron results in hospitalization / death at ~1/3 the rate of earlier variants, then we will already be in the flu-like range of severity (whereas original COVID was at most like a ~10x more severe flu). In this scenario rational leaders should just go for herd immunity (perhaps with some cocooning of vulnerable sub-populations) and get it over with.

I'll be watching some of the more functional countries like S. Korea, PRC, etc. to see when/if they relax their strict lockdown and quarantine policies. Perhaps there are some smaller EU countries to keep an eye on as well.

Sunday, November 14, 2021

Has Hawking's Black Hole Information Paradox Been Resolved?



In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
https://arxiv.org/abs/hep-th/9209058 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form  ψ[g,M]  where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in the location of the BH is of order the initial mass squared, which is much larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states in a large Hilbert space Y, the density matrix 

ρ(X) = Tr_{Y∖X} |ψ⟩⟨ψ| 

describing (small) region X will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.
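
The concentration of measure claim is easy to check numerically. Below is a minimal sketch (my own illustration, not taken from any of the papers cited here): draw a Haar-random pure state on a bipartite Hilbert space Y = X ⊗ X̄ with dim(X) much smaller than dim(X̄), trace out X̄, and compare ρ(X) to the maximally mixed state.

```python
import numpy as np

def random_pure_state(dim, rng):
    """Haar-random pure state as a unit-norm complex vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
dim_x, dim_rest = 4, 4096              # small subsystem X, large complement of X in Y
psi = random_pure_state(dim_x * dim_rest, rng)

# Reduced density matrix rho(X) = Tr_{Y\X} |psi><psi|
m = psi.reshape(dim_x, dim_rest)
rho = m @ m.conj().T

max_mixed = np.eye(dim_x) / dim_x      # maximally mixed ("thermal") reference state
print("purity Tr rho^2 =", np.real(np.trace(rho @ rho)), " vs 1/dim_x =", 1 / dim_x)
print("||rho - maximally mixed||_F =", np.linalg.norm(rho - max_mixed))
```

For a random pure state the deviation from the maximally mixed state shrinks rapidly as dim(X̄) grows -- the finite-dimensional analogue of "exponentially close to thermal" when dim(Y) grows like exp(+S).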


These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 


2. BH evaporation leads to macroscopic superposition states; why this invalidates No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.


3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention it was understood that the approximation of fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically, the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces which represent distinct geometries must be considered. In the "quantum hair" results described at bottom, it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 


This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
https://arxiv.org/abs/2012.05770 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 

The discussion below suggests that each internal BH state described by the coefficients { c_n } results in a different final radiation state -- i.e., the process can be unitary.





Note Added

In the comments David asks about the results described in this 2020 Quanta article: The Most Famous Paradox in Physics Nears Its End.

I thought about discussing those results in the post, but 1. it was already long, and 2. they are using a very different AdS approach. 

However, Raju does discuss these papers in his review. 

Most of the theorists in the Quanta article accept the basic formulation of the information paradox, so it's surprising to them that they see indications of unitary black hole evaporation. As I mentioned in the post I don't think the paradox itself is well-established, so I am not surprised. 

I think that the quantum hair results are important because they show explicitly that the internal state of the hole affects the quantum state of the graviton field, which then influences the Hawking radiation production. 

It was pointed out by Papadodimas and Raju, and also in my 2013 paper arXiv:1308.5686, that tiny correlations in the radiation density matrix could purify it. That is, the Hawking density matrix plus exp(-S) corrections (which everyone expects are there) could result from a pure state in the large Hilbert space Y, which has dimensionality ~ exp(+S). This is related to what I wrote in the post: start with a pure state in Y and trace over the complement of X. The resulting ρ(X) is exponentially close to thermal (maximally mixed) even though it came from a pure state.

Sunday, October 31, 2021

Demis Hassabis: Using AI to accelerate scientific discovery (protein folding) + Bonus: Bruno Pontecorvo

 


Recent talk (October 2021) by Demis Hassabis on the use of AI in scientific research. Second half of the talk is focused on protein folding. 

Below is part 2, by the AlphaFold research lead, which has more technical details.




Bonus: My former Oregon colleague David Strom recommended a CERN lecture by Frank Close on his biography of physicist (and atomic spy?) Bruno Pontecorvo.  David knew that The Battle of Algiers, which I blogged about recently, was directed by Gillo Pontecorvo, Bruno's brother.

Below is the closest thing I could find on YouTube -- it has better audio and video quality than the CERN talk. 

The amazing story of Bruno Pontecorvo involves topics such as the first nuclear reactions and reactors (work with Enrico Fermi), the Manhattan Project, neutrino flavors and oscillations, supernovae, atomic espionage, the KGB, Kim Philby, and the quote: 
I want to be remembered as a great physicist, not as your fucking spy!

Saturday, October 30, 2021

Slowed canonical progress in large fields of science (PNAS)




Sadly, the hypothesis described below is very plausible. 

The exception is that new tools or technological breakthroughs, especially those that can be validated relatively easily (e.g., by individual investigators or small labs), may still spread rapidly due to local incentives. CRISPR and Deep Learning are two good examples.
 
New theoretical ideas and paradigms have a much harder time in large fields dominated by mediocre talents: career success is influenced more by social dynamics than by real insight or capability to produce real results.
 
Slowed canonical progress in large fields of science 
Johan S. G. Chu and James A. Evans 
PNAS October 12, 2021 118 (41) e2021636118 
Significance The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas. 
Abstract In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
See also Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character. 
In physics it is said that successful new theories swallow their predecessors whole. That is, even revolutionary new theories (e.g., special relativity or quantum mechanics) reduce to their predecessors in the previously studied circumstances (e.g., low velocity, macroscopic objects). Swallowing whole is a sign of proper function -- it means the previous generation of scientists was competent: what they believed to be true was (at least approximately) true. Their models were accurate in some limit and could continue to be used when appropriate (e.g., Newtonian mechanics). 
In some fields (not to name names!) we don't see this phenomenon. Rather, we see new paradigms which wholly contradict earlier strongly held beliefs that were predominant in the field* -- there was no range of circumstances in which the earlier beliefs were correct. We might even see oscillations of mutually contradictory, widely accepted paradigms over decades. 
It takes a serious interest in the history of science (and some brainpower) to determine which of the two regimes above describes a particular area of research. I believe we have good examples of both types in the academy. 
* This means the earlier (or later!) generation of scientists in that field was incompetent. One or more of the following must have been true: their experimental observations were shoddy, they derived overly strong beliefs from weak data, they allowed overly strong priors to determine their beliefs.
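
Here is a minimal sketch of the kind of toy model referred to above (an illustration of the general mechanism, not the specific model from that post; all parameter values are made up): each scientist updates a binary belief either from a noisy experiment or by conforming to the current majority, and above a critical level of conformity an initially wrong consensus never corrects.

```python
import numpy as np

def run(n_sci=200, n_rounds=100, accuracy=0.65, conformity=0.9, seed=0):
    """Fraction of scientists holding the true belief after n_rounds.

    Each round a scientist sees a noisy experiment that points to the truth
    with probability `accuracy`, but with probability `conformity` simply
    adopts the current majority view instead of updating on the evidence.
    The field starts out leaning toward the wrong answer (70/30).
    """
    rng = np.random.default_rng(seed)
    beliefs = (rng.random(n_sci) < 0.3).astype(int)   # truth = 1; initial canon mostly wrong
    for _ in range(n_rounds):
        majority = int(beliefs.mean() > 0.5)
        evidence = (rng.random(n_sci) < accuracy).astype(int)  # 1 = experiment points to truth
        follow_crowd = rng.random(n_sci) < conformity
        beliefs = np.where(follow_crowd, majority, evidence)
    return beliefs.mean()

print("low conformity :", run(conformity=0.2))   # field self-corrects toward the truth
print("high conformity:", run(conformity=0.98))  # false consensus ossifies
```

Even this crude simulation exhibits the two regimes described above: noisy but independent-minded fields converge on (approximately) true beliefs, while strongly conformist fields lock in whatever the early canon happened to be.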

Tuesday, September 07, 2021

Kathryn Paige Harden Profile in The New Yorker (Behavior Genetics)

This is a good profile of behavior geneticist Paige Harden (UT Austin professor of psychology, former student of Eric Turkheimer), with a balanced discussion of polygenic prediction of cognitive traits and the culture war context in which it (unfortunately) exists.
Can Progressives Be Convinced That Genetics Matters? 
The behavior geneticist Kathryn Paige Harden is waging a two-front campaign: on her left are those who assume that genes are irrelevant, on her right those who insist that they’re everything. 
Gideon Lewis-Kraus
Gideon Lewis-Kraus is a talented writer who also wrote a very nice article on the NYTimes / Slate Star Codex hysteria last summer.

Some references related to the New Yorker profile:
1. The paper Harden was attacked for sharing while a visiting scholar at the Russell Sage Foundation: Game Over: Genomic Prediction of Social Mobility 

2. Harden's paper on polygenic scores and mathematics progression in high school: Genomic prediction of student flow through high school math curriculum 

3. Vox article; Turkheimer and Harden drawn into debate including Charles Murray and Sam Harris: Scientific Consensus on Cognitive Ability?

A recent talk by Harden, based on her forthcoming book The Genetic Lottery: Why DNA Matters for Social Equality



Regarding polygenic prediction of complex traits 

I first met Eric Turkheimer in person (we had corresponded online prior to that) at the Behavior Genetics Association annual meeting in 2012, which was back to back with the International Conference on Quantitative Genetics, both held in Edinburgh that year (photos and slides [1] [2] [3]). I was completely new to the field but they allowed me to give a keynote presentation (if memory serves, together with Peter Visscher). Harden may have been at the meeting but I don't recall whether we met. 

At the time, people were still doing underpowered candidate gene studies (there were many talks on this at BGA although fewer at ICQG) and struggling to understand GCTA (Visscher group's work showing one can estimate heritability from modestly large GWAS datasets, results consistent with earlier twins and adoption work). Consequently a theoretical physicist talking about genomic prediction using AI/ML and a million genomes seemed like an alien time traveler from the future. Indeed, I was.

My talk is largely summarized here:
On the genetic architecture of intelligence and other quantitative traits 
https://arxiv.org/abs/1408.3421 
How do genes affect cognitive ability or other human quantitative traits such as height or disease risk? Progress on this challenging question is likely to be significant in the near future. I begin with a brief review of psychometric measurements of intelligence, introducing the idea of a "general factor" or g score. The main results concern the stability, validity (predictive power), and heritability of adult g. The largest component of genetic variance for both height and intelligence is additive (linear), leading to important simplifications in predictive modeling and statistical estimation. Due mainly to the rapidly decreasing cost of genotyping, it is possible that within the coming decade researchers will identify loci which account for a significant fraction of total g variation. In the case of height analogous efforts are well under way. I describe some unpublished results concerning the genetic architecture of height and cognitive ability, which suggest that roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation. Using results from Compressed Sensing (L1-penalized regression), I estimate the statistical power required to characterize both linear and nonlinear models for quantitative traits. The main unknown parameter s (sparsity) is the number of loci which account for the bulk of the genetic variation. The required sample size is of order 100s, or roughly a million in the case of cognitive ability.
The predictions in my 2012 BGA talk and in the 2014 review article above have mostly been validated. Research advances often pass through the following phases of reaction from the scientific community:
1. It's wrong ("genes don't affect intelligence! anyway too complex to figure out... we hope")
2. It's trivial ("ofc with lots of data you can do anything... knew it all along")
3. I did it first ("please cite my important paper on this")
Or, as sometimes attributed to Gandhi: "First they ignore you, then they laugh at you, then they fight you, then you win."



Technical note

In 2014 I estimated that ~1 million genotype | phenotype pairs would be enough to capture most of the common SNP heritability for height and cognitive ability. This was accomplished for height in 2017. However, the sample size of well-phenotyped individuals is much smaller for cognitive ability, even in 2021, than it was for height in 2017. For example, in UK Biobank the cognitive test is very brief (~5 minutes IIRC, a dozen or so questions), but it has not even been administered to the full cohort as yet. In the Educational Attainment studies the phenotype EA is only moderately correlated (~0.3 or so) with actual cognitive ability.

Hence, although the most recent EA4 results use 3 million individuals [1], and produce a predictor which correlates ~0.4 with actual EA, the statistical power available is still less than what I predicted would be required to train a really good cognitive ability predictor.

In our 2017 height paper, which also briefly discussed bone density and cognitive ability prediction, we built a cognitive ability predictor roughly as powerful as EA3 using only ~100k individuals with the noisy UKB test data. So I remain confident that ~1 million individuals with good cognitive scores (e.g., SAT, AFQT, full IQ test) would deliver results far beyond what we currently have available. We also found that our predictor, built using actual (albeit noisy) cognitive scores, exhibits less power reduction in within-family (sibling) analyses compared to EA. So there is evidence that (no surprise) EA is more influenced by environmental factors, including so-called genetic nurture effects, than is cognitive ability.

A predictor which captures most of the common SNP heritability for cognitive ability might correlate ~0.5 or 0.6 with actual ability. Applications of this predictor in, e.g., studies of social mobility or educational success or even longevity using existing datasets would be extremely dramatic.
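
For readers who want to see the compressed sensing logic in action, here is a toy simulation (idealized assumptions: purely additive effects, independent common SNPs, no LD or population structure; the parameter values are illustrative and this is not our actual pipeline) showing that with roughly 100·s training samples an L1-penalized regression recovers most of the heritable signal.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_snps, s, h2 = 2000, 20, 0.5          # markers, causal variants (sparsity), heritability
n_train, n_test = 100 * s, 1000        # rule of thumb: ~100*s training samples

def simulate_genotypes(n):
    """Standardized genotypes for n individuals (independent biallelic SNPs)."""
    G = rng.binomial(2, 0.5, size=(n, n_snps)).astype(float)
    return (G - G.mean(0)) / G.std(0)

beta = np.zeros(n_snps)
causal = rng.choice(n_snps, s, replace=False)
beta[causal] = rng.normal(size=s)
beta *= np.sqrt(h2 / np.sum(beta**2))  # scale genetic variance to h2

G_tr, G_te = simulate_genotypes(n_train), simulate_genotypes(n_test)
y_tr = G_tr @ beta + rng.normal(scale=np.sqrt(1 - h2), size=n_train)
y_te = G_te @ beta + rng.normal(scale=np.sqrt(1 - h2), size=n_test)

model = LassoCV(cv=5).fit(G_tr, y_tr)  # L1-penalized regression (compressed sensing)
r = np.corrcoef(model.predict(G_te), y_te)[0, 1]
print(f"out-of-sample correlation ~ {r:.2f} (max possible ~ sqrt(h2) = {np.sqrt(h2):.2f})")
```

Shrinking n_train well below ~100·s in this toy setup degrades the out-of-sample correlation sharply, which is the phase-transition behavior behind the sample size estimates quoted above.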

Wednesday, August 11, 2021

Ten Years of Quantum Coherence and Decoherence


In 2010 I attended a meeting on Quantum Coherence and Decoherence in Benasque, Spain. I've reproduced part of my original blog post on the meeting below.

 

September 13, 2010  
Here are the slides for my talk today at Benasque: On the origin of probability in quantum mechanics.

At the end I took a poll of the workshop participants and found that over half agreed with the following statement. About 20 percent were strongly opposed. Note this is a meeting on quantum coherence and decoherence, so there are a lot of practical types here, including experimentalists.

It is plausible (but of course unproven) that unitary evolution of a pure state in a closed system can reproduce, for semi-classical creatures inside the system, all of the phenomenology of the Copenhagen interpretation.

As one insightful participant pointed out while I was taking the poll, this is really a mathematical question (if not entirely well-posed), not a physics question.

My recent paper with Roman Buniy: Macroscopic Superpositions in Isolated Systems answers the mathematical question about the dynamics of complex isolated systems under Schrodinger evolution. I had forgotten entirely about the poll in the intervening years (I only came across the blog post by accident recently), but the question persisted... Only in 2020 did I realize that von Neumann's Quantum Ergodic Theorem [1] [2] can be used to prove the result.
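
As a concrete toy illustration of what "decoherence under purely unitary evolution of a closed system" means (my own sketch, unrelated to the specific models in the papers above): couple one system qubit to a handful of environment qubits through a σ_z-type interaction, evolve the closed system with the Schrodinger equation, and watch the off-diagonal element of the system's reduced density matrix decay as coherence leaks into system-environment entanglement.

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]); sx = np.array([[0., 1.], [1., 0.]]); I2 = np.eye(2)

def kron_list(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n_env = 8
rng = np.random.default_rng(1)
couplings = rng.uniform(0.5, 1.5, n_env)

# H = sigma_z(system) * sum_i g_i sigma_x(env_i): the system's pointer basis is sigma_z
H = sum(g * kron_list([sz] + [sx if j == i else I2 for j in range(n_env)])
        for i, g in enumerate(couplings))

# Initial product state: system in (|0>+|1>)/sqrt(2), environment in |00...0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
env0 = np.zeros(2**n_env); env0[0] = 1.0
psi0 = np.kron(plus, env0)

for t in [0.0, 0.2, 0.5, 1.0, 2.0]:
    psi = expm(-1j * H * t) @ psi0             # exact unitary evolution of the closed system
    m = psi.reshape(2, 2**n_env)
    rho_sys = m @ m.conj().T                   # trace out the environment
    print(f"t = {t:3.1f}   |off-diagonal| = {abs(rho_sys[0, 1]):.3f}")
```

With only eight environment qubits there are of course recurrences at later times; the point is just that the apparent "collapse" seen by the subsystem is perfectly compatible with exact unitary evolution of the whole.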



Some Benasque photos from 2010 :-)







Added from comments

There are really multiple issues here. Theorists will differ in their opinions on the following questions: 
 
1. (largely mathematical): Does the phenomenology of pure state evolution in a closed system (e.g., the universe) reproduce Copenhagen for observers in the system? 

This is a question about dynamical evolution: of the system as a whole, and of various interacting subsystems. It's not a philosophical question and, in my opinion, it is what theorists should focus on first. Although complicated, it is still reasonably well-posed from a mathematical perspective, at least as far as foundational physics questions go. 

I believe the evidence is strong that the answer to #1 is Yes, although the issue of the Born rule lingers (too complicated to discuss here, but see various papers I have written on the topic, along with other people like Deutsch, Zurek, etc.). It is clear from Weinberg's writing that he and I agree that the answer is Yes, modulo the Born rule. 

Define this position to be 

Y* := "Yes, possibly modulo Born" 

There are some theorists who do not agree with Y* (see the survey results above), but they are mostly people who have not thought it through carefully, in my opinion. I don't know of any explicit arguments for how Y* fails, and our recent results applying the vN QET strengthen my confidence in Y*. 

I believe (based on published remarks or from my personal interactions) that the following theorists have opinions that are Y* or stronger: Schwinger, DeWitt, Wheeler, Deutsch, Hawking, Feynman, Gell-Mann, Zeh, Hartle, Weinberg, Zurek, Guth, Preskill, Page, Cooper (BCS), Coleman, Misner, Arkani-Hamed, etc. 

But there is a generational issue, with many older (some now deceased!) theorists being reticent about expressing Y* even if they believe it. This is shifting over time and, for example, a poll of younger string theorists or quantum cosmologists would likely find a strong majority expressing Y*. 

[ Social conformity and groupthink are among the obstacles preventing broader understanding of question #1. That is, in part, why I have listed specific high profile individuals as having reached the unconventional but correct view! ]


2. Does this make you confident that the other branches really "exist"? They are "real"? 

Here we get into philosophical questions and you will get a range of answers. 

Many of the Y* theorists (including me) might say:

a. MW is the only logically complete version of QM we have. Copenhagen is not well-defined and inadequate for cosmology (cf density perturbations from inflation and galaxy formation). 

b. I find the existence of the other branches rather extravagant, and I leave open the possibility that there might be some more fundamental modification of QM that changes everything. But I have no idea what that model looks like and there are strong constraints on its properties from Bell, causality, etc. Even a small amount of nonlinearity in the Schrodinger equation leads to lots of causality violation, etc. etc. 
 
c. I believe that any practical experiment that tries to check whether unitary evolution always holds (i.e., the other branches are *in principle accessible*) will always find it to be the case. In particular this means we will realize and manipulate more and more complicated superposition states over time, and this raises the question of why you and I cannot be in a superposition state right now... 

Note it is possible that only one single decoherent branch of the universal wavefunction is actually realized by Nature ("is real"), and that quantum randomness is an illusion. Hartle and Gell-Mann were sort of hedging this way in some of their last papers on this topic. But remember Gell-Mann even hedged about the reality of quarks before they were directly observed in deep inelastic scattering. 

An aspect to this problem that few theorists appreciate is that a quantum theory of gravity is, at the global level, "timeless": it should be a theory of quantum amplitudes describing an entire spacetime geometry and quantum trajectories of other degrees of freedom on that manifold. As such the many branches of the universal wavefunction are realized "all at once" and concepts like observers must be emergent -- they cannot be fundamental aspects of the theory itself. 

Most of the action in quantum gravity (i.e., strings or loop qg) has been "local" in nature: what are the stringy excitations, compactification, local vacua, etc. The global wavefunction of the universe was already considered by Wheeler and DeWitt but there are still lots of unresolved issues.

Thursday, July 22, 2021

Embryo Screening for Polygenic Disease Risk: Recent Advances and Ethical Considerations (Genes 2021 Special Issue)



It is a great honor to co-author a paper with Simon Fishel, the last surviving member of the team that produced the first IVF baby (Louise Brown) in 1978. His mentors and collaborators were Robert Edwards (Nobel Prize 2010) and Patrick Steptoe (who passed away before the 2010 prize). In the photo above, of the very first scientific conference on In Vitro Fertilization (1981), Fishel (far right), Steptoe, and Edwards are in the first row. More on Simon and his experiences as a medical pioneer below. 

This article appears in a Special Issue: Application of Genomic Technology in Disease Outcome Prediction.
Embryo Screening for Polygenic Disease Risk: Recent Advances and Ethical Considerations 
L. Tellier, J. Eccles, L. Lello, N. Treff, S. Fishel, S. Hsu 
Genes 2021, 12(8), 1105 
https://doi.org/10.3390/genes12081105 
Machine learning methods applied to large genomic datasets (such as those used in GWAS) have led to the creation of polygenic risk scores (PRSs) that can be used to identify individuals who are at highly elevated risk for important disease conditions, such as coronary artery disease (CAD), diabetes, hypertension, breast cancer, and many more. PRSs have been validated in large population groups across multiple continents and are under evaluation for widespread clinical use in adult health. It has been shown that PRSs can be used to identify which of two individuals is at a lower disease risk, even when these two individuals are siblings from a shared family environment. The relative risk reduction (RRR) from choosing an embryo with a lower PRS (with respect to one chosen at random) can be quantified by using these sibling results. New technology for precise embryo genotyping allows more sophisticated preimplantation ranking with better results than the current method of selection that is based on morphology. We review the advances described above and discuss related ethical considerations.
I excerpt from the paper below. 

Some related links: 





Introduction:
Over a million babies are born each year via IVF [1,2]. It is not uncommon for IVF parents to have more than one viable embryo from which to choose, as typical IVF cycles can produce four or five. The embryo that is transferred may become their child, while the others might not be used at all. We refer to this selection problem as the “embryo choice problem”. In the past, selections were made based on criteria such as morphology (i.e., rate of development, symmetry, general appearance) and chromosomal normality as determined by aneuploidy testing. 
Recently, large datasets of human genomes together with health and disease histories have become available to researchers in computational genomics [3]. Statistical methods from machine learning have allowed researchers to build risk predictors (e.g., for specific disease conditions or related quantitative traits, such as height or longevity) that use the genotype alone as input information. Combined with the precision genotyping of embryos, these advances provide significantly more information that can be used for embryo selection to IVF parents. 
In this brief article, we provide an overview of the advances in genotyping and computational genomics that have been applied to embryo selection. We also discuss related ethical issues, although a full discussion of these would require a much longer paper. ...

 Ethical considerations:

For further clarification, we explore a specific scenario involving breast cancer. It is well known that monogenic BRCA1 and BRCA2 variants predispose women to breast cancer, but this population is small—perhaps a few per thousand in the general population. The subset of women who do not carry a BRCA1 or BRCA2 risk variant but are at high polygenic risk is about ten times as large as the BRCA1/2 group. Thus, the majority of breast cancer can be traced to polygenic causes in comparison with commonly tested monogenic variants. 
For BRCA carrier families, preimplantation screening against BRCA is a standard (and largely uncontroversial) recommendation [39]. The new technologies discussed here allow a similar course of action for the much larger set of families with breast cancer history who are not carriers of BRCA1 or BRCA2. They can screen their embryos in favor of a daughter whose breast cancer PRS is in the normal range, avoiding a potentially much higher absolute risk of the condition. 
The main difference between monogenic BRCA screening and the new PRS screening against breast cancer is that the latter technology can help an order of magnitude more families. From an ethical perspective, it would be unconscionable to deny PRS screening to BRCA1/2-negative families with a history of breast cancer. ...
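
To see how a relative risk reduction arises from selecting on a PRS, here is a toy calculation under the standard liability threshold model (the prevalence and variance-explained values below are illustrative placeholders, not the numbers from our paper): simulate families of sibling embryos whose PRSs share a parental component, pick the lowest-PRS embryo in each family, and compare its realized risk to that of a randomly chosen embryo.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
K, r2 = 0.12, 0.20                    # assumed lifetime risk and PRS variance explained (illustrative)
n_fam, n_embryos = 200_000, 5
T = norm.isf(K)                       # liability threshold giving population prevalence K

# Sibling embryos share half of their PRS variance (shared parental component)
shared = rng.normal(0, np.sqrt(r2 / 2), (n_fam, 1))
unique = rng.normal(0, np.sqrt(r2 / 2), (n_fam, n_embryos))
prs = shared + unique
resid = rng.normal(0, np.sqrt(1 - r2), (n_fam, n_embryos))   # non-PRS liability
disease = (prs + resid) > T

selected = np.argmin(prs, axis=1)                 # pick the lowest-PRS embryo in each family
risk_selected = disease[np.arange(n_fam), selected].mean()
risk_random = disease[:, 0].mean()                # a randomly chosen embryo

print(f"risk (random embryo)   : {risk_random:.3f}")
print(f"risk (lowest-PRS)      : {risk_selected:.3f}")
print(f"relative risk reduction: {1 - risk_selected / risk_random:.0%}")
```

Even with modest variance explained, selecting the lowest-risk embryo among a handful of siblings yields a sizeable relative risk reduction; the absolute benefit is of course largest for families whose expected risk is already elevated.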

 

On Simon Fishel's experiences as an IVF pioneer (see here):

Today millions of babies are produced through IVF. In most developed countries roughly 3-5 percent of all births are through IVF, and in Denmark the fraction is about 10 percent! But when the technology was first introduced with the birth of Louise Brown in 1978, the pioneering scientists had to overcome significant resistance. There may be an alternate universe in which IVF was not allowed to develop, and those millions of children were never born. 

Wikipedia: ...During these controversial early years of IVF, Fishel and his colleagues received extensive opposition from critics both outside of and within the medical and scientific communities, including a civil writ for murder.[16] Fishel has since stated that "the whole establishment was outraged" by their early work and that people thought that he was "potentially a mad scientist".[17] 

I predict that within 5 years the use of polygenic risk scores will become common in some health systems (i.e., for adults) and in IVF. Reasonable people will wonder why the technology was ever controversial at all, just as in the case of IVF.

Figure below from our paper. EHS = Embryo Health Score. 

Monday, July 19, 2021

The History of the Planck Length and the Madness of Crowds

I had forgotten about the 2005-06 email correspondence reproduced below, but my collaborator Xavier Calmet reminded me of it today and I was able to find these messages.

The idea of a minimal length of order the Planck length, arising due to quantum gravity (i.e., quantum fluctuations in the structure of spacetime), is now widely accepted by theoretical physicists. But as Professor Mead (University of Minnesota, now retired) elaborates, based on his own experience, it was considered preposterous for a long time. 

Large groups of people can be wrong for long periods of time -- in financial markets, academia, even theoretical physics. 

Our paper, referred to by Mead, is 

Minimum Length from Quantum Mechanics and Classical General Relativity 

X. Calmet, M. Graesser, and S. Hsu  

https://arxiv.org/abs/hep-th/0405033  

Phys. Rev. Lett. 93, 211101 (2004)
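
For readers who have not seen it, the standard heuristic behind a minimal length goes roughly as follows (a back-of-the-envelope sketch only; the paper gives the careful version combining quantum mechanics with classical general relativity):

```latex
% To resolve a distance \delta x, the probe must carry energy E:
\delta x \;\gtrsim\; \frac{\hbar}{p} \;\sim\; \frac{\hbar c}{E}
\qquad \text{(uncertainty principle)}
% but concentrating that energy within the region forms a horizon unless
\delta x \;\gtrsim\; r_s \;\sim\; \frac{2GE}{c^4}
\qquad \text{(no black hole formation)}
% Eliminating the probe energy E between the two conditions gives
\delta x \;\gtrsim\; \sqrt{\frac{2\hbar G}{c^3}} \;\sim\; \ell_P \;\approx\; 1.6\times 10^{-35}\ \mathrm{m}.
```

Pushing the probe energy higher to improve resolution eventually hides the region behind its own horizon, so the two requirements cannot be beaten simultaneously: no measurement resolves distances below roughly the Planck length.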

The related idea, first formulated by R. Buniy, A. Zee, and myself, that the structure of Hilbert Space itself is likely discrete (or "granular") at some fundamental level, is currently considered preposterous, but time will tell. 

More here

At bottom I include a relevant excerpt from correspondence with Freeman Dyson in 2005.


Dear Drs. Calmet, Graesser, Hsu,

I read with interest your article in Phys Rev Letters Vol. 93, 21101 (2004), and was pleasantly surprised to see my 1964 paper cited (second citation of your ref. 1).  Not many people have cited this paper, and I think it was pretty much forgotten the day it was published, & has remained so ever since.  To me, your paper shows again that, no matter how one looks at it, one runs into problems trying to measure a distance (or synchronize clocks) with greater accuracy than the Planck length (or time).

I feel rather gratified that the physics community, which back then considered the idea of the Planck length as a fundamental limitation to be quite preposterous, has since come around to (more or less) my opinion.  Obviously, I deserve ZERO credit for this, since I'm sure that the people who finally reached this conclusion, whoever they were, were unaware of my work.  To me, this is better than if they had been influenced by me, since it's good to know that the principles of physics lead to this conclusion, rather than the influence of an individual.  I hope that makes sense. ...

You might be amused by one story about how I finally got the (first) paper published after 5 years of referee problems.  A whole series of referees had claimed that my eq. (1), which is related to your eq. (1), could not be true.  I suspect that they just didn't want to read any further.  Nothing I could say would convince them, though I'm sure you would agree that the result is transparently obvious.  So I submitted another paper which consisted of nothing but a lengthy detailed proof of eq. (1), without mentioning the connection with the gravitation paper.  The referees of THAT paper rejected it on the grounds that the result was trivially obvious!!  When I pointed out this discrepancy to the editors, I got the gravitation paper reconsidered and eventually published.

But back then no one considered the Planck length to be a candidate as a fundamental limitation.  Well, almost no one.  I did receive support from Henry Primakoff, David Bohm, and Roger Penrose.  As far as I can recall, these were the only theoretical physicists of note who were willing to take this idea seriously (and I talked to many, in addition to reading the reports of all the referees).

Well anyway, I greet you, thank you for your paper and for the citation, and hope you haven't found this e-mail too boring.

Yours Sincerely,

C.  Alden  Mead


Dear Dr. Mead,

Thank you very much for your email message. It is fascinating to learn the history behind your work. We found your paper to be clearly written and useful.

Amusingly, we state at the beginning of our paper something like "it is widely believed..." that there is a fundamental Planck-length limit. I am sure your paper made a contribution to this change in attitude. The paper is not obscure as we were able to find it without much digging.

Your story about the vicissitudes of publishing rings true to me. I find such stories reassuring given the annoying obstacles we all face in trying to make our little contributions to science.

Finally, we intend to have a look at your second paper. Perhaps we will find another interesting application of your ideas.

Warm regards,

Stephen Hsu

Xavier Calmet

Michael Graesser

 

Dear Steve,

Many thanks for your kind reply.  I find the information quite interesting, though as you say it leaves some historical questions unanswered.  I think that Planck himself arrived at his length by purely dimensional considerations, and he supposedly considered this very important.

As you point out, it's physically very reasonable, perhaps more so in view of more recent developments.  It seemed physically reasonable to me back in 1959, but not to most of the mainstream theorists of the time.

I think that physical considerations (such as yours and mine) and mathematical ones should support and complement each other.  The Heisenberg-Bohr thought experiments tell us what a correct mathematical formalism should provide, and the formal quantum mechanics does this and, of course, much more.  Same with the principle of equivalence and general relativity.  Now, the physical ideas regarding the Planck length & time may serve as a guide in constructing a satisfactory formalism.  Perhaps string theory will prove to be the answer, but I must admit that I'm ignorant of all details of that theory.

Anyway, I'm delighted to correspond with all of you as much as you wish, but I emphasize that I don't want to be intrusive or become a nuisance.

As my wife has written you (her idea, not mine), your e-mail was a nice birthday present.

Kindest Regards, Alden


See also this letter from Mead, which appeared in Physics Today.


The following is from Freeman Dyson:
 ... to me the most interesting is the discrete Hilbert Space paper, especially your reference [2] proving that lengths cannot be measured with error smaller than the Planck length. I was unaware of this reference but I had reached the same conclusion independently.
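
For readers curious about the logic behind the claim discussed in the correspondence above, here is the standard heuristic in schematic form. This is a rough sketch combining the uncertainty principle with gravity, not Mead's careful derivation; only the dimensional-analysis core is shown.

```latex
% Schematic minimum-length argument (not Mead's full derivation).
% To locate an object to precision \delta x, the probe must carry momentum
% spread \delta p \gtrsim \hbar/\delta x (Heisenberg). But the probe's energy
% ~ c\,\delta p curves spacetime, disturbing the measured region by roughly
% its Schwarzschild radius \sim G\,\delta p/c^3. Adding the two contributions
% and minimizing over \delta p gives a floor at the Planck length:
\[
  \delta x \;\gtrsim\; \frac{\hbar}{\delta p} + \frac{G\,\delta p}{c^{3}}
  \;\;\Longrightarrow\;\;
  \delta x_{\min} \;\sim\; \sqrt{\frac{\hbar G}{c^{3}}} \;=\; \ell_{P}
  \;\approx\; 1.6\times 10^{-35}\ \text{m}.
\]
```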

 

Sunday, May 02, 2021

40 Years of Quantum Computation and Quantum Information


This is a great article on the 1981 conference which one could say gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders. 
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role. 
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War. 
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time. 
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ... 
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles. 
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves. 
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics. 
[ Note to reader: the discussion in the last few sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations comes from entangled states, which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed, quantum computations are strictly deterministic unitary operations acting in this Hilbert space; probabilities enter only when a measurement is made at the end. A short sketch further below makes the scaling concrete. ] 

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...
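
Two of the technical points in the excerpt above can be made concrete with a few lines of arithmetic: the size of Landauer's "tiny bit of heat," and why the reversible gates studied by Bennett and Fredkin escape it. The snippet below is my own illustration (the Fredkin gate here is just the controlled swap), not anything from the article.

```python
import math

# Landauer's bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2) of energy -- the "tiny bit of heat" mentioned in the excerpt.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
print(f"Landauer limit at 300 K: {k_B * T * math.log(2):.2e} J per erased bit")

# Bennett/Fredkin observation: a reversible gate erases nothing.
# The Fredkin gate (controlled swap) exchanges bits a and b iff control c == 1.
def fredkin(c, a, b):
    return (c, b, a) if c == 1 else (c, a, b)

# Applying the gate twice returns every input, so no information is lost.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits
print("Fredkin gate is its own inverse on all 8 inputs: fully reversible.")
```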

Feynman's 1981 lecture Simulating Physics With Computers.
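
To make the bracketed note above concrete: the state of n qubits is a vector of 2^n complex amplitudes, and simulating its (perfectly deterministic) unitary evolution on a classical machine means storing and updating all of them. A rough back-of-the-envelope sketch:

```python
# Why classical simulation of quantum systems gets hard: the state of n qubits
# is a vector of 2**n complex amplitudes, and unitary time evolution is a
# deterministic linear map acting on that exponentially large space.
BYTES_PER_AMPLITUDE = 16   # one double-precision complex number

for n in (10, 30, 50, 300):
    dim = 2 ** n
    memory_gb = dim * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n:4d} qubits: dimension 2^{n} ~ {dim:.2e}, "
          f"state vector ~ {memory_gb:.2e} GB")
# Around 50 qubits the state vector alone outgrows any classical memory; at
# 300 qubits the dimension exceeds the number of atoms in the visible universe.
# The obstacle is the size of the state space, not randomness.
```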

Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert Space (the state space of quantum mechanics) itself may be discrete.



Related: 


My paper on the Margolus-Levitin Theorem in light of gravity: 

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle. 
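
To get a feel for the numbers, here is a back-of-the-envelope translation of that cube-root bound into everyday units. This is my own illustrative arithmetic, assuming only the V^(1/3) scaling quoted in the abstract, with O(1) factors ignored.

```python
# The quoted bound, in Planck units: a device of volume V performs at most
# ~ V**(1/3) logical operations per unit time. Translate to SI for a
# 1-liter device (illustrative arithmetic only; O(1) factors dropped).
l_P = 1.616e-35      # Planck length, m
t_P = 5.391e-44      # Planck time, s

V = 1e-3                                   # 1 liter in m^3
V_planck = V / l_P**3                      # volume in Planck units
ops_per_planck_time = V_planck ** (1/3)    # the cube-root bound
ops_per_second = ops_per_planck_time / t_P

print(f"V = 1 liter ~ {V_planck:.2e} Planck volumes")
print(f"Bound: ~{ops_per_second:.1e} operations per second")
# ~1e77 ops/s -- astronomically far beyond any conceivable hardware,
# but finite: gravity caps the processing rate of a region of space.
```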

Participants in the 1981 meeting:
 

Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremerman, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stand Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichnaic, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)
