
Thursday, April 13, 2023

Katherine Dee: Culture, Identity, and Isolation in the Digital Age — Manifold #33

 

Katherine Dee is a writer, journalist, and internet historian. 

Steve and Katherine discuss: 

0:00 Introduction 
1:15 Katherine’s early life and background 
21:52 Mass shootings, Manifestos, Nihilism, and Incels 
59:35 Trad values, Sex negativity vs Porn and Fleshlights 
1:28:54 Elon Musk’s plans for Twitter 
1:33:00 TikTok 
1:41:41 Adderall 
1:44:07 AI/GPT impact on writers and journos 
1:49:30 Gen-X generation gap: are the kids alright? 

Audio and Transcript: 

Katherine’s Substack: 

“Mass Shootings and the World Liberalism Made” 

Thursday, January 19, 2023

Dominic Cummings: Vote Leave, Brexit, COVID, and No. 10 with Boris — Manifold #28

 

Dominic Cummings is a major historical figure in UK politics. He helped save the Pound Sterling, led the Vote Leave campaign, Got Brexit Done, and guided the Tories to a landslide general election victory. His time in No. 10 Downing Street as Boris Johnson's Chief Advisor was one of the most interesting and impactful periods in modern UK political history.  Dom and Steve discuss all of this and more in this 2-hour episode. 

0:00 Early Life: Oxford, Russia, entering politics 
16:49 Keeping the UK out of the Euro 
19:41 How Dominic and Steve became acquainted: blogs, 2008 financial crisis, meeting at Google 
27:37 Vote Leave, the science of polling 
43:46 Cambridge Analytica conspiracy; History is impossible 
48:41 Dominic on Benedict Cumberbatch’s portrayal of him and the movie “Brexit: The Uncivil War” 
54:05 On joining British Prime Minister Boris Johnson’s office: an ultimatum 
1:06:31 The pandemic 
1:21:28 The Deep State, talent pipeline for public service 
1:47:25 Quants and weirdos invade No.10 
1:52:06 Can the Tories win the next election? 
1:56:27 Trump in 2024? 



References: 

Dominic's Substack newsletter: https://dominiccummings.substack.com/

Thursday, December 15, 2022

Geoffrey Miller: Evolutionary Psychology, Polyamorous Relationships, and Effective Altruism — Manifold #26

 

Geoffrey Miller is an American evolutionary psychologist, author, and a professor of psychology at the University of New Mexico. He is known for his research on sexual selection in human evolution. 


Miller's Wikipedia page.

Steve and Geoffrey discuss: 

0:00 Geoffrey Miller's background, childhood, and how he became interested in psychology 
14:44 How evolutionary psychology is perceived and where the field is going 
38:23 The value of higher education: sobering facts about retention 
49:00 Dating, pickup artists, and relationships 
1:11:27 Polyamory 
1:24:56 FTX, poly, and effective altruism 
1:34:31 AI alignment

Thursday, October 20, 2022

Discovering the Multiverse: Quantum Mechanics and Hugh Everett III, with Peter Byrne — Manifold #22

 

Peter Byrne is an investigative reporter and science writer based in Northern California. His popular biography, The Many Worlds of Hugh Everett III - Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family (Oxford University Press, 2010) was followed by publication of The Everett Interpretation of Quantum Mechanics, Collected Works 1957-1980, (Princeton University Press, 2012), co-edited with philosopher of science Jeffrey A. Barrett of UC Irvine. 

Everett's formulation of quantum mechanics, which implies the existence of a quantum multiverse, is favored by a significant (and growing) fraction of working physicists. 

Steve and Peter discuss: 

0:00 How Peter Byrne came to write a biography of Hugh Everett 
18:09 Everett’s personal life and groundbreaking thesis as a catalyst for the book 
24:00 Everett and Decoherence 
31:25 Reaction of other physicists to Everett’s many worlds theory 
40:46 Steve’s take on Everett’s many worlds theory 
43:41 Peter on the bifurcation of science and philosophy 
49:21 Everett’s post-academic life 
52:58 How Hugh Everett is remembered now 




Wednesday, September 28, 2022

The Future of Human Evolution -- excerpts from podcast interview with Brian Chau



1. The prospect of predicting cognitive ability from DNA, and the consequences. Why the main motivation has nothing to do with group differences. This segment begins at roughly 47 minutes. 

2. Anti-scientific resistance to research on the genetics of cognitive ability. My experience with the Jasons. Blank Slate-ism as a sacralized, cherished belief of social progressives. This segment begins at roughly 1 hour 7 minutes. 


1. Starts at roughly 47 minutes. 

Okay, let's just say hypothetically my billionaire friend is buddies with the CEO of 23andMe and let's say on the down low we collected some SAT scores of 1M or 2M people. I think there are about 10M people that have done 23andMe, let's suppose I manage to collect 1-2M scores for those people. I get them to opt in and agree to the study and da da da da and then Steve runs his algos and you get this nice predictor. 

But you’ve got to do it on the down low. Because if it leaks out that you're doing it, people are going to come for you. The New York Times is going to come for you, everybody's going to come for you. They're going to try to trash the reputation of 23andMe. They're going to trash the reputation of the billionaire. They're going to trash the reputation of the scientists who are involved in this. But suppose you get it done. And getting it done, as you know very well, is a simple run on AWS, and you end up with this predictor which, wow, it's really complicated -- it depends on 20k SNPs in the genome ... 

Anybody with an ounce of intellectual integrity would look back at their copy of The Mismeasure of Man, which has sat magisterially on their bookshelf since they were forced to buy it as a freshman at Harvard. They would say, “WOW! I guess I can just throw that in the trash, right? I can just throw that in the trash.” 

But the set of people who have intellectual integrity and can process new information and then reformulate the opinion that they absorbed through social convention – i.e., that Gould is a good person and a good scientist and wise -- is tiny. The set of people who can actually do that is like 1% of the population. So you know maybe none of this matters, but in the long run it does matter. … 

Everything else about that hypothetical -- the social scientist running the longitudinal study, getting the predictor into his grubby little hands and publishing the validation, people then trying to force you to studiously ignore the results -- all of that has actually already happened. We already have something which correlates ~0.4 with IQ. Everything else I said has already been done, but it's just being studiously ignored by the right-thinking people. 

 … 

Some people could misunderstand our discussion as being racist. I'm not saying that any of this has anything to do with group differences between ancestry groups. I'm just saying, e.g., within the white population of America, it is possible to predict from embryo DNA which of 2 brothers raised in the same family will be the smart one and which one will struggle in school. Which one will be the tall one and which one will be not so tall. 



2. Starts at roughly 1 hour 7 minutes. 

I've been in enough places where this kind of research is presented in seminar rooms and conferences and seen very negative attacks on the individuals presenting the results. 

I'll give you a very good example. There used to be a thing called the Jasons. During the cold war there was a group of super smart scientists called the Jasons. They were paid by the government to get together in the summers and think about technological issues that might be useful for defense and things like war fighting. … 

I had a meeting with the (current) Jasons. I was invited to a place near Stanford to address them about genetic engineering, genomics, and all this stuff. I thought okay these are serious scientists and I'll give them a very nice overview of the progress in this field. This anecdote takes place just a few years ago. 

One of the Jasons present is a biochemist but not an expert on genomics or machine learning. This biochemist asked me a few sharp questions which were easy to answer. But then at some point he just can't take it anymore and he grabs all his stuff and runs out of the room. ...

Monday, September 05, 2022

Lunar Society (Dwarkesh Patel) Interview

 

Dwarkesh did a fantastic job with this interview. He read the scientific papers on genomic prediction and his questions are very insightful. Consequently we covered the important material that people are most confused about. 

Don't let the sensationalistic image above deter you -- I highly recommend this podcast!

0:00:00 Intro 
0:00:49 Feynman’s advice on picking up women 
0:12:21 Embryo selection 
0:24:54 Why hasn't natural selection already optimized humans? 
0:34:48 Aging 
0:43:53 First Mover Advantage 
0:54:24 Genomics in dating 
1:01:06 Ancestral populations 
1:08:33 Is this eugenics? 
1:16:34 Tradeoffs to intelligence 
1:25:36 Consumer preferences 
1:30:49 Gwern 
1:35:10 Will parents matter? 
1:46:00 Wordcels and shape rotators 
1:58:04 Bezos and brilliant physicists 
2:10:58 Elite education 

If you prefer audio-only click here.


Friday, December 31, 2021

Happy New Year 2022!

Best wishes to everyone :-)

I posted this video some years ago, but it was eventually taken down by YouTube. I came across it today and thought I would share it again. 

The documentary includes interviews with Rabi, Ulam, Bethe, Frank Oppenheimer, Robert Wilson, and Dyson.


 


Some other recommendations below. I recently re-listened to these podcasts and quite enjoyed them. The interview with Bobby covers brain mapping, neuroscience, careers in science, biology vs physics. With Ted we go deep into free will, parallel universes, science fiction, and genetic engineering. Bruno shares his insights on geopolitics -- the emerging multipolar world of competition and cooperation between the US, Russia, Europe, and China.









A hopeful note for 2022 and the pandemic:

I followed COVID closely at the beginning (early 2020; search on blog if interested). I called the pandemic well before most people, and even provided some useful advice to a few big portfolio managers as well as to Dom and his team in the UK government. But once I realized that 

the average level among political leaders and "public intellectuals" is too low for serious cost-benefit analysis,

I got bored of COVID and stopped thinking about it.

However with Omicron (thanks to a ping from Dom) I started to follow events again. Preliminary data suggest we may be following the evolutionary path of increased transmissibility but reduced lethality. 

The data from the UK and South Africa already seem to strongly support this conclusion, although both populations have high vaccination levels, resistance from the spread of earlier variants, or both. Whether Omicron is "intrinsically" less lethal (i.e., to a population such as the unvaccinated ~40% of the PRC population that has never been exposed to COVID) remains to be seen, but we should know within a month or so.

If, e.g., Omicron results in hospitalization / death at ~1/3 the rate of earlier variants, then we will already be in the flu-like range of severity (whereas original COVID was at most like a ~10x more severe flu). In this scenario rational leaders should just go for herd immunity (perhaps with some cocooning of vulnerable sub-populations) and get it over with.

I'll be watching some of the more functional countries like S. Korea, PRC, etc. to see when/if they relax their strict lockdown and quarantine policies. Perhaps there are some smaller EU countries to keep an eye on as well.

Sunday, November 14, 2021

Has Hawking's Black Hole Information Paradox Been Resolved?



In 1976 Stephen Hawking argued that black holes cause pure states to evolve into mixed states. Put another way, quantum information that falls into a black hole does not escape in the form of radiation. Rather, it vanishes completely from our universe, thereby violating a fundamental property of quantum mechanics called unitarity. 

These are bold statements, and they were not widely understood for decades. As a graduate student at Berkeley in the late 1980s, I tried to read Hawking’s papers on this subject, failed to understand them, and failed to find any postdocs or professors in the particle theory group who could explain them to me. 

As recounted in Lenny Susskind’s book The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, he and Gerard ‘t Hooft began to appreciate the importance of black hole information in the early 1980s, mainly due to interactions with Hawking himself. In the subsequent decade they were among a very small number of theorists who worked seriously on the problem. I myself became interested in the topic after hearing a talk by John Preskill at Caltech around 1992:
Do Black Holes Destroy Information? 
https://arxiv.org/abs/hep-th/9209058 
John Preskill 
I review the information loss paradox that was first formulated by Hawking, and discuss possible ways of resolving it. All proposed solutions have serious drawbacks. I conclude that the information loss paradox may well presage a revolution in fundamental physics. 

Hawking’s arguments were based on the specific properties of black hole radiation (so-called Hawking radiation) that he himself had deduced. His calculations assumed a semiclassical spacetime background -- they did not treat spacetime itself in a quantum mechanical way, because this would require a theory of quantum gravity. 

Hawking’s formulation has been refined over several decades. 

Hawking (~1976): BH radiation, calculated in a semiclassical spacetime background, is thermal and is in a mixed state. It therefore cannot encode the pure state quantum information behind the horizon. 

No Cloning (~1990): There exist spacelike surfaces which intersect both the interior of the BH and the emitted Hawking radiation. The No Cloning theorem implies that the quantum state of the interior cannot be reproduced in the outgoing radiation. 

Entanglement Monogamy (~2010): Hawking modes are highly entangled with interior modes near the horizon, and therefore cannot purify the (late time) radiation state of an old black hole. 

However, reliance on a semiclassical spacetime background undermines all of these formulations of the BH information paradox, as I explain below. That is, there is in fact no satisfactory argument for the paradox.

An argument for the information paradox must show that a BH evaporates into a mixed final state, even if the initial state was pure. However, the Hilbert space of the final states is extremely large: its dimensionality grows as the exponential of the BH surface area in Planck units. Furthermore, the final state is a superposition of many possible quantum spacetimes and corresponding radiation states: it is described by a wavefunction of the form ψ[g,M], where g describes the spacetime geometry and M the radiation/matter fields.

It is easy to understand why the Hilbert space of [g,M] contains many possible spacetime geometries. The entire BH rest mass is eventually converted into radiation by the evaporation process. Fluctuations in the momenta of these radiation quanta can easily give the BH a center of mass velocity which varies over the long evaporation time. The final spread in the location of the BH is of order the initial mass squared, which is much larger than its Schwarzschild radius. Each radiation pattern corresponds to a complex recoil trajectory of the BH itself, and the resulting gravitational fields are macroscopically distinct spacetimes.
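A back-of-the-envelope version of this estimate (my own sketch, in Planck units): the hole radiates roughly N ~ M^2 quanta with typical energy ~ T_H ~ 1/M, so the random recoils accumulate a total momentum ~ sqrt(N)/M ~ 1, hence a velocity ~ 1/M, and over the evaporation time ~ M^3 the hole drifts by

```latex
\Delta x \;\sim\; v \, t_{\rm evap} \;\sim\; \frac{1}{M} \times M^3 \;=\; M^2 \;\gg\; R_s \sim M .
```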

Restriction to a specific semiclassical background metric is a restriction to a very small subset X of the final state Hilbert space Y. Concentration of measure results show that for almost all pure states ψ in the large Hilbert space Y, the density matrix

ρ(X) = Tr_{Y\X} |ψ⟩⟨ψ|

describing the (small) region X will be exponentially close to thermal -- i.e., like the radiation found in Hawking's original calculation.

Analysis restricted to a specific spacetime background is only sensitive to the subset X of Hilbert space consistent with that semiclassical description. The analysis only probes the mixed state ρ(X) and not the (possibly) pure state which lives in the large Hilbert space Y. Thus even if the BH evaporation is entirely unitary, resulting in a pure final state ψ[g,M] in Y, it might appear to violate unitarity because the analysis is restricted to X and hence investigates the mixed state ρ(X). Entanglement between different X and X' -- equivalently, between different branches of the wavefunction ψ[g,M] -- has been neglected, although even exponentially small correlations between these branches may be sufficient to unitarize the result.
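This concentration of measure effect is easy to check numerically. Below is a minimal sketch (my own illustration, not from any of the papers discussed here): draw a random pure state of 14 qubits, trace out all but 2 of them, and compute the trace distance of the reduced density matrix from the maximally mixed (infinite-temperature thermal) state.

```python
import numpy as np

def random_pure_state(n_qubits, rng):
    """Haar-random pure state on n_qubits: normalized complex Gaussian vector."""
    dim = 2 ** n_qubits
    psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return psi / np.linalg.norm(psi)

def reduced_density_matrix(psi, n_keep, n_total):
    """Trace out the last (n_total - n_keep) qubits of the pure state psi."""
    d_keep, d_rest = 2 ** n_keep, 2 ** (n_total - n_keep)
    m = psi.reshape(d_keep, d_rest)      # kept indices x traced-out indices
    return m @ m.conj().T                # rho(X) = Tr_rest |psi><psi|

rng = np.random.default_rng(0)
n_total, n_keep = 14, 2                  # "X" is a 2-qubit subsystem of a 14-qubit pure state
psi = random_pure_state(n_total, rng)
rho = reduced_density_matrix(psi, n_keep, n_total)

mixed = np.eye(2 ** n_keep) / 2 ** n_keep          # maximally mixed (thermal) state
trace_distance = 0.5 * np.abs(np.linalg.eigvalsh(rho - mixed)).sum()
# Small already for 14 qubits, and it shrinks exponentially as more qubits are traced out.
print(f"trace distance from maximally mixed: {trace_distance:.2e}")
```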


These and related issues are discussed in 

1. arXiv:0903.2258 Measurements meant to test BH unitarity must have sensitivity to detect multiple Everett branches 


2. BH evaporation leads to macroscopic superposition states; why this invalidates No Cloning and Entanglement Monogamy constructions, etc. Unitary evaporation does not imply unitarity on each semiclassical spacetime background.


3. arXiv:2011.11661 von Neumann Quantum Ergodic Theorem implies almost all systems evolve into macroscopic superposition states. Talk + slides.

When Hawking's paradox first received wide attention it was understood that the approximation of a fixed spacetime background would receive quantum gravitational corrections, but it was assumed that these were small for most of the evaporation of a large BH. What was not appreciated (until the last decade or so) is that if spacetime geometry is treated quantum mechanically, the Hilbert space within which the analysis must take place becomes much, much larger, and entanglement between X and X' subspaces which represent distinct geometries must be considered. In the "quantum hair" results described at bottom, it can be seen very explicitly that the evaporation process leads to entanglement between the radiation state, the background geometry, and the internal state of the hole. Within the large Hilbert space Y, exponentially small correlations (deviations from Hawking's original semiclassical approximation) can, at least in principle, unitarize BH evaporation.

In summary, my opinion for the past decade or so has been: theoretical arguments claiming to demonstrate that black holes cause pure states to evolve into mixed states have major flaws. 


This recent review article gives an excellent overview of the current situation: 
Lessons from the Information Paradox 
https://arxiv.org/abs/2012.05770 
Suvrat Raju 
Abstract: We review recent progress on the information paradox. We explain why exponentially small correlations in the radiation emitted by a black hole are sufficient to resolve the original paradox put forward by Hawking. We then describe a refinement of the paradox that makes essential reference to the black-hole interior. This analysis leads to a broadly-applicable physical principle: in a theory of quantum gravity, a copy of all the information on a Cauchy slice is also available near the boundary of the slice. This principle can be made precise and established — under weak assumptions, and using only low-energy techniques — in asymptotically global AdS and in four dimensional asymptotically flat spacetime. When applied to black holes, this principle tells us that the exterior of the black hole always retains a complete copy of the information in the interior. We show that accounting for this redundancy provides a resolution of the information paradox for evaporating black holes ...

Raju and collaborators have made important contributions demonstrating that in quantum gravity information is never localized -- the information on a specific Cauchy slice is recoverable in the asymptotic region near the boundary. [1] [2] [3]

However, despite the growing perception that the information paradox might be resolved, the mechanism by which quantum information inside the horizon is encoded in the outgoing Hawking radiation has yet to be understood. 

In a recent paper, my collaborators and I showed that the quantum state of the graviton field outside the horizon depends on the state of the interior. No-hair theorems in general relativity severely limit the information that can be encoded in the classical gravitational field of a black hole, but we show that this does not hold at the quantum level. 

Our result is directly connected to Raju et al.'s demonstration that the interior information is recoverable at the boundary: both originate, roughly speaking, from the Gauss Law constraint in quantization of gravity. It provides a mechanism ("quantum hair") by which the quantum information inside the hole can be encoded in ψ[g,M]. 

The discussion below suggests that each internal BH state described by the coefficients { c_n } results in a different final radiation state -- i.e., the process can be unitary.





Note Added

In the comments David asks about the results described in this 2020 Quanta article, The Most Famous Paradox in Physics Nears Its End.

I thought about discussing those results in the post, but 1. it was already long, and 2. they are using a very different AdS approach. 

However, Raju does discuss these papers in his review. 

Most of the theorists in the Quanta article accept the basic formulation of the information paradox, so they are surprised to see indications of unitary black hole evaporation. As I mentioned in the post, I don't think the paradox itself is well-established, so I am not surprised. 

I think that the quantum hair results are important because they show explicitly that the internal state of the hole affects the quantum state of the graviton field, which then influences the Hawking radiation production. 

It was pointed out by Papadodimas and Raju, and also in my 2013 paper arXiv:1308.5686, that tiny correlations in the radiation density matrix could purify it. That is, the Hawking density matrix plus exp(-S) corrections (which everyone expects are there) could result from a pure state in the large Hilbert space Y, which has dimensionality ~ exp(+S). This is related to what I wrote in the post: start with a pure state in Y and trace over the complement of X. The resulting ρ(X) is exponentially close to thermal (maximally mixed) even though it came from a pure state.

Tuesday, September 07, 2021

Kathryn Paige Harden Profile in The New Yorker (Behavior Genetics)

This is a good profile of behavior geneticist Paige Harden (UT Austin professor of psychology, former student of Eric Turkheimer), with a balanced discussion of polygenic prediction of cognitive traits and the culture war context in which it (unfortunately) exists.
Can Progressives Be Convinced That Genetics Matters? 
The behavior geneticist Kathryn Paige Harden is waging a two-front campaign: on her left are those who assume that genes are irrelevant, on her right those who insist that they’re everything. 
Gideon Lewis-Kraus
Gideon Lewis-Kraus is a talented writer who also wrote a very nice article on the NYTimes / Slate Star Codex hysteria last summer.

Some references related to the New Yorker profile:
1. The paper Harden was attacked for sharing while a visiting scholar at the Russell Sage Foundation: Game Over: Genomic Prediction of Social Mobility 

2. Harden's paper on polygenic scores and mathematics progression in high school: Genomic prediction of student flow through high school math curriculum 

3. Vox article; Turkheimer and Harden drawn into debate including Charles Murray and Sam Harris: Scientific Consensus on Cognitive Ability?

A recent talk by Harden, based on her forthcoming book The Genetic Lottery: Why DNA Matters for Social Equality.



Regarding polygenic prediction of complex traits 

I first met Eric Turkheimer in person (we had corresponded online prior to that) at the Behavior Genetics Association annual meeting in 2012, which was back to back with the International Conference on Quantitative Genetics, both held in Edinburgh that year (photos and slides [1] [2] [3]). I was completely new to the field but they allowed me to give a keynote presentation (if memory serves, together with Peter Visscher). Harden may have been at the meeting but I don't recall whether we met. 

At the time, people were still doing underpowered candidate gene studies (there were many talks on this at BGA, although fewer at ICQG) and struggling to understand GCTA (the Visscher group's work showing one can estimate heritability from modestly large GWAS datasets, with results consistent with earlier twin and adoption work). Consequently, a theoretical physicist talking about genomic prediction using AI/ML and a million genomes seemed like an alien time traveler from the future. Indeed, I was.

My talk is largely summarized here:
On the genetic architecture of intelligence and other quantitative traits 
https://arxiv.org/abs/1408.3421 
How do genes affect cognitive ability or other human quantitative traits such as height or disease risk? Progress on this challenging question is likely to be significant in the near future. I begin with a brief review of psychometric measurements of intelligence, introducing the idea of a "general factor" or g score. The main results concern the stability, validity (predictive power), and heritability of adult g. The largest component of genetic variance for both height and intelligence is additive (linear), leading to important simplifications in predictive modeling and statistical estimation. Due mainly to the rapidly decreasing cost of genotyping, it is possible that within the coming decade researchers will identify loci which account for a significant fraction of total g variation. In the case of height analogous efforts are well under way. I describe some unpublished results concerning the genetic architecture of height and cognitive ability, which suggest that roughly 10k moderately rare causal variants of mostly negative effect are responsible for normal population variation. Using results from Compressed Sensing (L1-penalized regression), I estimate the statistical power required to characterize both linear and nonlinear models for quantitative traits. The main unknown parameter s (sparsity) is the number of loci which account for the bulk of the genetic variation. The required sample size is of order 100s, or roughly a million in the case of cognitive ability.
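To make the compressed sensing point concrete, here is a toy simulation in the spirit of the L1-penalized regression described above (a sketch with invented parameters, not real genotype data or the actual analysis pipeline): when the phenotype is driven by a small number of causal SNPs, Lasso recovers most of them once the sample size is of order 100 times the sparsity.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, s = 2000, 5000, 20              # individuals, SNPs, causal SNPs (toy values)

# Genotypes: minor allele counts (0/1/2), standardized; purely additive toy model.
maf = rng.uniform(0.1, 0.5, size=p)
G = rng.binomial(2, maf, size=(n, p)).astype(float)
G = (G - G.mean(axis=0)) / G.std(axis=0)

causal = rng.choice(p, size=s, replace=False)
beta = np.zeros(p)
beta[causal] = rng.normal(size=s)
g = G @ beta
y = g + rng.normal(scale=np.sqrt(g.var()), size=n)   # noise tuned to ~50% heritability

fit = LassoCV(cv=5, n_alphas=30, max_iter=5000).fit(G, y)
selected = np.flatnonzero(fit.coef_)
recovered = np.intersect1d(selected, causal).size
print(f"selected {selected.size} SNPs, recovered {recovered}/{s} causal SNPs")
```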
The predictions in my 2012 BGA talk and in the 2014 review article above have mostly been validated. Research advances often pass through the following phases of reaction from the scientific community:
1. It's wrong ("genes don't affect intelligence! anyway too complex to figure out... we hope")
2. It's trivial ("ofc with lots of data you can do anything... knew it all along")
3. I did it first ("please cite my important paper on this")
Or, as sometimes attributed to Gandhi: "First they ignore you, then they laugh at you, then they fight you, then you win.”



Technical note

In 2014 I estimated that ~1 million genotype-phenotype pairs would be enough to capture most of the common SNP heritability for height and cognitive ability. This was accomplished for height in 2017. However, the sample size of well-phenotyped individuals is much smaller for cognitive ability, even in 2021, than it was for height in 2017. For example, in UK Biobank the cognitive test is very brief (~5 minutes IIRC, a dozen or so questions), but it has not even been administered to the full cohort as yet. In the Educational Attainment studies the phenotype EA is only moderately correlated (~0.3 or so) with actual cognitive ability.

Hence, although the most recent EA4 results use 3 million individuals [1], and produce a predictor which correlates ~0.4 with actual EA, the statistical power available is still less than what I predicted would be required to train a really good cognitive ability predictor.

In our 2017 height paper, which also briefly discussed bone density and cognitive ability prediction, we built a cognitive ability predictor roughly as powerful as EA3 using only ~100k individuals with the noisy UKB test data. So I remain confident that ~1 million individuals with good cognitive scores (e.g., SAT, AFQT, full IQ test) would deliver results far beyond what we currently have available. We also found that our predictor, built using actual (albeit noisy) cognitive scores, exhibits less power reduction in within-family (sibling) analyses compared to EA. So there is evidence that (no surprise) EA is more influenced by environmental factors, including so-called genetic nurture effects, than is cognitive ability.

A predictor which captures most of the common SNP heritability for cognitive ability might correlate ~0.5 or 0.6 with actual ability. Applications of this predictor in, e.g., studies of social mobility or educational success or even longevity using existing datasets would be extremely dramatic.

Monday, July 19, 2021

The History of the Planck Length and the Madness of Crowds

I had forgotten about the 2005-06 email correspondence reproduced below, but my collaborator Xavier Calmet reminded me of it today and I was able to find these messages.

The idea of a minimal length of order the Planck length, arising due to quantum gravity (i.e., quantum fluctuations in the structure of spacetime), is now widely accepted by theoretical physicists. But as Professor Mead (University of Minnesota, now retired) elaborates, based on his own experience, it was considered preposterous for a long time. 
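For reference, the Planck length and time follow from dimensional analysis of the constants ħ, G, and c:

```latex
\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m},
\qquad
t_P = \ell_P / c = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}.
```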

Large groups of people can be wrong for long periods of time -- in financial markets, academia, even theoretical physics. 

Our paper, referred to by Mead, is 

Minimum Length from Quantum Mechanics and Classical General Relativity 

X. Calmet, M. Graesser, and S. Hsu  

https://arxiv.org/abs/hep-th/0405033  

Phys. Rev. Lett. 93, 211101 (2004)

The related idea, first formulated by R. Buniy, A. Zee, and myself, that the structure of Hilbert Space itself is likely discrete (or "granular") at some fundamental level, is currently considered preposterous, but time will tell. 

More here

At bottom I include a relevant excerpt from correspondence with Freeman Dyson in 2005.


Dear Drs. Calmet, Graesser, Hsu,

I read with interest your article in Phys Rev Letters Vol. 93, 21101 (2004), and was pleasantly surprised to see my 1964 paper cited (second citation of your ref. 1).  Not many people have cited this paper, and I think it was pretty much forgotten the day it was published, & has remained so ever since.  To me, your paper shows again that, no matter how one looks at it, one runs into problems trying to measure a distance (or synchronize clocks) with greater accuracy than the Planck length (or time).

I feel rather gratified that the physics community, which back then considered the idea of the Planck length as a fundamental limitation to be quite preposterous, has since come around to (more or less) my opinion.  Obviously, I deserve ZERO credit for this, since I'm sure that the people who finally reached this conclusion, whoever they were, were unaware of my work.  To me, this is better than if they had been influenced by me, since it's good to know that the principles of physics lead to this conclusion, rather than the influence of an individual.  I hope that makes sense. ...

You might be amused by one story about how I finally got the (first) paper published after 5 years of referee problems.  A whole series of referees had claimed that my eq. (1), which is related to your eq. (1), could not be true.  I suspect that they just didn't want to read any further.  Nothing I could say would convince them, though I'm sure you would agree that the result is transparently obvious.  So I submitted another paper which consisted of nothing but a lengthy detailed proof of eq. (1), without mentioning the connection with the gravitation paper.  The referees of THAT paper rejected it on the grounds that the result was trivially obvious!!  When I pointed out this discrepancy to the editors, I got the gravitation paper reconsidered and eventually published.

But back then no one considered the Planck length to be a candidate as a fundamental limitation.  Well, almost no one.  I did receive support from Henry Primakoff, David Bohm, and Roger Penrose.  As far as I can recall, these were the only theoretical physicists of note who were willing to take this idea seriously (and I talked to many, in addition to reading the reports of all the referees).

Well anyway, I greet you, thank you for your paper and for the citation, and hope you haven't found this e-mail too boring.

Yours Sincerely,

C.  Alden  Mead


Dear Dr. Mead,

Thank you very much for your email message. It is fascinating to learn the history behind your work. We found your paper to be clearly written and useful.

Amusingly, we state at the beginning of our paper something like "it is widely believed..." that there is a fundamental Planck-length limit. I am sure your paper made a contribution to this change in attitude. The paper is not obscure as we were able to find it without much digging.

Your story about the vicissitudes of publishing rings true to me. I find such stories reassuring given the annoying obstacles we all face in trying to make our little contributions to science.

Finally, we intend to have a look at your second paper. Perhaps we will find another interesting application of your ideas.

Warm regards,

Stephen Hsu

Xavier Calmet

Michael Graesser

 

Dear Steve,

Many thanks for your kind reply.  I find the information quite interesting, though as you say it leaves some historical questions unanswered.  I think that Planck himself arrived at his length by purely dimensional considerations, and he supposedly considered this very important.

As you point out, it's physically very reasonable, perhaps more so in view of more recent developments.  It seemed physically reasonable to me back in 1959, but not to most of the mainstream theorists of the time.

I think that physical considerations (such as yours and mine) and mathematical ones should support and complement each other.  The Heisenberg-Bohr thought experiments tell us what a correct mathematical formalism should provide, and the formal quantum mechanics does this and, of course, much more.  Same with the principle of equivalence and general relativity.  Now, the physical ideas regarding the Planck length & time may serve as a guide in constructing a satisfactory formalism.  Perhaps string theory will prove to be the answer, but I must admit that I'm ignorant of all details of that theory.

Anyway, I'm delighted to correspond with all of you as much as you wish, but I emphasize that I don't want to be intrusive or become a nuisance.

As my wife has written you (her idea, not mine), your e-mail was a nice birthday present.

Kindest Regards, Alden


See also this letter from Mead which appeared in Physics Today.  


The following is from Freeman Dyson:
 ... to me the most interesting is the discrete Hilbert Space paper, especially your reference [2] proving that lengths cannot be measured with error smaller than the Planck length. I was unaware of this reference but I had reached the same conclusion independently.

 

Sunday, May 02, 2021

40 Years of Quantum Computation and Quantum Information


This is a great article on the 1981 conference which one could say gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders. 
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role. 
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War. 
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time. 
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ... 
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles. 
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves. 
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics. 
[ Note to reader: the discussion in the last sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ] 

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...
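As an aside, the Landauer bound mentioned in the excerpt is quantitative: erasing one bit at temperature T dissipates at least k_B T ln 2 of heat, which at room temperature is

```latex
E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \times (300\ \mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J}.
```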

Feynman's 1981 lecture Simulating Physics With Computers.

Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert Space (the state space of quantum mechanics) itself may be discrete.



Related: 


My paper on the Margolus-Levitin Theorem in light of gravity: 

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle. 
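To get a sense of scale, here is a rough numerical reading of that bound (my own illustration: the one-liter volume is an arbitrary choice, and I am reading "per unit time" in Planck units as per Planck time):

```python
# Rough illustration of the bound quoted above: in Planck units, a device of volume V
# can execute no more than V**(1/3) operations per unit (Planck) time.
# The one-liter volume and the "per Planck time" reading are assumptions for this example.
l_P = 1.616e-35           # Planck length, m
t_P = 5.391e-44           # Planck time, s

V = 1e-3                                    # one liter, in m^3
V_planck = V / l_P**3                       # volume in Planck units
ops_per_planck_time = V_planck ** (1.0 / 3.0)
ops_per_second = ops_per_planck_time / t_P
print(f"bound for a 1-liter device: ~{ops_per_second:.1e} operations per second")
```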

Participants in the 1981 meeting:
 

Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremerman, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stand Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Marin Hassner, 42 Gerald Vichnaic, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)

Wednesday, March 10, 2021

Academic Freedom Alliance

We live in an era of preference falsification. 

Vocal, dishonest, irrational activists have cowed all but the most courageous of the few remaining serious thinkers, even at our greatest universities. 

I hope the creation of the Academic Freedom Alliance will provide a much needed corrective to the dishonest reign of terror in place today.
Chronicle: When I spoke to the Princeton University legal scholar and political philosopher Robert P. George in August, he offered a vivid zoological metaphor to describe what happens when outrage mobs attack academics. When hunted by lions, herds of zebras “fly off in a million directions, and the targeted member is easily taken down and destroyed and eaten.” A herd of elephants, by contrast, will “circle around the vulnerable elephant.” 
... What had begun as a group of 20 Princeton professors organized to defend academic freedom at one college was rapidly scaling up its ambitions and capacity: It would become a nationwide organization. George had already hired an executive director and secured millions in funding. 
... Today, that organization, the Academic Freedom Alliance, formally issued a manifesto declaring that “an attack on academic freedom anywhere is an attack on academic freedom everywhere,” and committing its nearly 200 members to providing aid and support in defense of “freedom of thought and expression in their work as researchers and writers or in their lives as citizens,” “freedom to design courses and conduct classes using reasonable pedagogical judgment,” and “freedom from ideological tests, affirmations, and oaths.” 
... All members of the alliance have an automatic right for requests for legal aid to be considered, but the organization is also open to considering the cases of faculty nonmembers, university staff, or even students on a case-by-case basis. The alliance’s legal-advisory committee includes well-known lawyers such as Floyd Abrams and the prolific U.S. Supreme Court litigator Lisa S. Blatt. 
When I spoke to him in February, as the date of AFA’s public announcement drew closer, George expressed surprise and satisfaction at the success the organization had found in signing up liberals and progressives. “If anything we’ve gone too far — we’re imbalanced over to the left side of the agenda,” he noted wryly. “That’s because our yield was a little higher than we expected it to be when we got in touch with folks.” 
The yield was higher, as George would learn, quoting one such progressive member, because progressives in academe often feel themselves to be even more closely monitored for ideological orthodoxy by students and activist colleagues than their conservative peers. “‘You conservative guys, people like you and Adrian Vermeule, you think you’re vulnerable. You’re not nearly as vulnerable as we liberals are,’” George quoted this member as saying. “They are absolutely terrified, and they know they can never keep up with the wokeness. What’s OK today is over the line tomorrow, and nobody gave you the memo.” 
George went on to note that some of the progressives he spoke with were indeed too frightened of the very censorious atmosphere that the alliance proposes to challenge to be willing to affiliate with it, at least at the outset. 
... Nadine Strossen, a New York Law School law professor and former president of the ACLU, emphasized the problem of self-censorship that she saw the alliance as counteracting. “When somebody is attacked by a university official or, for lack of a better term, a Twitter mob, there are constant reports from all individuals targeted that they receive so many private communications and emails saying ‘I support you and agree with you, but I just can’t say it publicly.’” 
She hopes that the combined reputations of the organization’s members will provide a permission structure allowing other faculty members to stand up for their private convictions in public. While a lawsuit can vindicate someone’s constitutional or contractual rights, Strossen noted, only a change in the cultural atmosphere around these issues — a preference for open debate and free exchange over stigmatization and punishment as the default way to negotiate controversy in academe — could resolve the overall problem. 
The Princeton University political historian Keith E. Whittington, who is chairman of the alliance’s academic committee, echoed Strossen’s point. The recruitment effort, he said, aimed to gather “people who would be respectable and hopefully influential to college administrators — such that if a group like that came to them and said ‘Look, you’re behaving badly here on these academic-freedom principles,’ this is a group that they might pay attention to.” 
“Administrators feel very buffeted by political pressures, often only from one side,” Whittington told me. “They hear from all the people who are demanding action, and the easiest, lowest-cost thing to do in those circumstances is to go with the flow and throw the prof under the bus. So we do hope that we can help balance that equation a little bit, make it a little more costly for administrators.” ...
Perhaps amusingly, I am one of the progressive founding members of AFA. At least, I have for most of my life been politically to the left of Robby George and many of the original Princeton 20 that started the project. 

When I left the position of Senior Vice-President for Research and Innovation at MSU last summer, I wrote
6. Many professors and non-academics who supported me were afraid to sign our petition -- they did not want to be subject to mob attack. We received many communications expressing this sentiment. 
7. The victory of the twitter mob will likely have a chilling effect on academic freedom on campus.

For another vivid example of the atmosphere on US university campuses, see Struggles at Yale.  

Obama on political correctness:
... I’ve heard some college campuses where they don’t want to have a guest speaker who is too conservative or they don’t want to read a book if it has language that is offensive to African-Americans or somehow sends a demeaning signal towards women. I gotta tell you, I don’t agree with that either. I don’t agree that you, when you become students at colleges, have to be coddled and protected from different points of view. I think you should be able to — anybody who comes to speak to you and you disagree with, you should have an argument with ‘em. But you shouldn’t silence them by saying, "You can’t come because I'm too sensitive to hear what you have to say." That’s not the way we learn ...


Wednesday, February 03, 2021

Gerald Feinberg and The Prometheus Project


Gerald Feinberg (1933-1992) was a theoretical physicist at Columbia, perhaps best known for positing the tachyon -- a particle that travels faster than light. He also predicted the existence of the mu neutrino. 

Feinberg attended Bronx Science with Glashow and Weinberg. Interesting stories abound concerning how the three young theorists were regarded by their seniors at the start of their careers. 

I became aware of Feinberg when Pierre Sikivie and I worked out the long-range force resulting from two-neutrino exchange. Although we came to the idea independently and derived, for the first time, the correct result, we learned later that it had been studied before by Feinberg and Sucher. Sadly, Feinberg died of cancer shortly before Pierre and I wrote our paper. 

Recently I came across Feinberg's 1969 book The Prometheus Project, which is one of the first serious examinations (outside of science fiction) of world-changing technologies such as genetic engineering and AI. See reviews in Science, Physics Today, and H+ Magazine. A scanned copy of the book can be found at Libgen.

Feinberg had the courage to engage with ideas that were much more speculative in the late 60s than they are today. He foresaw correctly, I believe, that technologies like AI and genetic engineering will alter not just human society but the nature of the human species itself. In the final chapter, he outlines a proposal for the eponymous Prometheus Project -- a global democratic process by which the human species can set long term goals in order to guide our approach to what today would be called the Singularity.

   









Saturday, September 12, 2020

Orwell: 1944, 1984, and Today

George Orwell's 1944 letter foreshadows 1984, and today:
... Already history has in a sense ceased to exist, i.e. there is no such thing as a history of our own times which could be universally accepted, and the exact sciences are endangered as soon as military necessity ceases to keep people up to the mark. Hitler can say that the Jews started the war, and if he survives that will become official history. He can’t say that two and two are five, because for the purposes of, say, ballistics they have to make four. But if the sort of world that I am afraid of arrives, a world of two or three great superstates which are unable to conquer one another, two and two could become five if the fuhrer wished it. That, so far as I can see, is the direction in which we are actually moving ... 
... intellectuals are more totalitarian in outlook than the common people. On the whole the English intelligentsia have opposed Hitler, but only at the price of accepting Stalin. Most of them are perfectly ready for dictatorial methods, secret police, systematic falsification of history etc. so long as they feel that it is on ‘our’ side.
I am sure any reader can provide examples of the following from the "news" or academia or even from a national lab:
there is no such thing as a history of our own times which could be universally accepted  
the exact sciences are endangered  
two and two could become five
dictatorial methods ... systematic falsification of history etc. so long as they feel that it is on ‘our’ side.

Of course, there is nothing new under the sun. It takes only a generation for costly lessons to be entirely forgotten...


Wikipedia: Trofim Denisovich Lysenko ...Soviet agronomist and biologist. Lysenko was a strong proponent of soft inheritance and rejected Mendelian genetics in favor of pseudoscientific ideas termed Lysenkoism.[1][2] In 1940, Lysenko became director of the Institute of Genetics within the USSR's Academy of Sciences, and he used his political influence and power to suppress dissenting opinions and discredit, marginalize, and imprison his critics, elevating his anti-Mendelian theories to state-sanctioned doctrine. 
Soviet scientists who refused to renounce genetics were dismissed from their posts and left destitute. Hundreds if not thousands of others were imprisoned. Several were sentenced to death as enemies of the state, including the botanist Nikolai Vavilov. Scientific dissent from Lysenko's theories of environmentally acquired inheritance was formally outlawed in the Soviet Union in 1948. As a result of Lysenkoism and forced collectivization, 15-30 million Soviet and Chinese citizens starved to death in the Holodomor and the Great Chinese Famine. ...

 

In 1964, physicist Andrei Sakharov spoke out against Lysenko in the General Assembly of the Academy of Sciences of the USSR: "He is responsible for the shameful backwardness of Soviet biology and of genetics in particular, for the dissemination of pseudo-scientific views, for adventurism, for the degradation of learning, and for the defamation, firing, arrest, even death, of many genuine scientists."

Thursday, June 04, 2020

Leif Wenar on the Resource Curse and Impact Philosophy -- Manifold Episode #49



Corey and Steve interview Leif Wenar, Professor of Philosophy at Stanford University and author of Blood Oil. They begin with memories of Leif and Corey’s mutual friend David Foster Wallace and end with a discussion of John Rawls and Robert Nozick (Wenar's thesis advisor at Harvard, and a friend of Steve's). Corey asks whether Leif shares his view that analytic philosophy had become too divorced from wider intellectual life. Leif explains his effort to re-engage philosophy in the big issues of our day as Hobbes, Rousseau, Locke, Mill and Marx were in theirs. He details how a trip to Nigeria gave him insight into the real problems facing real people in oil-rich countries. Leif explains how the legal concept of “efficiency” led to the resource curse and argues that we should refuse to buy oil from countries that are not minimally accountable to their people. Steve notes that some may find this approach too idealistic and not in the US interest. Leif suggests that what philosophers can contribute is the ability to see the big synthetic picture in a complex world.

Transcript

Leif Wenar (Bio)

Blood Oil: Tyrants, Violence, and the Rules That Run the World

John Rawls - Stanford Encyclopedia of Philosophy

Robert Nozick - Stanford Encyclopedia of Philosophy


man·i·fold /ˈmanəˌfōld/ many and various.

In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point.

Steve Hsu and Corey Washington have been friends for almost 30 years, and between them hold PhDs in Neuroscience, Philosophy, and Theoretical Physics. Join them for wide ranging and unfiltered conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Steve Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech. Educated at Caltech and Berkeley, he was a Harvard Junior Fellow and held faculty positions at Yale and the University of Oregon before joining MSU.

Corey Washington is Director of Analytics in the Office of Research and Innovation at Michigan State University. He was educated at Amherst College and MIT before receiving a PhD in Philosophy from Stanford and a PhD in Neuroscience from Columbia. He held faculty positions at the University of Washington and the University of Maryland. Prior to MSU, Corey worked as a biotech consultant and is founder of a medical diagnostics startup.

Saturday, May 09, 2020

Pure State Quantum Thermalization: from von Neumann to the Lab


Perhaps the most fundamental question in thermodynamics and statistical mechanics is: Why do systems tend to evolve toward thermal equilibrium? Equivalently, why does entropy tend to increase? Because Nature is quantum mechanical, a satisfactory answer to this question has to arise within quantum mechanics itself. The answer was already given in a 1929 paper by von Neumann. However, the ideas were not absorbed (in fact, they were misunderstood) by the physics community and were only rediscovered in the 21st century! General awareness of these results is still rather limited.

See this 2011 post: Classics on the arxiv: von Neumann and the foundations of quantum statistical mechanics.

In modern language, we would say something to the effect that "typical" quantum pure states are highly entangled, and the density matrix describing any small sub-system (obtained by tracing over the rest of the pure state) is very close to micro-canonical (i.e., thermal). Under dynamical (Schrodinger) evolution, all systems (even those that are initially far from typical) spend nearly all of their time in a typical state (modulo some weak conditions on the Hamiltonian). Typicality of states is related to concentration of measure in high dimensional Hilbert spaces. One could even claim that the origin of thermodynamics lies in the geometry of Hilbert space itself.
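
This concentration-of-measure statement is easy to check numerically. The sketch below is my own toy example in Python/NumPy (not code from any of the papers discussed here; the qubit counts and variable names are arbitrary choices): it draws a Haar-random pure state of 10 qubits, traces out 8 of them, and verifies that the reduced density matrix of the remaining 2 is very close to maximally mixed, with entanglement entropy near its maximum ln d_A.

```python
# Toy sketch: a Haar-random pure state has a nearly thermal (here, maximally
# mixed) reduced density matrix on a small subsystem. All parameters are
# illustrative choices, not taken from the papers discussed in the post.
import numpy as np

n_A, n_B = 2, 8                      # qubits in subsystem A and in the "bath" B
d_A, d_B = 2**n_A, 2**n_B

rng = np.random.default_rng(0)

# Haar-random pure state: complex Gaussian vector, normalized
psi = rng.normal(size=d_A * d_B) + 1j * rng.normal(size=d_A * d_B)
psi /= np.linalg.norm(psi)

# Reduced density matrix rho_A = Tr_B |psi><psi|
M = psi.reshape(d_A, d_B)            # coefficients c_{ab}
rho_A = M @ M.conj().T

# Trace distance from the maximally mixed state on A
trace_dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho_A - np.eye(d_A) / d_A)))

# Entanglement entropy of A vs its maximum value ln(d_A)
p = np.clip(np.linalg.eigvalsh(rho_A), 1e-12, None)
S_A = -np.sum(p * np.log(p))

print(f"trace distance from maximally mixed: {trace_dist:.4f}")
print(f"S_A = {S_A:.4f}, ln(d_A) = {np.log(d_A):.4f}")
```

With no Hamiltonian in the picture (equivalently, at infinite temperature) the "thermal" state of the subsystem is just the maximally mixed one; the trace distance comes out small for these sizes and shrinks rapidly as the bath dimension grows.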

[ It's worth noting that vN's paper does more than just demonstrate these results. It also gives an explicit construction of macroscopic classical (commuting) observables arising in a large Hilbert space. This construction would be a nice thing to include in textbooks for students trying to connect the classical and quantum worlds. ]

Recently I came across an experimental realization of these theoretical results, using cold atoms in an optical lattice (Greiner lab at Harvard):
Quantum thermalization through entanglement in an isolated many-body system

Science 353, 794-800 (2016)    arXiv:1603.04409v3

The concept of entropy is fundamental to thermalization, yet appears at odds with basic principles in quantum mechanics. Statistical mechanics relies on the maximization of entropy for a system at thermal equilibrium. However, an isolated many-body system initialized in a pure state will remain pure during Schrodinger evolution, and in this sense has static, zero entropy. The underlying role of quantum mechanics in many-body physics is then seemingly antithetical to the success of statistical mechanics in a large variety of systems. Here we experimentally study the emergence of statistical mechanics in a quantum state, and observe the fundamental role of quantum entanglement in facilitating this emergence. We perform microscopy on an evolving quantum system, and we see thermalization occur on a local scale, while we measure that the full quantum state remains pure. We directly measure entanglement entropy and observe how it assumes the role of the thermal entropy in thermalization. Although the full state remains measurably pure, entanglement creates local entropy that validates the use of statistical physics for local observables. In combination with number-resolved, single-site imaging, we demonstrate how our measurements of a pure quantum state agree with the Eigenstate Thermalization Hypothesis and thermal ensembles in the presence of a near-volume law in the entanglement entropy.
Note: given the original vN results, I think the Eigenstate Thermalization Hypothesis is only of limited interest. [ But see comments for more discussion... ] The point is that this is a laboratory demonstration of pure state thermalization, anticipated in 1929 by vN.

Another aspect of quantum thermalization that is still not very well appreciated is that the approach to equilibrium can have a very different character than what students are taught in statistical mechanics. The physical picture behind the Boltzmann equation is semi-classical: collisions between atoms happen serially as two gases equilibrate. But Schrodinger evolution of the pure state (all the degrees of freedom together) toward typicality can take advantage of quantum parallelism: all possible collisions take place on different parts of the quantum superposition state. Consequently, the timescale for quantum thermalization can be much shorter than in the semi-classical Boltzmann description.
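
To see both features in the smallest possible setting — the global state staying exactly pure while the local (entanglement) entropy grows and saturates — here is a toy exact-diagonalization sketch. It is my own example: the chain length, the tilted-field Ising couplings, and all names are hypothetical choices, not anything from the experiment or papers above.

```python
# Toy sketch: Schrodinger evolution of a pure product state under a small
# non-integrable spin chain. The global state remains pure, but the half-chain
# entanglement entropy grows from zero and saturates near its thermal value.
import numpy as np

L = 8
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    # Embed a single-site operator at `site` into the L-spin Hilbert space
    mats = [I2] * L
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Tilted-field Ising chain (illustrative non-integrable parameter choice):
# H = sum_i Z_i Z_{i+1} + h_x sum_i X_i + h_z sum_i Z_i
h_x, h_z = 0.905, 0.809
H = sum(op_at(Z, i) @ op_at(Z, i + 1) for i in range(L - 1))
H = H + sum(h_x * op_at(X, i) + h_z * op_at(Z, i) for i in range(L))

# Initial product state of alternating |+> and |-> spins: zero entanglement,
# and <H> = 0, i.e. the middle of the spectrum ("infinite temperature")
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
psi0 = np.array([1.0], dtype=complex)
for i in range(L):
    psi0 = np.kron(psi0, plus if i % 2 == 0 else minus)

evals, V = np.linalg.eigh(H)        # exact diagonalization
c0 = V.conj().T @ psi0              # expand psi0 in the energy eigenbasis

d_A = 2 ** (L // 2)                 # half-chain cut: subsystem A = first 4 spins
for t in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
    psi_t = V @ (np.exp(-1j * evals * t) * c0)   # Schrodinger evolution
    M = psi_t.reshape(d_A, -1)
    p = np.clip(np.linalg.eigvalsh(M @ M.conj().T), 1e-12, None)
    S_A = -np.sum(p * np.log(p))                 # entanglement entropy of A
    print(f"t = {t:4.1f}   S_A = {S_A:.3f}   (ln d_A = {np.log(d_A):.3f})")
```

The global state is just a vector rotated by a unitary, so it stays exactly pure; only the local description becomes thermal. For this tiny system S_A saturates within a few units of time, somewhat below the maximum ln d_A, at a value set by the energy of the initial state.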

In 2015 my postdoc C.M. Ho (now director of an AI lab in Silicon Valley) and I pointed out that quantum thermalization was likely already realized in heavy ion collisions at RHIC and CERN, and that the quantum nature of the process was responsible for the surprisingly short time required to approach equilibrium (equivalently, to generate large amounts of entanglement entropy).

Entanglement and fast thermalization in heavy ion collisions (see also slides here).


Entanglement and Fast Quantum Thermalization in Heavy Ion Collisions (arXiv:1506.03696)

Chiu Man Ho, Stephen D. H. Hsu

Let A be a subsystem of a larger system A∪B, and ψ be a typical state from the subspace of the Hilbert space H_AB satisfying an energy constraint. Then ρ_A(ψ)=Tr_B |ψ⟩⟨ψ| is nearly thermal. We discuss how this observation is related to fast thermalization of the central region (≈A) in heavy ion collisions, where B represents other degrees of freedom (soft modes, hard jets, co-linear particles) outside of A. Entanglement between the modes in A and B plays a central role; the entanglement entropy S_A increases rapidly in the collision. In gauge-gravity duality, S_A is related to the area of extremal surfaces in the bulk, which can be studied using gravitational duals.
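
The central claim of the abstract — that a typical state drawn from an energy-constrained subspace has a nearly thermal reduced density matrix — can also be checked in a small toy model. The following sketch is again my own (a hypothetical 10-spin Ising chain; nothing here comes from the paper): it draws a random superposition of eigenstates in a narrow energy window and compares ρ_A for a two-site subsystem to the micro-canonical average over that window.

```python
# Toy sketch: typicality under an energy constraint. Draw a Haar-random state
# from the subspace spanned by eigenstates in a narrow energy window and
# compare rho_A to the micro-canonical average over that window.
# All model parameters are illustrative choices.
import numpy as np

L = 10
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    mats = [I2] * L
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Same kind of toy non-integrable Ising chain as in the earlier sketch
H = sum(op_at(Z, i) @ op_at(Z, i + 1) for i in range(L - 1))
H = H + sum(0.905 * op_at(X, i) + 0.809 * op_at(Z, i) for i in range(L))

evals, V = np.linalg.eigh(H)

# Energy constraint: eigenstates in a narrow window near the middle of the
# spectrum (the effective micro-canonical subspace)
window = np.where(np.abs(evals - 0.0) < 1.0)[0]

def rho_A(psi, n_A=2):
    # Reduced density matrix of the first n_A spins: rho_A = Tr_B |psi><psi|
    M = psi.reshape(2 ** n_A, -1)
    return M @ M.conj().T

# A "typical" state: Haar-random superposition of the window eigenstates
rng = np.random.default_rng(1)
c = rng.normal(size=len(window)) + 1j * rng.normal(size=len(window))
psi = V[:, window] @ (c / np.linalg.norm(c))

# Compare rho_A of the random pure state to the micro-canonical average
rho_typ = rho_A(psi)
rho_mc = sum(rho_A(V[:, k]) for k in window) / len(window)
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho_typ - rho_mc)))
print(f"{len(window)} eigenstates in window; trace distance = {dist:.3f}")
```

For a window at the center of the spectrum the micro-canonical average approximates the infinite-temperature thermal state, and the trace distance shrinks as the dimension of the energy window grows — concentration of measure at work in the constrained subspace.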



An earlier blog post Ulam on physical intuition and visualization mentioned the difference between intuition for familiar semiclassical (incoherent) particle phenomena, versus for intrinsically quantum mechanical (coherent) phenomena such as the spread of entanglement and its relation to thermalization.
[Ulam:] ... Most of the physics at Los Alamos could be reduced to the study of assemblies of particles interacting with each other, hitting each other, scattering, sometimes giving rise to new particles. Strangely enough, the actual working problems did not involve much of the mathematical apparatus of quantum theory although it lay at the base of the phenomena, but rather dynamics of a more classical kind—kinematics, statistical mechanics, large-scale motion problems, hydrodynamics, behavior of radiation, and the like. In fact, compared to quantum theory the project work was like applied mathematics as compared with abstract mathematics. If one is good at solving differential equations or using asymptotic series, one need not necessarily know the foundations of function space language. It is needed for a more fundamental understanding, of course. In the same way, quantum theory is necessary in many instances to explain the data and to explain the values of cross sections. But it was not crucial, once one understood the ideas and then the facts of events involving neutrons reacting with other nuclei.
This "dynamics of a more classical kind" did not require intuition for entanglement or high dimensional Hilbert spaces. But see von Neumann and the foundations of quantum statistical mechanics for examples of the latter.
