We give an elementary account of quantum measurement and related topics from the modern perspective of decoherence. The discussion should be comprehensible to students who have completed a basic course in quantum mechanics with exposure to concepts such as Hilbert space, density matrices, and von Neumann projection (``wavefunction collapse'').
Peter Byrne is an investigative reporter and science writer based in Northern California. His popular biography, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family (Oxford University Press, 2010), was followed by The Everett Interpretation of Quantum Mechanics: Collected Works 1957-1980 (Princeton University Press, 2012), co-edited with philosopher of science Jeffrey A. Barrett of UC Irvine.
Everett's formulation of quantum mechanics, which implies the existence of a quantum multiverse, is favored by a significant (and growing) fraction of working physicists.
Steve and Peter discuss:
0:00 How Peter Byrne came to write a biography of Hugh Everett
18:09 Everett’s personal life and groundbreaking thesis as a catalyst for the book
24:00 Everett and Decoherence
31:25 Reaction of other physicists to Everett’s many worlds theory
40:46 Steve’s take on Everett’s many worlds theory
43:41 Peter on the bifurcation of science and philosophy
Vlatko Vedral is Professor in the Department of Physics at the University of Oxford and at the Centre for Quantum Technologies (CQT) at the National University of Singapore. He is known for his research on quantum entanglement and quantum information theory.
Steve and Vlatko discuss:
1. History of quantum information theory, entanglement, and quantum computing
2. Recent lab experiments that create superposition states of macroscopic objects, including a living creature (tardigrade)
3. Whether quantum mechanics implies the existence of many worlds: are you in a superposition state right now?
I have waited for this development since 2009 (see old post below).
The fact that a macroscopic, living organism can be placed in a superposition state may come as a shock to many people, including a number of physicists.
If a tardigrade can exist in a superposition state, why can't you?
Are you in a superposition state right now?
Is there some special class of objects that "collapse wavefunctions"? (Copenhagen) ... It's ridiculous, absurd. In any case we now know that tardigrades are not in that class.
Quantum and biological systems are seldom discussed together as they seemingly demand opposing conditions. Life is complex, "hot and wet" whereas quantum objects are small, cold and well controlled. Here, we overcome this barrier with a tardigrade -- a microscopic multicellular organism known to tolerate extreme physicochemical conditions via a latent state of life known as cryptobiosis. We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub-10 mK temperatures and pressure of 6×10^−6 mbar, setting a new record for the conditions that a complex form of life can survive.
From the paper:
In our experiments, we use specimens of a Danish population of Ramazzottius varieornatus Bertolani and Kinchin, 1993 (Eutardigrada, Ramazzottiidae). The species belongs to phylum Tardigrada comprising microscopic invertebrate animals with an adult length of 50-1200 µm [12]. Importantly, many tardigrades show extraordinary survival capabilities [13] and selected species have previously been exposed to extremely low temperatures of 50 mK [14] and low Earth orbit pressures of 10^−19 mbar [15]. Their survival in these extreme conditions is possible thanks to a latent state of life known as cryptobiosis [2, 13]. Cryptobiosis can be induced by various extreme physicochemical conditions, including freezing and desiccation. Specifically, during desiccation, tardigrades reduce volume and contract into an ametabolic state, known as a “tun”. Revival is achieved by reintroducing the tardigrade into liquid water at atmospheric pressure. In the current experiments, we used desiccated R. varieornatus tuns with a length of 100-150 µm. Active adult specimens have a length of 200-450 µm. The revival process typically takes several minutes.
We place a tardigrade tun on a superconducting transmon qubit and observe coupling between the qubit and the tardigrade tun via a shift in the resonance frequency of the new qubit-tardigrade system. This joint qubit-tardigrade system is then entangled with a second superconducting qubit. We reconstruct the density matrix of this coupled system experimentally via quantum state tomography. Finally, the tardigrade is removed from the superconducting qubit and reintroduced to atmospheric pressure and room temperature. We observe the resumption of its active metabolic state in water.
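For intuition about what "observe coupling ... via a shift in the resonance frequency" means, the standard dispersive picture from circuit QED is a useful reference. The following is my gloss in the simplest two-mode approximation, not necessarily the authors' detailed microscopic model: a qubit of frequency \omega_q coupled with strength g to one additional mode of frequency \omega_T (here an effective oscillator built from the tardigrade's internal charges) has its frequency pulled by an amount set by the detuning,

\[
H \approx \frac{\hbar\omega_q}{2}\,\sigma_z + \hbar\omega_T\, a^\dagger a + \hbar g\,(\sigma_+ a + \sigma_- a^\dagger) ,
\qquad
\delta\omega_q \approx \frac{g^2}{\Delta} , \quad \Delta = \omega_q - \omega_T .
\]

Measuring the shift \delta\omega_q with and without the tun on the transmon is indirect evidence for a coupling g; whether that coupling justifies calling the tardigrade entangled is exactly the point debated below.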
****
Note Added: I wrote this post the day after getting a Covid booster and a shingles vaccine, so I was a little zonked out and was not able to look at the details at the time.
The authors claim that the B qubit states B0 and B1 are entangled with two different internal states of T (tardigrade): B0 T0, B1 T1.
Then they further entangle B with the other qubit A to make more complex states.
In the supplement they analyze the density matrix for this d=8 Hilbert space, and claim to have measured quantities which imply tripartite entanglement. The results seem to depend on theoretical modeling -- I don't think they made any direct measurements on T.
They do not present any uncertainty analysis of the tripartite entanglement measure π.
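For readers unfamiliar with such quantities: measures like the π used here are typically built from negativities of the reconstructed density matrix, i.e., from the negative eigenvalues that can appear when an entangled state is partially transposed. As a rough illustration only -- a minimal numpy sketch of the bipartite case, not the authors' analysis or the tripartite π itself:

import numpy as np

def partial_transpose(rho, dims, sys=0):
    # Partial transpose of a bipartite density matrix with subsystem dimensions dims = (dA, dB).
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    r = r.transpose(2, 1, 0, 3) if sys == 0 else r.transpose(0, 3, 2, 1)
    return r.reshape(dA * dB, dA * dB)

def negativity(rho, dims):
    # N = (||rho^{T_A}||_1 - 1) / 2; N > 0 certifies entanglement across the cut.
    eigs = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return (np.sum(np.abs(eigs)) - 1.0) / 2.0

# Example: the Bell state (|00> + |11>)/sqrt(2) gives the maximal two-qubit value 1/2.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
print(negativity(np.outer(psi, psi.conj()), (2, 2)))   # ~0.5

An honest error bar on π would require propagating the tomography uncertainties through this kind of computation, which is the analysis the paper does not show.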
The line in the main body of the paper that sounds convincing is We reconstruct the density matrix of this coupled system experimentally via quantum state tomography (see Fig 3), but the devil is in the details:
... a microscopic model where the charges inside the tardigrade are represented as effective harmonic oscillators that couple to the electric field of the qubit via the dipole mechanism... [This theoretical analysis results in the B0 T0, B1 T1 system where T0, T1 are effective qubits formed of tardigrade internal degrees of freedom.]
...
We applied 16 different combinations of one-qubit gates on qubit A and dressed states of the joint qubit B-tardigrade system. We then jointly readout the state of both qubits using the cavity ...
Some commentary online is very skeptical of their claims, see here for example.
More (12/23/2021): One of the co-authors is Vlatko Vedral, a well-known theorist who works in this area. His recent blog post Entangled Tardigrades is worth a look.
After thinking a bit more, the B0 T0, B1 T1 description of the system seems plausible to me. So, although they don't make direct measurements on T (only on the combined B-T system), it does seem reasonable to assert that the tardigrade (or at least some collective degree of freedom related to its internal charges) has been placed into a superposition state.
If the creature above (a tardigrade) can be placed in a superposition state, will you accept that you probably can be as well? And once you admit this, will you accept that you probably actually DO exist in a superposition state already?
It may be disturbing to learn that we live in a huge quantum multiverse, but was it not also disturbing for Galileo's contemporaries to learn that we live on a giant rotating sphere, hurtling through space at 30 kilometers per second? E pur si muove!
Of course there is no wavefunction collapse, only unitary evolution.
Many people are confused about this -- they have not recovered from what they were taught as beginning students. They still believe in the Tooth Fairy ;-)
Gork is the name of a robot made up by Sidney Coleman for his talk Quantum Mechanics, In Your Face! (video, Gork @40m or so). Before the word entanglement became fashionable, Sidney summarized this talk to me in his office as "Quantum Mechanics is just a theory of correlations, and we live in this tangle of correlations." He may not have said "tangle" -- I am not sure. But he was describing the Everett formulation, trying not to scare a young postdoc :-)
arXiv:2011.11661, to appear in Foundations of Physics
For any choice of initial state and weak assumptions about the Hamiltonian, large isolated quantum systems undergoing Schrodinger evolution spend most of their time in macroscopic superposition states. The result follows from von Neumann's 1929 Quantum Ergodic Theorem. As a specific example, we consider a box containing a solid ball and some gas molecules. Regardless of the initial state, the system will evolve into a quantum superposition of states with the ball in macroscopically different positions. Thus, despite their seeming fragility, macroscopic superposition states are ubiquitous consequences of quantum evolution. We discuss the connection to many worlds quantum mechanics.
... Quantum technology has made great strides over the past two decades and physicists are now able to construct and manipulate systems that were once in the realm of thought experiments. One particularly fascinating avenue of inquiry is the fuzzy border between quantum and classical physics. In the past, a clear delineation could be made in terms of size: tiny objects such as photons and electrons inhabit the quantum world whereas large objects such as billiard balls obey classical physics.
Over the past decade, physicists have been pushing the limits of what is quantum using drum-like mechanical resonators measuring around 10 microns across. Unlike electrons or photons, these drumheads are macroscopic objects that are manufactured using standard micromachining techniques and appear as solid as billiard balls in electron microscope images (see figure). Yet despite the resonators’ tangible nature, researchers have been able to observe their quantum properties, for example, by putting a device into its quantum ground state as Teufel and colleagues did in 2017.
This year, teams led by Teufel and Kotler and independently by Sillanpää went a step further, becoming the first to quantum-mechanically entangle two such drumheads. The two groups generated their entanglement in different ways. While the Aalto/Canberra team used a specially chosen resonant frequency to eliminate noise in the system that could have disturbed the entangled state, the NIST group’s entanglement resembled a two-qubit gate in which the form of the entangled state depends on the initial states of the drumheads. ...
It may come as a surprise to many physicists that Schrodinger evolution in large isolated quantum systems leads generically to macroscopic superposition states. For example, in the familiar Brownian motion setup of a ball interacting with a gas of particles, after sufficient time the system evolves into a superposition state with the ball in macroscopically different locations. We use von Neumann's 1929 Quantum Ergodic Theorem as a tool to deduce this dynamical result.
The natural state of a complex quantum system is a superposition ("Schrodinger cat state"!), absent mysterious wavefunction collapse, which has yet to be fully defined either in logical terms or explicit dynamics. Indeed wavefunction collapse may not be necessary to explain the phenomenology of quantum mechanics. This is the underappreciated meaning of work on decoherence dating back to Zeh and Everett. See talk slides linked here, or the introduction of this paper.
We also derive some new (sharper) concentration of measure bounds that can be applied to small systems (e.g., fewer than 10 qubits).
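A quick numerical illustration of the concentration phenomenon (my toy sketch, not the bounds derived in the paper): the entanglement entropy of Haar-random pure states of n qubits, across an even bipartition, clusters ever more tightly near its maximal value as n grows.

import numpy as np

def random_state(n_qubits, rng):
    # Haar-random pure state: normalized complex Gaussian vector.
    d = 2 ** n_qubits
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def entanglement_entropy(psi, n_a, n_b):
    # Von Neumann entropy (in bits) of the first n_a qubits, via the Schmidt decomposition.
    schmidt = np.linalg.svd(psi.reshape(2 ** n_a, 2 ** n_b), compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
for n in (4, 6, 8, 10):
    ents = [entanglement_entropy(random_state(n, rng), n // 2, n - n // 2) for _ in range(100)]
    # the mean sits close to the maximal n/2 bits and the spread shrinks rapidly with n
    print(n, round(float(np.mean(ents)), 3), round(float(np.std(ents)), 4))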
Fun fact: Professor Buniy was a postdoc in my group at Oregon. Before coming to the US for graduate school in theoretical physics he was among the last group of young men to serve in the Soviet Army (Strategic Missile Forces IIRC!)
I suppose he has a document like this one:
Here he is in 2011, working on the null energy condition and instabilities in quantum field theories:
Quantum gravitational effects suggest a minimal length, or spacetime interval, of order the Planck length. This in turn suggests that Hilbert space itself may be discrete rather than continuous. One implication is that quantum states with norm below some very small threshold do not exist. The exclusion of what Everett referred to as maverick branches is necessary for the emergence of the Born Rule in no collapse quantum mechanics. We discuss this in the context of quantum gravity, showing that discrete models (such as simplicial or lattice quantum gravity) indeed suggest a discrete Hilbert space with minimum norm. These considerations are related to the ultimate level of fine-graining found in decoherent histories (of spacetime geometry plus matter fields) produced by quantum gravity.
From the Discussion:
No collapse (or many worlds) versions of quantum mechanics are often characterized as extravagant, because of the many branches of the wavefunction. However it is also extravagant to postulate that spacetime or Hilbert space are infinitely continuous. Continuous Hilbert space requires that for any two choices of orientation of a qubit spin (see Figure 1), no matter how close together, there are an infinite number of physically distinct states between them, with intermediate orientation. Instead, there may only be a finite (but very large) number of distinct orientations allowed, suggesting a minimum norm in Hilbert space. No experiment can probe absolute continuity, and indeed there seem to be fundamental limits on such experiments, arising from quantum gravity itself.
We illustrated a direct connection between discrete spacetime (the simplex length a) and discrete Hilbert space (minimum non-zero distance in Hilbert space produced by time evolution), in a specific class of quantum gravity models based on Feynman path integrals. It may be the case that maximally fine-grained decoherent histories generated within quantum gravity have discrete geometries and exist in a discrete Hilbert space. Consequently histories with sufficiently small norm are never generated, thereby solving Everett's problem with maverick branches. In the remaining branches, deviations from Born Rule probabilities are almost entirely hidden from semi-classical observers. ...
The Simulation Hypothesis is the idea that our universe might be part of a simulation: we are not living in base reality. (See, e.g., earlier discussion here.)
There are many versions of the argument supporting this hypothesis, which has become more plausible (or at least more popular) over time as computational power, and our familiarity with computers and virtual worlds within them, has increased.
Modern cosmology suggests that our universe, our galaxy, and our solar system, have billions of years ahead of them, during which our civilization (currently only ~10ky old!), and others, will continue to evolve. It seems reasonable that technology and science will continue to advance, delivering ever more advanced computational platforms. Within these platforms it is likely that quasi-realistic simulations, of our world, or of imagined worlds (e.g., games), will be created, many populated by AI agents or avatars. The number of simulated beings could eventually be much larger than the number of biologically evolved sentient beings. Under these assumptions, it is not implausible that we ourselves are actually simulated beings, and that our world is not base reality.
One could object to using knowledge about our (hypothetically) simulated world to reason about base reality. However, the one universe that we have direct observational contact with seems to permit the construction of virtual worlds with large populations of sentient beings. While our simulation may not be entirely representative of base reality, it nevertheless may offer some clues as to what is going on "outside"!
The simulation idea is very old. It is almost as old as computers themselves. However, general awareness of the argument has increased significantly, particularly in the last decade. It has entered the popular consciousness, transcending its origins in the esoteric musings of a few scientists and science fiction authors.
The concept of a quantum computer is relatively recent -- one can trace the idea back to Richard Feynman's early-1980s Caltech course: Physical Limits to Computation. Although quantum computing has become a buzzy part of the current hype cycle, very few people have any deep understanding of what a quantum computer actually is, and why it is different from a classical computer. A prerequisite for this understanding is a grasp of both the physical and mathematical aspects of quantum mechanics, which very few possess. Individuals who really understand quantum computing tend to have backgrounds in physics (especially theoretical physics), or perhaps in computer science or mathematics.
The possibility of quantum computers requires that we reformulate the Simulation Hypothesis in an important way. If one is willing to posit future computers of gigantic power and complexity, why not quantum computers of arbitrary power? And why not simulations which run on these quantum computers, making use of quantum algorithms? After all, it was Feynman's pioneering observation that certain aspects of the quantum world (our world!) are more efficiently simulated using a quantum computer than a classical (e.g., Turing) machine. (See quantum extension of the Church-Turing thesis.) Hence the original Simulation Hypothesis should be modified to the Quantum Simulation Hypothesis: Do we live in a quantum simulation?
There is an important consequence for those living in a quantum simulation: they exist in a quantum multiverse. That is, in the (simulated) universe, the Many Worlds description of quantum mechanics is realized. (It may also be realized in base reality, but that is another issue...) Within the simulation, macroscopic, semiclassical brains perceive only one branch of the almost infinite number of decoherent branches of the multiverse. But all branches are realized in the execution of the unitary algorithm running on qubits. The power of quantum computing, and the difficulty of its realization, both derive from the requirement that entanglement and superposition be maintained in execution.
Given sufficiently powerful tools, the beings in the simulation could test whether quantum evolution of qubits under their control is unitary, thereby verifying the absence of non-unitary wavefunction collapse, and the existence of other branches (see, e.g., Deutsch 1986).
We can give an anthropic version of the argument as follows.
1. The physical laws and cosmological conditions of our universe seem to permit the construction of large numbers of virtual worlds containing sentient beings.
2. These simulations could run on quantum computers, and in fact if the universe being simulated obeys the laws of quantum physics, the hardware of choice is a quantum computer. (Perhaps the simulation must be run on a quantum computer!)
If one accepts points 1 and 2 as plausible, then: Conditional on the existence of sentient beings who have discovered quantum physics (i.e., us), the world around them is likely to be a simulation running on a quantum computer. Furthermore, these beings exist on a branch of the quantum multiverse realized in the quantum computer, obeying the rules of Many Worlds quantum mechanics. The other branches must be there, realized in the unitary algorithm running on (e.g., base reality) qubits.
In quantum mechanics the state of the universe evolves deterministically: the state of the entire universe at time zero fully determines its state at any later time. It is difficult to reconcile this observation with our experience as macroscopic, nearly classical, beings. To us it seems that there are random outcomes: the state of an electron (spin-up in the z direction) does not in general determine the outcome of a measurement of its spin (x direction measurement probability 1/2 of either spin up or down). This is because our brains (information processing devices) are macroscopic: one macroscopic state (memory record) is associated with the spin up outcome, which rapidly loses contact (decoheres) from the other macroscopic state with memory record of the spin down outcome. Nevertheless, the universe state, obtained from deterministic Schrodinger evolution of the earlier state, is a superposition:
| brain memory recorded up, spin up >
+
| brain memory recorded down, spin down >.
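Schematically, with the measurement interaction acting purely unitarily (the z-up spin rewritten in the x basis):

\[
|\text{ready}\rangle \otimes \tfrac{1}{\sqrt{2}} \big( |\!\uparrow_x\rangle + |\!\downarrow_x\rangle \big)
\;\longrightarrow\;
\tfrac{1}{\sqrt{2}} \big( |\text{recorded up}\rangle \, |\!\uparrow_x\rangle
+ |\text{recorded down}\rangle \, |\!\downarrow_x\rangle \big) .
\]

No step in this evolution is anything other than the Schrodinger equation; the apparent randomness exists only from the perspective of either memory record.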
We are accustomed to thinking about classical information processing machines: brains and computers. However, with the advent of quantum computers a new possibility arises: a device which (necessarily) resides in a superposition state, and uses superposition as an integral part of its information processing.
What can we say about this kind of (quantum) intelligence? Must it be "artificial"? Could there be a place in the multiverse where evolved biological beings use superposition and entanglement as a resource for information processing?
Any machine of the type described above must be vast and cold. Vast, because many qubits are required for self-awareness and consciousness (just as many bits are required for classical AI). Cold, because decoherence destroys connections across superpositions. Too much noise (heat), and it devolves back to isolated brains, residing on decohered branches of the wavefunction.
One could regard human civilization as a single intelligence or information processing machine. This intelligence is rapidly approaching the point where it will start to use entanglement as a significant resource. It is vast, and (in small regions -- in physics labs) cold enough. We can anticipate more and larger quantum computers distributed throughout our civilization, making greater and greater use of nearby patches of the multiverse previously inaccessible.
Perhaps some day a single quantum computer might itself be considered intelligent -- the first of a new kind!
What will it think?
Consciousness in a mini multiverse... Thoughts which span superpositions.
This change in zeitgeist makes the thought experiment proposed below much less outlandish. What, exactly, does Gork perceive? Why couldn't you be Gork? (Note that the AGI in Gork can be an entirely classical algorithm even though he exists in a quantum simulation.)
Slide from this [Caltech IQI] talk. See also illustrations in Big Ed.
Survey questions:
1) Could you be Gork the robot? (Do you split into different branches after observing the outcome of, e.g., a Stern-Gerlach measurement?)
2) If not, why? e.g.,
I have a soul and Gork doesn't! Copenhagen people, please use exit on your left.
Decoherence solved all that! Sorry, try again. See previous post.
I don't believe that quantum computers will work as designed, e.g., sufficiently large algorithms or subsystems will lead to real (truly irreversible) collapse. Macroscopic superpositions that are too big (larger than whatever was done in the lab last week!) are impossible.
QM is only an algorithm for computing probabilities -- there is no reality to the quantum state or wavefunction or description of what is happening inside a quantum computer. Tell this to Gork!
Stop bothering me -- I only care about real stuff like the Higgs mass / SUSY-breaking scale / string Landscape / mechanism for high-Tc / LIBOR spread / how to generate alpha.
[ 2018: Ha Ha -- first 3 real stuff topics turned out to be pretty boring use of the last decade... ]
Just as A. and B. above have become less outlandish assumptions, our ability to create large and complex superposition states with improved technology (largely developed for quantum computing; see Schrodinger's Virus) will make the possibility that we ourselves exist in a superposition state less shocking. Future generations of physicists will wonder why it took their predecessors so long to accept Many Worlds.
Bonus! I will be visiting Caltech next week (Tues and Weds 1/8-9). Any blog readers interested in getting a coffee or beer please feel free to contact me :-)
In this public lecture Weinberg explains the problems with the two predominant interpretations of quantum mechanics, which he refers to as Instrumentalist (e.g., Copenhagen) and Realist (e.g., Many Worlds). The term "interpretation" may be misleading because what is ultimately at stake is the nature of physical reality. Both interpretations have serious problems, but the problem with Realism (in Weinberg's view, and my own) is not the quantum multiverse, but rather the origin of probability within deterministic Schrodinger evolution. Instrumentalism is, of course, ill-defined nutty mysticism 8-)
Physicists will probably want to watch this at 1.5x or 2x speed. The essential discussion is at roughly 22-40min, so it's only a 10 minute investment of your time. These slides explain in pictures.
It is a shame that very few working physicists, even theoreticians, have thought carefully and deeply about quantum foundations. Perhaps Weinberg's fine summary will stimulate greater awareness of this greatest of all unresolved problems in science.
Earlier I quoted Weinberg:
... today there is no interpretation of quantum mechanics that does not have serious flaws.
Posts on this blog related to the Born Rule, etc., and two of my papers:
Google knows enough about me that my YouTube feed now routinely suggests content of real interest. A creepy but positive development ;-)
Today YouTube suggested this video of Murray Gell-Mann talking about Everett, decoherence, and quantum mechanics. I had seen this video on another web site years ago and blogged about it (post reproduced below), but now someone has uploaded it to YouTube.
After the talk I had a long conversation with John Preskill about many worlds, and he pointed out to me that both Feynman and Gell-Mann were strong advocates: they would go so far as to browbeat visitors on the topic. In fact, both claimed to have invented the idea independently of Everett.
This site is a treasure trove of interesting video interviews -- including with Francis Crick, Freeman Dyson, Sydney Brenner, Marvin Minsky, Hans Bethe, Donald Knuth, and others. Many of the interviews have transcripts, which are much faster to read than listening to the interviews themselves.
Here's what Murray Gell-Mann has to say about quantum foundations:
In '63…'64 I worked on trying to understand quantum mechanics, and I brought in Felix Villars and for a while some comments... there were some comments by Dick Feynman who was nearby. And we all agreed on a rough understanding of quantum mechanics and the second law of thermodynamics and so on and so on, that was not really very different from what I'd been working on in the last ten or fifteen years.
I was not aware, and I don't think Felix was aware either, of the work of Everett when he was a graduate student at Princeton and worked on this, what some people have called 'many worlds' idea, suggested more or less by Wheeler. Apparently Everett was, as we learned at the Massagon [sic] meeting, Everett was an interesting person. He… it wasn't that he was passionately interested in quantum mechanics; he just liked to solve problems, and trying to improve the understanding of quantum mechanics was just one problem that he happened to look at. He spent most of the rest of his life working for the Weapon System Evaluation Group in Washington, WSEG, on military problems. Apparently he didn't care much as long as he could solve some interesting problems! [Some of these points, concerning Everett's life and motivations, and Wheeler's role in MW, are historically incorrect.]
Anyway, I didn't know about Everett's work so we discovered our interpretation independent of Everett. Now maybe Feynman knew about… about Everett's work and when he was commenting maybe he was drawing upon his knowledge of Everett, I have no idea, but… but certainly Felix and I didn't know about it, so we recreated something related to it.
Now, as interpreted by some people, Everett's work has two peculiar features: one is that this talk about many worlds and equally… many worlds equally real, which has confused a lot of people, including some very scholarly students of quantum mechanics. What does it mean, 'equally real'? It doesn't really have any useful meaning. What the people mean is that there are many histories of the… many alternative histories of the universe, many alternative coarse-grained, decoherent histories of the universe, and the theory treats them all on an equal footing, except for their probabilities. Now if that's what you mean by equally real, okay, but that's all it means; that the theory treats them on an equal footing apart from their probabilities. Which one actually happens in our experience, is a different matter and it's determined only probabilistically. Anyway, there's considerable continuity between the thoughts of '63-'64 and the thoughts that, and… and maybe earlier in the ‘60s, and the thoughts that Jim Hartle and I have had more recently, starting around '84-'85.
Indeed, Feynman was familiar with Everett's work -- see here and here.
Where Murray says "it's determined only probabilistically" I would say there is a subjective probability which describes how surprised one is to find oneself on a particular decoherent branch or history of the overall wavefunction -- i.e., how likely or unlikely we regard the outcomes we have observed to have been. For more see here.
... although the so-called Copenhagen interpretation is perfectly correct for all laboratory physics, laboratory experiments and so on, it's too special otherwise to be fundamental and it sort of strains credulity. It's… it’s not a convincing fundamental presentation, correct though… though it is, and as far as quantum cosmology is concerned it's hopeless. We were just saying, we were just quoting that old saw: describe the universe and give three examples. Well, to apply the… the Copenhagen interpretation to quantum cosmology, you'd need a physicist outside the universe making repeated experiments, preferably on multiple copies of the universe and so on and so on. It's absurd. Clearly there is a definition to things happening independent of human observers. So I think that as this point of view is perfected it should be included in… in teaching fairly early, so that students aren't convinced that in order to understand quantum mechanics deeply they have to swallow some of this…very… some of these things that are very difficult to believe. But in the end of course, one can use the Copenhagen interpretations perfectly okay for experiments.
I am a Quantum Engineer, but on Sundays I have principles. — J.S. Bell
My own conclusion ... there is no interpretation of quantum mechanics that does not have serious flaws. — Steve Weinberg
I wrote this paper mainly for non-specialists: any theorist should be able to read and understand it. However, I feel the main point — that subjective probability analyses do not resolve the measure problem in many worlds quantum mechanics — is often overlooked, even by the experts.
We explain the measure problem (cf. origin of the Born probability rule) in no-collapse quantum mechanics. Everett defined maverick branches of the state vector as those on which the usual Born probability rule fails to hold -- these branches exhibit highly improbable behaviors, including possibly the breakdown of decoherence or even the absence of an emergent semi-classical reality. An ab initio probability measure is necessary to explain why we do not occupy a maverick branch. Derivations of the Born rule which originate in decision theory or subjective probability do not resolve this problem, because they are circular: they assume, a priori, that we reside on a non-maverick branch.
To put it very succinctly: subjective probability or decision theoretic arguments can justify the Born rule to someone living on a non-maverick branch. But they don't explain why that someone isn't on a maverick branch in the first place.
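A toy version of the problem (my illustration; the numbers are arbitrary): consider N identical measurements of a qubit with Born probability p of "up", and call a branch maverick if its observed frequency of ups deviates from p by more than some epsilon. Counting branches and weighting branches give wildly different verdicts on how common mavericks are, and the whole question is which weighting is justified a priori.

import numpy as np
from scipy.stats import binom

p, N, eps = 0.9, 1000, 0.05
k = np.arange(N + 1)                       # branches grouped by their number of "up" outcomes
maverick = np.abs(k / N - p) > eps         # branches whose relative frequency deviates from p

frac_of_branches = binom.pmf(k, N, 0.5)    # C(N, k) / 2^N: fraction of all 2^N branches with k ups
born_weight = binom.pmf(k, N, p)           # total Born weight carried by those branches

print(frac_of_branches[maverick].sum())    # ~1: counted equally, almost every branch is maverick
print(born_weight[maverick].sum())         # tiny: their total Born weight is negligible

The Born-weight answer is the familiar one, but adopting it as the measure over branches is exactly what is supposed to be derived rather than assumed.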
It seems to me absurd that many tens of thousands of papers have been written about the hierarchy problem in particle physics, but only a small number of theorists realize we don't have a proper (logically complete) quantum theory at the fundamental level.
I have been eagerly awaiting Steven Weinberg's Lectures on Quantum Mechanics, both because Weinberg is a towering figure in theoretical physics, and because of his cryptic comments concerning the origin of probability in no collapse (many worlds) formulations:
Einstein's Mistakes
Steve Weinberg, Physics Today, November 2005
Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wavefunction (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?
Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. [ITALICS MINE. THIS REMINDS ME OF FERMAT'S COMMENT IN THE MARGIN!] It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wavefunction, the Schrödinger equation, to observers and their apparatus. The difficulty is not that quantum mechanics is probabilistic—that is something we apparently just have to live with. The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics. ...
Weinberg's coverage of quantum foundations in section 3.7 of the new book is consistent with what is written above, although he does not resolve the question of how probability arises from the deterministic evolution of the wavefunction. (See here for my discussion, which involves, among other things, the distinction between objective and subjective probabilities; the latter can arise even in a deterministic universe).
1. He finds Copenhagen unsatisfactory: it does not allow QM to be applied to the observer and measuring process; it does not have a clean dividing line between observer and system.
2. He finds many worlds (no collapse, decoherent histories, etc.) unsatisfactory not because of the so-called basis problem (he accepts the unproved dynamical assumption that decoherence works as advertised), but rather because of the absence of a satisfactory origin of the Born rule for probabilities. (In other words, he doesn't elaborate on the "considerable progress..." alluded to in his 2005 essay!)
Weinberg's concluding paragraph:
There is nothing absurd or inconsistent about the ... general idea that the state vector serves only as a predictor of probabilities, not as a complete description of a physical system. Nevertheless, it would be disappointing if we had to give up the "realist" goal of finding complete descriptions of physical systems, and of using this description to derive the Born rule, rather than just assuming it. We can live with the idea that the state of a physical system is described by a vector in Hilbert space rather than by numerical values of the positions and momenta of all the particles in the system, but it is hard to live with no description of physical states at all, only an algorithm for calculating probabilities. My own conclusion (not universally shared) is that today there is no interpretation of quantum mechanics that does not have serious flaws [italics mine] ...
It is a shame that very few working physicists, even theoreticians, have thought carefully and deeply about quantum foundations. Perhaps Weinberg's fine summary will stimulate greater awareness of this greatest of all unresolved problems in science.
"I am a Quantum Engineer, but on Sundays I have principles." -- J.S. Bell
The excerpt below is from the excellent biography Climbing the Mountain by Mehra and Milton. Milton was one of Schwinger's last Harvard grad students, eventually a professor at the University of Oklahoma. Schwinger's view is the one shared by all reasonable physicists: quantum mechanics must apply to the measuring device as well as that which is measured. Once this assumption is made, many worlds follows trivially (as Hawking and others have noted).
(p.369) Schwinger: "To me, the formalism of quantum mechanics is not just mathematics; rather it is a symbolic account of the realities of atomic measurements. That being so, no independent quantum theory of measurement is required -- it is part and parcel of the formalism.
[ ... recapitulates usual von Neumann formulation: unitary evolution of wavefunction under "normal" circumstances; non-unitary collapse due to measurement ... discusses paper hypothesizing stochastic (dynamical) wavefunction collapse ... ]
In my opinion, this is a desperate attempt to solve a non-existent problem, one that flows from a false premise, namely the vN dichotomization of quantum mechanics. Surely physicists can agree that a microscopic measurement is a physical process, to be described as would any physical process, that is distinguished only by the effective irreversibility produced by amplification to the macroscopic level. ..."
Similar views have been expressed by Feynman and Gell-Mann and by Steve Weinberg. Interestingly, this chapter in the biography seems to describe (in slightly odd language) some Schwinger work on decoherence, analyzing a collaborator's claim that Stern-Gerlach beams could be recombined coherently.
Schwinger's precocity, explored in the biography in far greater detail than I had seen before, is overwhelming. At age 17 or so he had read everything there was to read about quantum mechanics, early field theory, nuclear and atomic physics. For example, he had read and understood Dirac's papers, had invented the interaction picture basis, had already read the Einstein, Podolsky, Rosen paper and explained it to Rabi when they first met. He met Bethe and they discussed a problem in quantum scattering (Schwinger had improved Bethe's well-known result and noticed an error that no other theorist had). Bethe later wrote that the 17 year old Schwinger's grasp of quantum electrodynamics was at least as good as his own.
Feyerabend on the giants: "... The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrodinger, Boltzmann, Mach and so on. ..."
Schwinger survived both Feynman and Tomonaga, with whom he shared the Nobel prize for quantum electrodynamics. He began his eulogy for Feynman: "I am the last of the triumvirate ..."
I've been corresponding with a German theoretical physicist ("R") recently about quantum mechanics and thought I would share some of it here.
[R] Dear Prof. Hsu: I enjoyed reading your recent, very clearly written paper On the origin of probability in quantum mechanics very much. I discussed its subject matter oftentimes with Hans-Dieter Zeh ... We both think that many worlds is an idea that is probably true in some sense.
[ME] I have corresponded with Dieter over the years and read most (all?) of his work in this area. I would say we do not really disagree about anything. To me many worlds (MW) is very appealing and should really be considered the "minimal" interpretation of QM since I do not know of any other logically complete interpretations. However, anyone who endorses MW should think very carefully about the origin of probability. Since MW is really a deterministic theory (at least from the viewpoint of a "global" observer not subject to decoherence), the only kind of probabilities it allows are subjective ones. It is disturbing to me that most versions of me in the multiverse do not believe in the Born Rule (and probably then don't believe in QM!). MW proponents (e.g., Deutsch) would like to argue that, subjectively, I should not be "surprised" to be one of the few versions of me that see experimental verification of the Born Rule, but I am still uncomfortable about this. (The use of "most" above implies adopting a measure, and that is the root of all problems here.) I hope this helps -- all I've done in the above paragraphs is recapitulate the paper you already read!
[ME] The "subjective" nature of probability is because the theory is actually deterministic. (Einstein would have liked it, except for the many branches in the wavefunction.)
Let's suppose you live in a deterministic world and are about to flip a coin. You assign a probability to the outcome because you don't know what it will be. In secret, the outcome is already determined. To you, the process appears probabilistic, but really it is not. That is actually how MW works, but this is not widely appreciated. See esp. eqn 4 and figure in my paper.
Copenhagen is not logically complete because it does not explain how QM applies to the particles in the observer (which is always treated classically). Collapse theories have different physical predictions than MW because collapse is not unitary.
[R] Without going into the details, it seems absolutely clear to me that the main protagonists of Copenhagen, Heisenberg, Pauli, Bohr etc. did not believe that there is some explicit, QM-violating collapse mechanism. Do u agree?
[ME] I can't read the minds of the ancients. The only clear formulation is that of von Neumann, and there a measurement outcome requires collapse = non-unitary projection.
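(For concreteness, von Neumann's formulation has exactly two rules: deterministic unitary evolution,

\[
|\psi\rangle \;\to\; e^{-iHt/\hbar}\,|\psi\rangle ,
\]

interrupted at a measurement by the non-unitary projection

\[
|\psi\rangle \;\to\; \frac{P_k\,|\psi\rangle}{\lVert P_k\,|\psi\rangle \rVert}
\quad \text{with probability } \lVert P_k\,|\psi\rangle \rVert^2 .
\]

The second rule is the collapse step; MW keeps only the first and asks decoherence to do the rest.)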
[R] A lack of free will is actually also the way out of Bell for Gerard 't Hooft, and he convinced me that the idea is not so crazy at all. I don't know why this loophole got so little attention in Bell experiments. What is your take?
[ME] ... it is funny that everyone (physicists should know better) assumes a priori that we have free will. For example, the Free Will Theorem guys (admittedly, they are only mathematicians ;-) take it for granted.
... Strangely, not many people understand how MWI evades Bell without non-locality. There are a couple of papers on this but they are not well appreciated. Actually the result is kind of trivial.
... MW has no problem with Bell's inequality because MW reproduces [see footnote #] the experimental predictions of the CI (Conventional or Copenhagen or Collapse Interpretation). An experimenter in a MW universe will not observe violation of Bell's inequality, or of the GHZ prediction, etc.
Does this mean that MW avoids non-locality? That depends on what you mean by non-locality (I imagine this is relevant to your H-D anecdote). On the one hand the Hamiltonian is local and the evolution of Psi is deterministic, so from that perspective there is obviously nothing non-local going on: Psi(x,t) only affects Psi(x',t') if (x',t') is in the forward lightcone of (x,t). From other perspectives one can speak of "non-local correlations" or influences, but I find this to be simply creating mystery where there is none.
More succinctly, in a deterministic theory with a local evolution equation (Schrodinger equation with local Hamiltonian), there cannot be any non-locality. Just think about the wave equation.
# The exception is macroscopic interference experiments as proposed by Deutsch that can tell the difference between reversible (unitary) and irreversible (collapse) theories. But these experiments are not yet technically feasible.
[R] No sorry, I must think beyond "just the wave equation". I must think about "result of a measurement" when facing the Bell trouble.
[ME] The great beauty of decoherence and MW is that it takes the mystery out of "measurement" and shows it to simply result from the unitary evolution of the wavefunction. There is no mystery and, indeed, everything is governed by a causal wave-like equation (Schrodinger equation).
Rather than belabor this further I will refer you to more detailed treatments like the ones below:
[Reference 36] Our explanation of the many-worlds interpretation branching in the text follows similar descriptions by Don N. Page, “The Einstein–Podolsky–Rosen physical reality is completely described by quantum mechanics,” Phys. Lett. A 91, 57–60 (1982); Michael Clive Price, “The Everett FAQ,” www.hedweb.com/manworld.htm; and C. Hewitt-Horsman and V. Vedral, “Entanglement without nonlocality,” Phys. Rev. A 76, 062319-1–8 (2007).
... As I said, "non-locality" must be defined carefully. Even standard QFT can appear "non-local" to the foolish (positrons go backwards in time!). Recall that MW is the most "realistic" of all QM interpretations -- Psi contains all information (including about what is happening in a given mind, the process of measurement, etc.), and Psi evolves entirely causally in spacetime. So any mystery about this is manufactured. In the papers linked to above you can track exactly what happens in an EPR/Bell experiment in MW and see that everything is local; but the result is trivial from the beginning if you grasp the points I made above.
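For reference, the predictions that any account (collapse or MW) must reproduce are just the Born-rule singlet correlations, which violate the CHSH inequality. A minimal numerical check (my sketch, independent of interpretation):

import numpy as np

def spin(theta):
    # Spin measurement along angle theta in the x-z plane: cos(theta)*sigma_z + sin(theta)*sigma_x.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.cos(theta) * sz + np.sin(theta) * sx

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def E(a, b):
    # Correlation <psi| spin(a) x spin(b) |psi> = -cos(a - b) for the singlet.
    return float(np.real(singlet.conj() @ np.kron(spin(a), spin(b)) @ singlet))

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83, above the local-hidden-variable bound of 2

The MW claim is not that these correlations go away, but that they are produced by local unitary evolution plus branching, with no instantaneous collapse anywhere.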
My preprint http://arxiv.org/pdf/1205.1584v1.pdf (which appeared yesterday evening on arxiv) has already elicited a response from the brilliant and eccentric Lubos Motl.
Lubos believes in the "subjective" interpretation of the quantum state, so objects to the idea of a unitarily-evolving wavefunction describing all degrees of freedom in the universe. (See here for more discussion.) I, on the other hand, am willing to consider the possibility that many worlds is correct. Here is how Lubos characterizes the disagreement:
Buniy and Hsu also seem to be confused about the topics that have been covered hundreds of times on this blog. In particular, the right interpretation of the state is a subjective one. Consequently, all the properties of a state – e.g. its being entangled – are subjective as well. They depend on what the observer just knows at a given moment. Once he knows the detailed state of objects or observables, their previous entanglement becomes irrelevant.
... When I read papers such as one by Buniy and Hsu, I constantly see the wrong assumption written everything in between the lines – and sometimes inside the lines – that the wave function is an objective wave and one may objectively discuss its properties. Moreover, they really deny that the state vector should be updated when an observable is changed. But that's exactly what you should do. The state vector is a collection of complex numbers that describe the probabilistic knowledge about a physical system available to an observer and when the observer measures an observable, the state instantly changes because the state is his knowledge and the knowledge changes!
In the section of our paper on Schmidt decomposition, we write
A measurement of subsystem A which determines it to be in state ψ^(n)_A implies that the rest of the universe must be in state ψ^(n)_B. For example, A might consist of a few spins [9]; it is interesting, and perhaps unexpected, that a measurement of these spins places the rest of the universe into a particular state ψ^(n)_B. As we will see below, in the cosmological context these modes are spread throughout the universe, mostly beyond our horizon. Because we do not have access to these modes, they do not necessarily prevent us from detecting A in a superposition of two or more of the ψ^(n)_A. However, if we had sufficient access to B degrees of freedom (for example, if the relevant information differentiating between ψ^(n)_A states is readily accessible in our local environment or in our memory records), then the A system would decohere into one of the ψ^(n)_A.
This discussion makes it clear that ψ describes all possible branches of the wavefunction, including those that may have already decohered from each other: it describes not just the subjective experience of one observer, but of all possible observers. If we insist on removing decohered branches from the wavefunction (e.g., via collapse or von Neumann projection), then much of the entanglement we discuss in the paper is also excised. However, if we only remove branches that are inconsistent with the observations of a specific single observer, most of it will remain. Note decoherence is a continuous and (in principle) reversible phenomenon, so (at least within a unitary framework) there is no point at which one can say two outcomes have entirely decohered -- one can merely cite the smallness of overlap between the two branches or the level of improbability of interference between them.
I don't think Lubos disagrees with the mathematical statements we make about the entanglement properties of ψ. He may claim that these entanglement properties are not subject to experimental test. At least in principle, one can test whether systems A and B, which are in two different horizon volumes at cosmological time t1, are entangled. We have to wait until some later time t2, when there has been enough time for classical communication between A and B, but otherwise the protocol for determining entanglement is the usual one.
If we leave aside cosmology and consider, for example, the atoms or photons in a box, the same formalism we employ shows that there is likely to be widespread entanglement among the particles. In principle, an experimentalist who is outside the box can test whether the state ψ describing the box is "typical" (i.e., highly entangled) by making very precise measurements.
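The Schmidt decomposition statement in the quoted passage is easy to make concrete. A minimal sketch (illustrative only, not the cosmological calculation in the paper): take a random pure state of a small A subsystem plus a larger B "rest of the universe" and read off the correlated pairs psi^(n)_A, psi^(n)_B from the singular value decomposition.

import numpy as np

rng = np.random.default_rng(1)
n_a, n_b = 2, 8                                   # 2 "system" qubits, 8 "environment" qubits
dA, dB = 2 ** n_a, 2 ** n_b

psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)

# Schmidt decomposition psi = sum_n c_n |psi_A^(n)> |psi_B^(n)>, via SVD of the dA x dB matrix.
u, c, vh = np.linalg.svd(psi.reshape(dA, dB), full_matrices=False)

print(np.round(c ** 2, 4))   # Schmidt weights: probabilities of finding A in each psi_A^(n)
# Column u[:, n] is psi_A^(n); the matching row vh[n, :] gives the state psi_B^(n) into which
# "the rest of the universe" is placed when A is found in psi_A^(n).

For a typical random psi all the Schmidt weights are comparable, i.e., A is strongly entangled with B, which is the generic situation the paper describes.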
In his PhD dissertation, Charles Misner, following a suggestion from his advisor John Wheeler, formulates quantum gravity in terms of the path integral. This article has a very clear explanation for why the Hamiltonian operator in GR is zero.
Of course, in this kind of formulation the "wavefunction of the universe" plays a central role, and the universe is necessarily a closed system. There is no appealing to outside "observers" for help!
Misner was a contemporary of Everett, and played a role in the development of many worlds quantum mechanics. See here for Dieter Zeh's discussion of the 1957 Chapel Hill meeting where Everett's interpretation and quantum gravity were both discussed.
Feynman presents a thought experiment in which a macroscopic mass (source for the gravitational field) is placed in a superposition state. One of the central points is necessarily whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman's (early) thoughts on many worlds and his exposure to Everett's ideas, which apparently occurred even before their publication (see below).
Some interesting discussion by Turing biographer and mathematical physicist Andrew Hodges of Turing's early thoughts about the brain as a quantum computer and the possible connection to quantum measurement. I doubt the brain makes use of quantum coherence (i.e., it can probably be efficiently simulated by a Turing machine), but nevertheless these thoughts led Turing to the fundamental problems of quantum mechanics. He came close to noticing that a quantum computer might be outside the class of machines that a Universal Turing Machine could efficiently simulate.
Hodges' Enigma (biography of Turing) is an incredible triumph. Turing's life was tragic, but at least he was granted a biographer worthy of his contributions to mankind.
A shorter precis of Turing's life and thought, also by Hodges, can be found here.
Hodges: ... Turing described the universal machine property, applying it to the brain, but said that its applicability required that the machine whose behaviour is to be imitated
…should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.
... Turing here is discussing the possibility that, when seen as a quantum-mechanical machine rather than a classical machine, the Turing machine model is inadequate. The correct connection to draw is not with Turing's 1938 work on ordinal logics, but with his knowledge of quantum mechanics from Eddington and von Neumann in his youth. Indeed, in an early speculation, influenced by Eddington, Turing had suggested that quantum mechanical physics could yield the basis of free-will (Hodges 1983, p. 63). Von Neumann's axioms of quantum mechanics involve two processes: unitary evolution of the wave function, which is predictable, and the measurement or reduction operation, which introduces unpredictability. Turing's reference to unpredictability must therefore refer to the reduction process. The essential difficulty is that still to this day there is no agreed or compelling theory of when or how reduction actually occurs. (It should be noted that ‘quantum computing,’ in the standard modern sense, is based on the predictability of the unitary evolution, and does not, as yet, go into the question of how reduction occurs.) It seems that this single sentence indicates the beginning of a new field of investigation for Turing, this time into the foundations of quantum mechanics. In 1953 Turing wrote to his friend and student Robin Gandy that he was ‘trying to invent a new Quantum Mechanics but it won't really work.’
[ Advances in the theory of decoherence and in experimental abilities to precisely control quantum systems have led to a much better understanding of quantum measurement. The unanswered question is, of course, whether wavefunctions actually collapse or whether they merely appear to do so. ]
At Turing's death in June 1954, Gandy reported in a letter to Newman on what he knew of Turing's current work (Gandy 1954). He wrote of Turing having discussed a problem in understanding the reduction process, in the form of
…‘the Turing Paradox’; it is easy to show using standard theory that if a system start in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, 1 second, tends to one as N tends to infinity; i.e. that continual observation will prevent motion. Alan and I tackled one or two theoretical physicists with this, and they rather pooh-poohed it by saying that continual observation is not possible. But there is nothing in the standard books (e.g., Dirac's) to this effect, so that at least the paradox shows up an inadequacy of Quantum Theory as usually presented. ...
[ This is sometimes referred to as the Quantum Zeno Effect. A modern understanding of measurement incorporating decoherence shows that this is not really a paradox. ]
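The numbers behind the "paradox" are easy to check. A minimal sketch (the standard two-level Rabi example, not Turing's or Gandy's own calculation): a system that would completely leave its initial state in one second if left alone stays put if it is projected back onto that state N times along the way.

import numpy as np

# Two-level system with Rabi frequency Omega; the survival amplitude over time t is cos(Omega*t/2).
# Projecting onto the initial state N times in one second gives survival prob [cos^2(Omega/(2N))]^N.
Omega = np.pi   # chosen so that, unobserved, the system has fully left its initial state at t = 1

for N in (1, 10, 100, 1000, 10000):
    print(N, np.cos(Omega / (2 * N)) ** (2 * N))
# The output tends to 1 as N grows: continual observation prevents motion, as Gandy's letter says.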
In a similar way Turing found a home in Cambridge mathematical culture, yet did not belong entirely to it. The division between 'pure' and 'applied' mathematics was at Cambridge then as now very strong, but Turing ignored it, and he never showed mathematical parochialism. If anything, it was the attitude of a Russell that he acquired, assuming that mastery of so difficult a subject granted the right to invade others. Turing showed little intellectual diffidence once in his stride: in March 1933 he acquired Russell's Introduction to Mathematical Philosophy, and on 1 December 1933, the philosopher R. B. Braithwaite minuted in the Moral Science Club records: 'A. M. Turing read a paper on 'Mathematics and logic.' He suggested that a purely logistic view of mathematics was inadequate; and that mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.' At the same time he was studying von Neumann's 1932 Grundlagen der Quantenmechanik. Thus, it may be that Eddington's claims for quantum mechanics had encouraged the shift of Turing's interest towards logical foundations. And it was logic that made Alan Turing's name.