Showing posts with label von Neumann.

Sunday, June 12, 2022

Von Neumann: The Interaction of Mathematics and Computing, Stan Ulam 1976 talk (video)

 

Von Neumann: The Interaction of Mathematics and Computing, by Stan Ulam. 

See A History of Computing in the Twentieth Century, edited by N. Metropolis, J. Howlett, and Gian-Carlo Rota.
 
More videos from the conference here. (Konrad Zuse!)

See the segment at about 50 minutes for an interesting story about von Neumann's role in the implosion mechanism for atomic bombs. vN apparently solved the geometrical problem for the shape of the explosive lens overnight, after hearing a seminar on the topic. Still classified?
To solve this problem, the Los Alamos team planned to produce an “explosive lens”, a combination of different explosives with different shock wave speeds. When molded into the proper shape and dimensions, the high-speed and low-speed shock waves would combine with each other to produce a uniform concave pressure wave with no gaps. This inwardly-moving concave wave, when it reached the plutonium sphere at the center of the design, would instantly squeeze the metal to at least twice the density, producing a compressed ball of plutonium that contained about 5 times the necessary critical mass. A nuclear explosion would then result.
More here.

Friday, December 03, 2021

Adventures of a Mathematician: Ulam, von Karman, Wiener, and the Golem

 

Ulam's Adventures of a Mathematician was recently made into a motion picture -- see trailer above. 

I have an old copy purchased from the Caltech bookstore. When I flip through the book, it never fails to reward me with a wonderful anecdote from an era of giants.
[Ulam] ... In Israel many years later, while I was visiting the town of Safed with von Kármán, an old Orthodox Jewish guide with earlocks showed me the tomb of Caro in an old graveyard. When I told him that I was related to a Caro, he fell on his knees... Aunt Caro was directly related to the famous Rabbi Loew of sixteenth-century Prague, who, the legend says, made the Golem — the earthen giant who was protector of the Jews. (Once, when I mentioned this connection with the Golem to Norbert Wiener, he said, alluding to my involvement with Los Alamos and with the H-bomb, "It is still in the family!")



See also von Neumann: "If only people could keep pace with what they create"
One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari: 
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country. The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..." 
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills. 
In his obituary for John von Neumann, Ulam recalled a conversation with vN about the 
"... ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." 
This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)
More Ulam from this blog, including:
[p.107] I told Banach about an expression Johnny had used with me in Princeton before stating some non-Jewish mathematician's result, "Die Goim haben den folgenden satz beweisen" (The goys have proved the following theorem). Banach, who was pure goy, thought it was one of the funniest sayings he had ever heard. He was enchanted by its implication that if the goys could do it, then Johnny and I ought to be able to do it better. Johnny did not invent this joke, but he liked it and we started using it.

Thursday, June 03, 2021

Macroscopic Superpositions in Isolated Systems (talk video + slides)

 

This is video of a talk based on the paper
Macroscopic Superpositions in Isolated Systems 
R. Buniy and S. Hsu 
arXiv:2011.11661, to appear in Foundations of Physics 
For any choice of initial state and weak assumptions about the Hamiltonian, large isolated quantum systems undergoing Schrodinger evolution spend most of their time in macroscopic superposition states. The result follows from von Neumann's 1929 Quantum Ergodic Theorem. As a specific example, we consider a box containing a solid ball and some gas molecules. Regardless of the initial state, the system will evolve into a quantum superposition of states with the ball in macroscopically different positions. Thus, despite their seeming fragility, macroscopic superposition states are ubiquitous consequences of quantum evolution. We discuss the connection to many worlds quantum mechanics.
Slides for the talk.

See this earlier post about the paper:
It may come as a surprise to many physicists that Schrodinger evolution in large isolated quantum systems leads generically to macroscopic superposition states. For example, in the familiar Brownian motion setup of a ball interacting with a gas of particles, after sufficient time the system evolves into a superposition state with the ball in macroscopically different locations. We use von Neumann's 1929 Quantum Ergodic Theorem as a tool to deduce this dynamical result. 

The natural state of a complex quantum system is a superposition ("Schrodinger cat state"!), absent mysterious wavefunction collapse, which has yet to be fully defined either in logical terms or explicit dynamics. Indeed wavefunction collapse may not be necessary to explain the phenomenology of quantum mechanics. This is the underappreciated meaning of work on decoherence dating back to Zeh and Everett. See talk slides linked here, or the introduction of this paper.

We also derive some new (sharper) concentration of measure bounds that can be applied to small systems (e.g., fewer than 10 qubits). 
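For the curious, here is a minimal numerical sketch of this concentration effect (my own toy demo in numpy, not the bounds derived in the paper; the qubit counts and the choice of the maximally mixed state as the "thermal" reference are purely for illustration): draw a Haar-random pure state on a handful of qubits, trace out most of them, and check how close the reduced density matrix of the rest is to maximally mixed.

```python
import numpy as np

def random_pure_state(n_qubits, rng):
    """Haar-random pure state on n_qubits qubits (complex Gaussian vector, normalized)."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def reduced_density_matrix(psi, n_keep, n_total):
    """rho_A = Tr_B |psi><psi|, keeping the first n_keep qubits."""
    d_a, d_b = 2 ** n_keep, 2 ** (n_total - n_keep)
    m = psi.reshape(d_a, d_b)        # |psi> = sum_{i,j} m[i,j] |i>_A |j>_B
    return m @ m.conj().T

def trace_distance(rho, sigma):
    """Half the sum of singular values of (rho - sigma)."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

rng = np.random.default_rng(0)
n_total, n_keep = 10, 2              # 10 qubits total, keep a 2-qubit subsystem
psi = random_pure_state(n_total, rng)
rho_a = reduced_density_matrix(psi, n_keep, n_total)
maximally_mixed = np.eye(2 ** n_keep) / 2 ** n_keep

print("trace distance to maximally mixed:", trace_distance(rho_a, maximally_mixed))
# A few percent for these sizes; it shrinks as the traced-out "bath" grows.
```

Increasing the number of traced-out qubits drives the trace distance toward zero, which is the concentration-of-measure statement in miniature.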

Related posts:




Friday, March 26, 2021

John von Neumann, 1966 Documentary

 

This 1966 documentary on von Neumann was produced by the Mathematical Association of America. It includes interviews with Wigner, Ulam, Halmos, Goldstine, and others. 

At ~34m Bethe (leader of the Los Alamos theory division) gives primary credit to vN for the implosion method in fission bombs. While vN's previous work on shock waves and explosive lenses is often acknowledged as important for solving the implosion problem, this is the first time I have seen him given credit for the idea itself. Seth Neddermeyer's Enrico Fermi Award citation gives him credit for "invention of the implosion technique" and the original solid core design was referred to as the "Christy gadget" after Robert Christy. As usual, history is much more complicated than the simplified narrative that becomes conventional.
Teller: He could and did talk to my three-year-old son on his own terms and I sometimes wondered whether his relations to the rest of us were a little bit similar.
A recent application of vN's Quantum Ergodic Theorem: Macroscopic Superposition States in Isolated Quantum Systems.

Cloning vN (science fiction): short story, longer (AI vs genetic engineering).

Saturday, May 09, 2020

Pure State Quantum Thermalization: from von Neumann to the Lab


Perhaps the most fundamental question in thermodynamics and statistical mechanics is: Why do systems tend to evolve toward thermal equilibrium? Equivalently, why does entropy tend to increase? Because Nature is quantum mechanical, a satisfactory answer to this question has to arise within quantum mechanics itself. The answer was given already in a 1929 paper by von Neumann. However, the ideas were not absorbed (were in fact misunderstood) by the physics community and only rediscovered in the 21st century! General awareness of these results is still rather limited.

See this 2011 post: Classics on the arxiv: von Neumann and the foundations of quantum statistical mechanics.

In modern language, we would say something to the effect that "typical" quantum pure states are highly entangled, and the density matrix describing any small sub-system (obtained by tracing over the rest of the pure state) is very close to micro-canonical (i.e., thermal). Under dynamical (Schrodinger) evolution, all systems (even those that are initially far from typical) spend nearly all of their time in a typical state (modulo some weak conditions on the Hamiltonian). Typicality of states is related to concentration of measure in high dimensional Hilbert spaces. One could even claim that the origin of thermodynamics lies in the geometry of Hilbert space itself.
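One way to make "very close to micro-canonical" quantitative (my gloss, using Page's standard average-entropy estimate rather than anything specific to vN's 1929 paper): for a Haar-random pure state on $\mathcal{H}_A \otimes \mathcal{H}_B$,

$$\langle S_A \rangle \;\simeq\; \ln m - \frac{m}{2n}, \qquad m = \dim \mathcal{H}_A \le n = \dim \mathcal{H}_B,$$

so the entanglement entropy of the small subsystem falls short of its maximal value $\ln m$ only by $m/2n$, which is tiny when $B$ has many more degrees of freedom than $A$; restricting the global state to an energy shell replaces "maximally mixed" by the micro-canonical ensemble on that shell.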

[ It's worth noting that vN's paper does more than just demonstrate these results. It also gives an explicit construction of macroscopic classical (commuting) observables arising in a large Hilbert space. This construction would be a nice thing to include in textbooks for students trying to connect the classical and quantum worlds. ]

Recently I came across an experimental realization of these theoretical results, using cold atoms in an optical lattice (Greiner lab at Harvard):
Quantum thermalization through entanglement in an isolated many-body system

Science 353, 794-800 (2016)    arXiv:1603.04409v3

The concept of entropy is fundamental to thermalization, yet appears at odds with basic principles in quantum mechanics. Statistical mechanics relies on the maximization of entropy for a system at thermal equilibrium. However, an isolated many-body system initialized in a pure state will remain pure during Schrodinger evolution, and in this sense has static, zero entropy. The underlying role of quantum mechanics in many-body physics is then seemingly antithetical to the success of statistical mechanics in a large variety of systems. Here we experimentally study the emergence of statistical mechanics in a quantum state, and observe the fundamental role of quantum entanglement in facilitating this emergence. We perform microscopy on an evolving quantum system, and we see thermalization occur on a local scale, while we measure that the full quantum state remains pure. We directly measure entanglement entropy and observe how it assumes the role of the thermal entropy in thermalization. Although the full state remains measurably pure, entanglement creates local entropy that validates the use of statistical physics for local observables. In combination with number-resolved, single-site imaging, we demonstrate how our measurements of a pure quantum state agree with the Eigenstate Thermalization Hypothesis and thermal ensembles in the presence of a near-volume law in the entanglement entropy.
Note, given the original vN results I think the Eigenstate Thermalization Hypothesis is only of limited interest. [ But see comments for more discussion... ] The point is that this is a laboratory demonstration of pure state thermalization, anticipated in 1929 by vN.

Another aspect of quantum thermalization that is still not very well appreciated is that approach to equilibrium can have a very different character than what students are taught in statistical mechanics. The physical picture behind the Boltzmann equation is semi-classical: collisions between atoms happen in serial as two gases equilibrate. But Schrodinger evolution of the pure state (all the degrees of freedom together) toward typicality can take advantage of quantum parallelism: all possible collisions take place on different parts of the quantum superposition state. Consequently, the timescale for quantum thermalization can be much shorter than in the semi-classical Boltzmann description.

In 2015 my postdoc C.M. Ho (now director of an AI lab in Silicon Valley) and I pointed out that quantum thermalization was likely already realized in heavy ion collisions at RHIC and CERN, and that the quantum nature of the process was responsible for the surprisingly short time required to approach equilibrium (equivalently, to generate large amounts of entanglement entropy).

Entanglement and fast thermalization in heavy ion collisions (see also slides here).


Entanglement and Fast Quantum Thermalization in Heavy Ion Collisions (arXiv:1506.03696)

Chiu Man Ho, Stephen D. H. Hsu

Let A be a subsystem of a larger system A∪B, and ψ be a typical state from the subspace of the Hilbert space H_AB satisfying an energy constraint. Then ρ_A(ψ)=Tr_B |ψ⟩⟨ψ| is nearly thermal. We discuss how this observation is related to fast thermalization of the central region (≈A) in heavy ion collisions, where B represents other degrees of freedom (soft modes, hard jets, co-linear particles) outside of A. Entanglement between the modes in A and B plays a central role; the entanglement entropy S_A increases rapidly in the collision. In gauge-gravity duality, S_A is related to the area of extremal surfaces in the bulk, which can be studied using gravitational duals.



An earlier blog post Ulam on physical intuition and visualization mentioned the difference between intuition for familiar semiclassical (incoherent) particle phenomena, versus for intrinsically quantum mechanical (coherent) phenomena such as the spread of entanglement and its relation to thermalization.
[Ulam:] ... Most of the physics at Los Alamos could be reduced to the study of assemblies of particles interacting with each other, hitting each other, scattering, sometimes giving rise to new particles. Strangely enough, the actual working problems did not involve much of the mathematical apparatus of quantum theory although it lay at the base of the phenomena, but rather dynamics of a more classical kind—kinematics, statistical mechanics, large-scale motion problems, hydrodynamics, behavior of radiation, and the like. In fact, compared to quantum theory the project work was like applied mathematics as compared with abstract mathematics. If one is good at solving differential equations or using asymptotic series, one need not necessarily know the foundations of function space language. It is needed for a more fundamental understanding, of course. In the same way, quantum theory is necessary in many instances to explain the data and to explain the values of cross sections. But it was not crucial, once one understood the ideas and then the facts of events involving neutrons reacting with other nuclei.
This "dynamics of a more classical kind" did not require intuition for entanglement or high dimensional Hilbert spaces. But see von Neumann and the foundations of quantum statistical mechanics for examples of the latter.

Friday, April 17, 2020

The von Neumann-Fuchs bomb, and the radiation compression mechanism of Ulam-Teller-Sakharov


Some useful references below on the Ulam-Teller mechanism, Sakharov's Third Idea, and the von Neumann-Fuchs thermonuclear design of 1946. They resolve a mystery discussed previously on this blog:
Sakharov's Third Idea: ... If Zeldovich was already familiar with radiation pressure as the tool for compression, via the Fuchs report of 1948, then perhaps one cannot really credit Teller so much for adding this ingredient to Ulam's idea of a staged device using a fission bomb to compress the thermonuclear fuel. Fuchs and von Neumann had already proposed (and patented!) radiation implosion years before. More here.
It turns out that the compression mechanism used in the von Neumann-Fuchs design (vN is the first author on the patent application; the design was realized in the Operation Greenhouse George nuclear test of 1951) is not that of Ulam-Teller or Sakharov. In vN-F the D-T mixture reaches thermal equilibrium with ionized BeO gas, leading to a pressure increase of ~10x. This is not the "cold compression" via focused radiation pressure used in the U-T / Sakharov designs. That was, apparently,  conceived independently by Ulam-Teller and Sakharov.

It is only recently that the vN-F design has become public -- first obtained by the Soviets via espionage (Fuchs), and finally declassified and published by the Russians! It seems that Zeldovich had access to this information, but not Sakharov.

American and Soviet H-bomb development programmes: historical background by G. Goncharov.

John von Neumann and Klaus Fuchs: an Unlikely Collaboration by Jeremy Bernstein. See also here for some clarifying commentary.


A great anecdote:
Jeremy Bernstein: When I was an undergraduate at Harvard he [vN] came to the university to give lectures on the computer and the brain. They were the best lectures I have ever heard on anything — like mental champagne. After one of them I found myself walking in Harvard Square and looked up to see von Neumann. Thinking, correctly as it happened, that it would be the only chance I would have to ask him a question, I asked, ‘‘Professor von Neumann, will the computer ever replace the human mathematician?’’ He studied me and then responded, ‘‘Sonny, don’t worry about it.’’


Note added from comments: I hope this clarifies things a bit.
The question of how the Soviets got to the U-T mechanism is especially mysterious. Sakharov himself (ostensibly the Soviet inventor) was puzzled until the end about what had really happened! He did not have access to the vN-F design that has been made public from the Russian side (~2000, after Sakharov's death in 1989; still classified in US). Zeldovich and only a few others had seen the Fuchs information, at a time when the main focus of the Russian program was not the H bomb. Sakharov could never be sure whether his suggestion for cold radiation compression sparked Zeldovich's interest because the latter *had seen the idea before* without fully comprehending it. Sakharov wondered about this until the end of his life (see below), but I think his surmise was not correct: we know now that vN-F did *not* come up with that idea in their 1946 design. I've been puzzled about this question myself for some time. IF the vN-F design had used radiation pressure for cold compression, why did Teller get so much credit for replacing neutrons with radiation pressure in Ulam's staged design (1951)? I stumbled across the (now public) vN-F design by accident just recently -- I was reading some biographical stuff about Zeldovich which touched upon these issues.

https://infoproc.blogspot.com/2012/10/sakharovs-third-idea.html

Consider the following words in Sakharov’s memoirs, with a note he added toward the end of his life:

Now I think that the main idea of the H-bomb design developed by the Zeldovich group was based on intelligence information. However, I can’t prove this conjecture. It occurred to me quite recently, but at the time I just gave it no thought. (Note added July 1987. David Holloway writes in “Soviet Thermonuclear Development,” International Security 4:3 (1979/80), p. 193: “The Soviet Union had been informed by Klaus Fuchs of the studies of thermonuclear weapons at Los Alamos up to 1946. … His information would have been misleading rather than helpful, because the early ideas were later shown not to work.” Therefore my conjecture is confirmed!)
Another useful resource: Gennady Gorelik (BU science historian): The Paternity of the H-Bombs: Soviet-American Perspectives
Teller, 1952, August (re Bethe’s Memorandum): The main principle of radiation implosion was developed in connection with the thermonuclear program and was stated at a conference on the thermonuclear bomb, in the spring of 1946. Dr. Bethe did not attend this conference, but Dr. Fuchs did. [ Original development by vN? ]

It is difficult to argue to what extent an invention is accidental: most difficult for someone who did not make the invention himself. It appears to me that the idea was a relatively slight modification of ideas generally known in 1946. Essentially only two elements had to be added: to implode a bigger volume, and, to achieve greater compression by keeping the imploded material cool as long as possible.
The last part ("cool as long as possible") refers to the fundamental difference between the vN-Fuchs design and the U-T mechanism of cold radiation compression. The former assumes thermal equilibrium between ionized gas and radiation, while the latter deliberately avoids it as long as possible.

Official Soviet History: On the making of the Soviet hydrogen (thermonuclear) bomb, Yu B Khariton et al 1996 Phys.-Usp. 39 185. Some details on the origin of the compression idea, followed by the use of radiation pressure (Zeldovich and Sakharov).

Monday, June 24, 2019

Ulam on von Neumann, Godel, and Einstein


Ulam expresses so much in a few sentences! From his memoir, Adventures of a Mathematician. Above: Einstein and Godel. Bottom: von Neumann, Feynman, Ulam.
When it came to other scientists, the person for whom he [vN] had a deep admiration was Kurt Gödel. This was mingled with a feeling of disappointment at not having himself thought of "undecidability." For years Gödel was not a professor at Princeton, merely a visiting fellow, I think it was called. Apparently there was someone on the faculty who was against him and managed to prevent his promotion to a professorship. Johnny would say to me, "How can any of us be called professor when Gödel is not?" ...

As for Gödel, he valued Johnny very highly and was much interested in his views. I believe knowing the importance of his own discovery did not prevent Gödel from a gnawing uncertainty that maybe all he had discovered was another paradox à la Burali-Forti or Russell. But it is much, much more. It is a revolutionary discovery which changed both the philosophical and the technical aspects of mathematics.

When we talked about Einstein, Johnny would express the usual admiration for his epochal discoveries which had come to him so effortlessly, for the improbable luck of his formulations, and for his four papers on relativity, on the Brownian motion, and on the photo-electric quantum effect. How implausible it is that the velocity of light should be the same emanating from a moving object, whether it is coming toward you or whether it is receding. But his admiration seemed mixed with some reservations, as if he thought, "Well, here he is, so very great," yet knowing his limitations. He was surprised at Einstein's attitude in his debates with Niels Bohr—at his qualms about quantum theory in general. My own feeling has always been that the last word has not been said and that a new "super quantum theory" might reconcile the different premises.

Friday, September 01, 2017

Lax on vN: "He understood in an instant"



Mathematician Peter Lax (awarded National Medal of Science, Wolf and Abel prizes), interviewed about his work on the Manhattan Project. His comments on von Neumann and Feynman:
Lax: ... Von Neumann was very deeply involved in Los Alamos. He realized that computers would be needed to carry out the calculations needed. So that was, I think, his initial impulse in developing computers. Of course, he realized that computing would be important for every highly technical project, not just atomic energy. He was the most remarkable man. I’m always utterly surprised that his name is not common, household.

It is a name that should be known to every American—in fact, every person in the world, just as the name of Einstein is. I am always utterly surprised how come he’s almost totally unknown. ... All people who had met him and interacted with him realized that his brain was more powerful than anyone’s they have ever encountered. I remember Hans Bethe even said, only half in jest, that von Neumann’s brain was a new development of the human brain. Only a slight exaggeration.

... People today have a hard time to imagine how brilliant von Neumann was. If you talked to him, after three words, he took over. He understood in an instant what the problem was and had ideas. Everybody wanted to talk to him.

...

Kelly: I think another person that you mention is Richard Feynman?

Lax: Yes, yes, he was perhaps the most brilliant of the people there. He was also somewhat eccentric. He played the bongo drums. But everybody admired his brilliance. [ vN was a consultant and only visited Los Alamos occasionally. ]
Full transcript. See also Another species, an evolution beyond man.

Tuesday, July 04, 2017

Building the Gadget: A Technical History of the Atomic Bomb


This is the best technical summary of the Los Alamos component of the Manhattan Project that I know of. It includes, for example, detail about the hydrodynamical issues that had to be overcome for successful implosion. That work drew heavily on von Neumann's expertise in shock waves, explosives, numerical solution of hydrodynamic partial differential equations, etc. A visit by G.I. Taylor alerted the designers to the possibility of instabilities in the shock front (Rayleigh–Taylor instability). Concern over these instabilities led to the solid-core design known as the Christy Gadget.
Critical Assembly: A Technical History of Los Alamos during the Oppenheimer Years, 1943-1945

... Unlike earlier histories of Los Alamos, this book treats in detail the research and development that led to the implosion and gun weapons; the research in nuclear physics, chemistry, and metallurgy that enabled scientists to design these weapons; and the conception of the thermonuclear bomb, the "Super." Although fascinating in its own right, this story has particular interest because of its impact on subsequent developments. Although many books examine the implications of Los Alamos for the development of a nuclear weapons culture, this is the first to study its role in the rise of the methodology of "big science" as carried out in large national laboratories.

... The principal reason that the technical history of Los Alamos has not yet been written is that even today, after half a century, much of the original documentation remains classified. With cooperation from the Los Alamos Laboratory, we received authorization to examine all the relevant documentation. The book then underwent a classification review that resulted in the removal from this edition of all textual material judged sensitive by the Department of Energy and all references to classified documents. (For this reason, a number of quotations appear without attribution.) However, the authorities removed little information. Thus, except for a small number of technical facts, this account represents the complete story. In every instance the deleted information was strictly technical; in no way has the Los Alamos Laboratory or the Department of Energy attempted to shape our interpretations. This is not, therefore, a "company history"; throughout the research and writing, we enjoyed intellectual freedom.

... Scientific research was an essential component of the new approach: the first atomic bombs could not have been built by engineers alone, for in no sense was developing these bombs an ordinary engineering task. Many gaps existed in the scientific knowledge needed to complete the bombs. Initially, no one knew whether an atomic weapon could be made. Furthermore, the necessary technology extended well beyond the "state of the art." Solving the technical problems required a heavy investment in basic research by top-level scientists trained to explore the unknown - scientists like Hans Bethe, Richard Feynman, Rudolf Peierls, Edward Teller, John von Neumann, Luis Alvarez, and George Kistiakowsky. To penetrate the scientific phenomena required a deep understanding of nuclear physics, chemistry, explosives, and hydrodynamics. Both theoreticians and experimentalists had to push their scientific tools far beyond their usual capabilities. For example, methods had to be developed to carry out numerical hydrodynamics calculations on a scale never before attempted, and experimentalists had to expand the sensitivity of their detectors into qualitatively new regimes.

... American physics continued to prosper throughout the 1920s and 1930s, despite the Depression. Advances in quantum theory stimulated interest in the microscopic structure of matter, and in 1923 Robert Millikan of Caltech was awarded the Nobel Prize for his work on electrons. In the 1930s and 1940s, Oppenheimer taught quantum theory to large numbers of students at the Berkeley campus of the University of California as well as at Caltech. Also at Berkeley in the 1930s and 1940s, the entrepreneurial Lawrence gathered chemists, engineers, and physicists together in a laboratory where he built a series of ever-larger cyclotrons and led numerous projects in nuclear chemistry, nuclear physics, and medicine. By bringing together specialists from different fields to work cooperatively on large common projects, Lawrence helped to create a distinctly American collaborative research endeavor - centered on teams, as in the industrial research laboratories, but oriented toward basic studies without immediate application. This approach flourished during World War II.

Thursday, May 25, 2017

Von Neumann, in his head


From Robert Jungk's Brighter than a Thousand Suns: A Personal History of the Atomic Scientists.

The H-bomb project:
... Immediately after the White House directive the Theoretical Division at Los Alamos had started calculations for the new bomb.

... There was a meeting in Teller's office with Fermi, von Neumann, and Feynman ... Many ideas were thrown back and forth and every few minutes Fermi or Teller would devise a quick numerical check and then they would spring into action. Feynman on the desk calculator, Fermi with the little slide rule he always had with him, and von Neumann, in his head. The head was usually first, and it is remarkable how close the three answers always checked.
The MANIAC:
... When von Neumann released his last invention for use, it aroused the admiration of all who worked with it. Carson Mark, head of the Theoretical Division at Los Alamos, recollects that 'a problem which would have otherwise kept three people busy for three months could be solved by the aid of this computer, worked by the same three people, in about ten hours. The physicist who had set the task, instead of having to wait for a quarter of a year before he could get on, received the data he required for his further work the same evening.' A whole series of such three months' calculations, narrowed down to a single working day, were needed for the production of the hydrogen bomb.

It was a calculating machine, therefore, which was the real hero of the work on the construction of the bomb. It had a name of its own, like all the other electronic brains. Von Neumann had always been fond of puns and practical jokes. When he introduced his machine to the Atomic Energy Commission under the high-sounding name of 'Mathematical Analyser, Numerical Integrator and Computer', no one noticed anything odd about this designation except that it was rather too ceremonious for everyday use. It was not until the initial letters of the six words were run together that those who used the miraculous new machine realized that the abbreviation spelled 'maniac'.

Friday, April 21, 2017

Von Neumann and Realpolitik

"Right, as the world goes, is only in question between equals in power. The strong do what they will and the weak suffer what they must." -- Thucydides, Melian Dialogue.

 

Von Neumann, Feynman, and Ulam.
Adventures of a Mathematician (Ulam): ... Once at Christmas time in 1937, we drove from Princeton to Duke University to a meeting of the American Mathematical Society. ...

As we passed the battlefields of the Civil War, Johnny recounted the smallest details of the battles. His knowledge of history was really encyclopedic, but what he liked and knew best was ancient history. He was a great admirer of the concise and wonderful way the Greek historians wrote. His knowledge of Greek enabled him to read Thucydides, Herodotus, and others in the original; his knowledge of Latin was even better.

The story of the Athenian expedition to the island of Melos, the atrocities and killings that followed, and the lengthy debates between the opposing parties fascinated him for reasons which I never quite understood. He seemed to take a perverse pleasure in the brutality of a civilized people like the ancient Greeks. For him, I think it threw a certain not-too-complimentary light on human nature in general. Perhaps he thought it illustrated the fact that once embarked on a certain course, it is fated that ambition and pride will prevent a people from swerving from a chosen course and inexorably it may lead to awful ends, as in the Greek tragedies. Needless to say this prophetically anticipated the vaster and more terrible madness of the Nazis. Johnny was very much aware of the worsening political situation. In a Pythian manner, he foresaw the coming catastrophe. ...

... I will never forget the scene a few days before he died. I was reading to him in Greek, from his worn copy of Thucydides, a story he liked especially about the Athenians' attack on Melos, and also the speech of Pericles. He remembered enough to correct an occasional mistake or mispronunciation on my part.

... there was nothing small about his interests ... He was unique in this respect. Unique, too, were his overall intelligence, breadth of interest, and absolute feeling for the difference between the momentary technical work and the great lines of the life of the mathematical tree itself and its role in human thought.

Thursday, February 16, 2017

Management by the Unusually Competent



How did we get ICBMs? How did we get to the moon? What are systems engineering and systems management? Why do some large organizations make rapid progress, while others spin their wheels for decades at a time? Dominic Cummings addresses these questions in his latest essay.

Photo above of Schriever and Ramo. More Dom.
... In 1953, a relatively lowly US military officer Bernie Schriever heard von Neumann sketch how by 1960 the United States would be able to build a hydrogen bomb weighing less than a ton and exploding with the force of a megaton, about 80 times more powerful than Hiroshima. Schriever made an appointment to see von Neumann at the IAS in Princeton on 8 May 1953. As he waited in reception, he saw Einstein potter past. He talked for hours with von Neumann who convinced him that the hydrogen bomb would be progressively shrunk until it could fit on a missile. Schriever told Gardner about the discussion and 12 days later Gardner went to Princeton and had the same conversation with von Neumann. Gardner fixed the bureaucracy and created the Strategic Missiles Evaluation Committee. He persuaded von Neumann to chair it and it became known as ‘the Teapot committee’ or ‘the von Neumann committee’. The newly formed Ramo-Wooldridge company, which became Thompson-Ramo-Wooldridge (I’ll refer to it as TRW), was hired as the secretariat.

The Committee concluded (February 1954) that it would be possible to produce intercontinental ballistic missiles (ICBMs) by 1960 and deploy enough to deter the Soviets by 1962, that there should be a major crash programme to develop them, and that there was an urgent need for a new type of agency with a different management approach to control the project. Although intelligence was thin and patchy, von Neumann confidently predicted on technical and political grounds that the Soviet Union would engage in the same race. It was discovered years later that the race had already been underway partly driven by successful KGB operations. Von Neumann’s work on computer-aided air defence systems also meant he was aware of the possibilities for the Soviets to build effective defences against US bombers.

‘The nature of the task for this new agency requires that over-all technical direction be in the hands of an unusually competent group of scientists and engineers capable of making systems analyses, supervising the research phases, and completely controlling experimental and hardware phases of the program… It is clear that the operation of this new group must be relieved of excessive detailed regulation by existing government agencies.’ (vN Committee, emphasis added.)

A new committee, the ICBM Scientific Advisory Committee, was created and chaired by von Neumann so that eminent scientists could remain involved. One of the driving military characters, General Schriever, realised that people like von Neumann were an extremely unusual asset. He said later that ‘I became really a disciple of the scientists… I felt strongly that the scientists had a broader view and had more capabilities.’ Schriever moved to California and started setting up the new operation but had to deal with huge amounts of internal politics as the bureaucracy naturally resisted new ideas. The Defense Secretary, Wilson, himself opposed making ICBMs a crash priority.

... Almost everybody hated the arrangement. Even the Secretary of the Air Force (Talbott) tried to overrule Schriever and Ramo. It displaced the normal ‘prime contractor’ system in which one company, often an established airplane manufacturer, would direct the whole programme. Established businesses were naturally hostile. Traditional airplane manufacturers were run very much on Taylor’s principles with rigid routines. TRW employed top engineers who would not be organised on Taylor’s principles. Ramo, also a virtuoso violinist, had learned at Caltech the value of a firm grounding in physics and an interdisciplinary approach in engineering. He and his partner Wooldridge had developed their ideas on systems engineering before starting their own company. The approach was vindicated quickly when TRW showed how to make the proposed Atlas missile much smaller and simpler, and therefore cheaper and faster to develop.

... According to Johnson, almost all the proponents of systems engineering had connections with either Caltech (where von Karman taught and JPL was born) or MIT (which was involved with the Radiation Lab and other military projects during World War 2). Bell Labs, which did R&D for AT&T, was also a very influential centre of thinking. The Jet Propulsion Laboratory (JPL) managed by Caltech also, under the pressure of repeated failure, independently developed systems management and configuration control. They became technical leaders in space vehicles. NASA, however, did not initially learn from JPL.

... Philip Morse, an MIT physicist who headed the Pentagon’s Weapons Systems Evaluation Group after the war, reflected on this resistance:
‘Administrators in general, even the high brass, have resigned themselves to letting the physical scientist putter around with odd ideas and carry out impractical experiments, as long as things experimented with are solutions or alloys or neutrons or cosmic rays. But when one or more start prying into the workings of his own smoothly running organization, asking him and others embarrassing questions not related to the problems he wants them to solve, then there’s hell to pay.’ (Morse, ‘Operations Research, What is It?’, Proceedings of the First Seminar in Operations Research, November 8–10, 1951.)



The Secret of Apollo: Systems Management in American and European Space Programs, Stephen B. Johnson.

Friday, November 25, 2016

Von Neumann: "If only people could keep pace with what they create"

I recently came across this anecdote in Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900-1960.

One night in early 1945, just back from Los Alamos, vN woke in a state of alarm in the middle of the night and told his wife Klari:
"... we are creating ... a monster whose influence is going to change history ... this is only the beginning! The energy source which is now being made available will make scientists the most hated and most wanted citizens in any country.

The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create ..."
He then predicted the future indispensable role of automation, becoming so agitated that he had to be put to sleep by a strong drink and sleeping pills.

In his obituary for John von Neumann, Ulam recalled a conversation with vN about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This is the origin of the concept of technological singularity. Perhaps we can even trace it to that night in 1945 :-)

How will humans keep pace? See Super-Intelligent Humans are Coming and Don't Worry, Smart Machines Will Take Us With Them.

Friday, August 12, 2016

Greg Cochran on James Miller's Future Strategist podcast



James Miller interviews Greg Cochran on a variety of topics.

Some comments on the early part of the interview (you might need to listen to it to make sense of what I write below):

1. The prediction I've made about the consequences of additive genetic variance in intelligence is not that we'll be able to realize +30 SDs of cognitive ability. That would only be true if we could ignore pleiotropy, nonlinear corrections to the additive approximation, etc. What I claim is that because there are +30 SDs up for grabs in the first order approximation, it seems likely that at least a chunk of this will be realizable, leading to geniuses beyond those that have existed so far in human history (this is the actual claim). To doubt this conclusion one would have to argue that even, say, +8 or +10 SDs out of 30 are unrealizable, which is hard to believe since we have examples of healthy and robust individuals who are in the +6 or +7 range. (These numbers are poorly defined since the normal distribution fails to apply in the tails.)

### I could make further, more technical, arguments that originate from the fact that the genomic space is very high dimensional. These suggest that, given healthy/robust examples at +X, it is very unlikely that there is NO path in the high dimensional space to a phenotype value greater than X while holding "robustness" relatively fixed. ###

2. Greg comments on whether super smart people can have "normal" personalities. This is obviously not necessary for them to be viable contributors to civilization (and even less of an issue in a future civilization where everyone is quite a bit smarter on average). He posits that von Neumann might have been radically strange, but able to emulate an ordinary person when necessary. (The joke is that he was actually a Martian pretending to be human.) My impression from reading Ulam's autobiography, Adventures of a Mathematician (see also here), is that von Neumann was actually not that strange by the standards of mathematicians -- he was sociable, had a good sense of humor, enjoyed interactions with others and with his family. He and Ulam were close and spent a lot of time together. I suspect Ulam's portrait of vN is reasonably accurate.

3. The University of Chicago conference on genetics and behavior Greg mentions, which was hosted in James Heckman's institute, is described here, here, and here (videos).


### A masochist in the comments asked for the actual argument, so here it is: ###
Here's a simple example which I think conveys the basic idea.

Suppose you have 10k variants and that individuals with 5.5k or more + variants are at the limit of cognitive ability yet seen in history (i.e., at the one in a million or billion or whatever level). Now suppose that each of the 10k + variants comes with some deleterious effect on some other trait(s) like general health, mental stability, etc. (This is actually too pessimistic -- some will actually come with positive effects!) These deleterious effects are not uniform over the 10k variants -- for some fixed number of + variants (i.e., 5.5k) there are many different individuals with different levels of overall health/robustness.

Let the number of distinct genotypes that lead to (nearly) "maximal historical" cognitive ability be n = (number of ways to distribute 5.5k +'s over 10k variants); this is a huge number. Now, we know of many actual examples of historical geniuses who were relatively healthy and robust. The probability that these specific individuals achieved the *minimum* level of negative or deleterious effects over all n possibilities is vanishingly small. But that means that there are genotypes with *more* than 5.5k + variants at the same level of general robustness. These correspond to individuals who are healthy/robust but have greater cognitive ability than any historical genius.

You can make this argument fully realistic by dropping the assumption that + effect sizes on cognitive ability are uniform, that effects on different traits are completely additive, etc. The point is that there are so many genotypes that realize [cognitive ability ~ historical max], that the ones produced so far are unlikely to maximize overall health/robustness given that constraint. But that means there are other genotypes (off the surface of constraint) with even higher cognitive ability, yet still healthy and robust.
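A toy numerical rendering of this counting argument (every number and the random "side-effect burden" below is invented purely for illustration; nothing is fit to real genomic data): count the genotypes at fixed '+' number, then compare the burden of a randomly realized genotype against the minimum achievable burden at that '+' count.

```python
import math
import numpy as np

n_variants, n_plus = 10_000, 5_500

# n = number of distinct genotypes with exactly 5.5k '+' variants at 10k sites.
log10_n = (math.lgamma(n_variants + 1) - math.lgamma(n_plus + 1)
           - math.lgamma(n_variants - n_plus + 1)) / math.log(10)
print(f"number of such genotypes: ~10^{log10_n:.0f}")     # roughly 10^3000

# Assign each '+' variant an invented side-effect "burden" and compare random
# genotypes at fixed '+' count against the best achievable burden.
rng = np.random.default_rng(1)
burden = rng.exponential(scale=1.0, size=n_variants)

random_totals = [burden[rng.choice(n_variants, size=n_plus, replace=False)].sum()
                 for _ in range(1_000)]
best_possible = np.sort(burden)[:n_plus].sum()

print(f"typical burden of a random genotype at this '+' count: {np.mean(random_totals):.0f}")
print(f"minimum possible burden at this '+' count: {best_possible:.0f}")
# The large gap means observed genotypes are nowhere near the burden-minimizing
# ones, so genotypes with *more* '+' variants exist at the same burden level.
```

The astronomically large count is what does the work: the handful of genotypes realized in history cannot plausibly sit at the burden-minimizing corner of such a huge set.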

Thursday, April 14, 2016

The story of the Monte Carlo Algorithm



George Dyson is Freeman's son. I believe this talk was given at SciFoo or Foo Camp.

More Ulam (neither he nor von Neumann was really a logician, at least not primarily).

Wikipedia on Monte Carlo Methods. I first learned these in Caltech's Physics 129: Mathematical Methods, which used the textbook by Mathews and Walker. This book was based on lectures taught by Feynman, emphasizing practical techniques developed at Los Alamos during the war. The students in the class were about half undergraduates and half graduate students. For example, Martin Savage was a first year graduate student that year. Martin is now a heavy user of Monte Carlo in lattice gauge theory :-)
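For readers who have never seen the method, here is the textbook version in a few lines (a generic Monte Carlo integral estimate of my own choosing, not any particular Los Alamos calculation): sample uniformly, average, and the statistical error falls off like 1/√N regardless of dimension, which is why the method became the tool of choice for neutron-transport problems.

```python
import numpy as np

def mc_integrate(f, dim, n_samples, rng):
    """Plain Monte Carlo estimate of the integral of f over the unit hypercube [0,1]^dim."""
    x = rng.random((n_samples, dim))
    values = f(x)
    return values.mean(), values.std(ddof=1) / np.sqrt(n_samples)  # estimate, ~1/sqrt(N) error

# Example: fraction of the 5-dimensional unit cube lying inside the unit ball.
inside_ball = lambda x: (np.sum(x**2, axis=1) <= 1.0).astype(float)

rng = np.random.default_rng(42)
estimate, err = mc_integrate(inside_ball, dim=5, n_samples=1_000_000, rng=rng)
print(f"Monte Carlo: {estimate:.5f} +/- {err:.5f}")
print(f"exact value: {np.pi**2 / 60:.5f}")   # pi^2/60 ~ 0.16449
```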

Saturday, November 28, 2015

The view from here


A mind of von Neumann's inexorable logic had to understand and accept much that most of us do not want to accept and do not even wish to understand. This fact colored many of von Neumann's moral judgments. 
-- Eugene Wigner, in John von Neumann (1903 - 1957), Year book of the American Philosophical Society (1958).


DEC 2015 NAS/NAM HUMAN GENE EDITING SUMMIT

In response to the recent widespread revolution in genome editing technology and the associated bioethical considerations, an Information Gathering Meeting for the Planning Committee Organizing the International Summit on Human Gene Editing was convened in Washington, DC, on Monday, October 5, 2015.

The event was part of the U.S. National Academies of Sciences, Engineering, and Medicine’s Human Gene-Editing Initiative, and it was hosted by the U.S. National Academy of Sciences (NAS), the U.S. National Academy of Medicine (NAM), the Chinese Academy of Sciences, and the Royal Society (the UK’s national academy of science).

VIDEO PRESENTATIONS



Session IV, Overview of Chinese Gene Editing Research and Policy: Duanqing Pei from The Academies on Vimeo.


Session IV, Overview of Chinese Gene Editing Research and Policy: Qi Zhou from The Academies on Vimeo.

Sunday, May 24, 2015

John Nash, dead at 86



The original title of this post was For this you won a Nobel (Memorial) Prize? But see sad news at bottom.
A Beautiful Mind: Nash went to see von Neumann a few days after he passed his generals. He wanted, he had told the secretary cockily, to discuss an idea that might be of interest to Professor von Neumann. It was a rather audacious thing for a graduate student to do. Von Neumann was a public figure, had very little contact with Princeton graduate students outside of occasional lectures, and generally discouraged them from seeking him out with their research problems. But it was typical of Nash, who had gone to see Einstein the year before with the germ of an idea.

Von Neumann was sitting at an enormous desk, looking more like a prosperous bank president than an academic in his expensive three-piece suit, silk tie, and jaunty pocket handkerchief.  He had the preoccupied air of a busy executive. At the time, he was holding a dozen consultancies, "arguing the ear off Robert Oppenheimer" over the development of the H-bomb, and overseeing the construction and programming of two prototype computers. He gestured Nash to sit down. He knew who Nash was, of course, but seemed a bit puzzled by his visit.

He listened carefully, with his head cocked slightly to one side and his fingers tapping. Nash started to describe the proof he had in mind for an equilibrium in games of more than two players. But before he had gotten out more than a few disjointed sentences, von Neumann interrupted, jumped ahead to the yet unstated conclusion of Nash's argument, and said abruptly, "That's trivial, you know. That's just a fixed point theorem." 
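For context on the "fixed point theorem" remark, here is the standard modern formulation (my summary of the textbook argument, not a reconstruction of Nash's actual note): each player $i$ picks a mixed strategy $\sigma_i$ from the simplex $\Delta(S_i)$ over their pure strategies, and one defines the best-response correspondence

$$B_i(\sigma_{-i}) \;=\; \arg\max_{\sigma_i \in \Delta(S_i)} u_i(\sigma_i, \sigma_{-i}), \qquad B(\sigma) \;=\; B_1(\sigma_{-1}) \times \cdots \times B_N(\sigma_{-N}).$$

A Nash equilibrium is exactly a fixed point $\sigma^\ast \in B(\sigma^\ast)$. The strategy space $\prod_i \Delta(S_i)$ is compact and convex, and $B$ is nonempty- and convex-valued with a closed graph, so Kakutani's fixed-point theorem guarantees such a $\sigma^\ast$ exists. In that narrow sense existence really does reduce to a fixed-point theorem, which is what von Neumann was pointing at, dismissively.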
See also What use is game theory? Compare the excerpt below about Nash's Embedding Theorem (also of interest: Theorem proving machines).
A Beautiful Mind: Nash's theorem stated that any kind of surface that embodied a special notion of smoothness can actually be embedded in Euclidean space. He showed that you could fold the manifold like a silk handkerchief, without distorting it. Nobody would have expected Nash's theorem to be true. In fact, everyone would have expected it to be false. "It showed incredible originality," said Mikhail Gromov, the geometer whose book Partial Differential Relations builds on Nash's work. He went on:
Many of us have the power to develop existing ideas. We follow paths prepared by others. But most of us could never produce anything comparable to what Nash produced. It's like lightning striking. Psychologically the barrier he broke is absolutely fantastic. He has completely changed the perspective on partial differential equations. There has been some tendency in recent decades to move from harmony to chaos. Nash says chaos is just around the corner. 
John Conway, the Princeton mathematician who discovered surreal numbers and invented the game of Life, called Nash's result "one of the most important pieces of mathematical analysis in this century."
In writing this post, I googled "a beautiful mind" to find a link to the Amazon page. I was shocked to find a news article about the death of John Nash and his wife Alicia (both are in the photo above) yesterday in a car accident! May they rest in peace.

Sunday, April 19, 2015

Ulam on physical intuition and visualization


The picture above is of von Neumann, Feynman, and Ulam. More Ulam. See also the nature of intuition and intuition and the two brains.
Adventures of a Mathematician: (p.147-148) ... the main ability to have was a visual, and also an almost tactile, way to imagine the physical situations, rather than a merely logical picture of the problems.

The feeling for problems in physics is quite different from purely theoretical mathematical thinking. It is hard to describe the kind of imagination that enables one to guess at or gauge the behavior of physical phenomena. Very few mathematicians seem to possess it to any great degree. Johnny [vN], for example, did not have to any extent the intuitive common sense and "gut" feeling or penchant for guessing what happens in given physical situations. His memory was mainly auditory, rather than visual.

Another thing that seems necessary is the knowledge of a dozen or so physical constants, not merely of their numerical value, but a real feeling for their relative orders of magnitude and interrelations, and, so to speak, an instinctive ability to "estimate."

I knew, of course, the values of constants like the velocity of light and maybe three or four other fundamental constants—the Planck constant h, a gas constant R, etc. Very soon I discovered that if one gets a feeling for no more than a dozen other radiation and nuclear constants, one can imagine the subatomic world almost tangibly, and manipulate the picture dimensionally and qualitatively, before calculating more precise relationships.

Most of the physics at Los Alamos could be reduced to the study of assemblies of particles interacting with each other, hitting each other, scattering, sometimes giving rise to new particles. Strangely enough, the actual working problems did not involve much of the mathematical apparatus of quantum theory although it lay at the base of the phenomena, but rather dynamics of a more classical kind—kinematics, statistical mechanics, large-scale motion problems, hydrodynamics, behavior of radiation, and the like. In fact, compared to quantum theory the project work was like applied mathematics as compared with abstract mathematics. If one is good at solving differential equations or using asymptotic series, one need not necessarily know the foundations of function space language. It is needed for a more fundamental understanding, of course. In the same way, quantum theory is necessary in many instances to explain the data and to explain the values of cross sections. But it was not crucial, once one understood the ideas and then the facts of events involving neutrons reacting with other nuclei.
This "dynamics of a more classical kind" did not require intuition for entanglement or high dimensional Hilbert spaces. But see von Neumann and the foundations of quantum statistical mechanics for examples of the latter.

Thursday, June 26, 2014

Theoreticians as Professional Outsiders

The book also contains essays on Schrodinger, Fisher, Pauling, George Price, and Rashevsky.
Theoreticians as Professional Outsiders: The Modeling Strategies of John von Neumann and Norbert Wiener (Ehud Lamm in Biology Outside the Box: Boundary Crossers and Innovation in Biology, Oren Harman and Michael R. Dietrich (eds.))

Both von Neumann and Wiener were outsiders to biology. Both were inspired by biology and both proposed models and generalizations that proved inspirational for biologists. Around the same time in the 1940s von Neumann developed the notion of self reproducing automata and Wiener suggested an explication of teleology using the notion of negative feedback. These efforts were similar in spirit. Both von Neumann and Wiener used mathematical ideas to attack foundational issues in biology, and the concepts they articulated had lasting effect. But there were significant differences as well. Von Neumann presented a how-possibly model, which sparked interest by mathematicians and computer scientists, while Wiener collaborated more directly with biologists, and his proposal influenced the philosophy of biology. The two cases illustrate different strategies by which mathematicians, the “professional outsiders” of science, can choose to guide their engagement with biological questions and with the biological community, and illustrate different kinds of generalizations that mathematization can contribute to biology. The different strategies employed by von Neumann and Wiener and the types of models they constructed may have affected the fate of von Neumann’s and Wiener’s ideas – as well as the reputation, in biology, of von Neumann and Wiener themselves.
For and Against theory in biology:
... E.B. Wilson articulated the reserved attitude of biologists towards uninvited theoreticians. Wilson’s remarks at the Cold Spring Harbor Symposia on Quantitative Biology in 1934 were ostensibly about the “Mathematics of Growth” but it is impossible to fail to notice their tone and true scope. Wilson suggested orienting the discussion around five axioms or “platitudes” as he called them. The first two are probably enough to get his point across. Axiom 1 states that “science need not be mathematical,” and if that’s not bad enough, axiom 2 solidifies the reserved attitude towards mathematization by stating that “simply because a subject is mathematical it need not therefore be scientific.”

... While the idea of self-reproduction seems incredible, and some might even have thought it to involve a self-contradiction, with objects creating something as complex as they are themselves, von Neumann’s solution to the problem of self-reproduction was remarkably simple. It is based on two operations: (1) constructing an object according to a list of instructions, and (2) copying a list of instructions as is ... This procedure is trivial for anyone computer-literate to understand; it was a remarkable theoretical result in 1948. What, however, does it tell us about biology? It is often observed that von Neumann’s explanation, which involves treating the genetic material both as instructions and as data that is copied as-is, is analogous to the reproduction of cells, since DNA, the analogue of the instruction list, is passively replicated. Von Neumann compared the construction instructions that direct the automaton to genes, noting that genes probably do not constitute instructions fully specifying the construction of the objects their presence stimulates. He warned that genes are probably only general pointers or cues that affect development, a warning that alas did not curtail the “genetic program” metaphor that became dominant in years to come.

Von Neumann noted that his model explained how mutations that do not affect self-replication are possible. If the instruction list specifies not only the self-replicating automaton but also an additional structure, this structure will also be replicated. ...

... As Claude Shannon put it in a 1958 review of von Neumann’s contributions to automata theory, and specifically self-reproducing automata:

If reality is copied too closely in the model we have to deal with all of the complexity of nature, much of which is not particularly relevant to the self-reproducing question. However, by simplifying too much, the structure becomes so abstract and simplified that the problem is almost trivial and the solution is un-impressive with regard to solving the philosophical point that is involved. In one place, after a lengthy discussion of the difficulties of formulating the problem satisfactorily, von Neumann remarks: "I do not want to be seriously bothered with the objection that (a) everybody knows that automata can reproduce themselves (b) everybody knows that they cannot."
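A toy rendering of the two-operation scheme described in the excerpt above (a deliberately simplified sketch in ordinary code, not von Neumann's 29-state cellular automaton; all names are invented for illustration): the "genome" is a list of build instructions, the constructor executes it, and the copier duplicates it verbatim, so the offspring is a full copy of the parent and heritable mutations come along for free.

```python
def construct(instructions):
    """Operation (1): build an object by executing a list of instructions."""
    return [f"part built from <{step}>" for step in instructions]

def copy_instructions(instructions):
    """Operation (2): copy the instruction list as-is, without interpreting it."""
    return list(instructions)

def reproduce(parent):
    """Combine (1) and (2): build a new machine from the instructions and hand it
    a fresh, uninterpreted copy of those same instructions."""
    genome = parent["instructions"]
    return {"body": construct(genome), "instructions": copy_instructions(genome)}

# The instruction list describes the constructor/copier machinery itself.
ancestor = {"instructions": ["constructor", "copier", "extra structure"], "body": None}
ancestor["body"] = construct(ancestor["instructions"])

child = reproduce(ancestor)
print(child == ancestor)                  # True: the offspring is identical to the parent

# A mutation in the "extra structure" part of the genome is inherited without
# breaking self-reproduction, as von Neumann noted.
ancestor["instructions"][-1] = "mutated extra structure"
ancestor["body"] = construct(ancestor["instructions"])
print(reproduce(ancestor) == ancestor)    # still True
```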
See also On Crick and Watson and Reliable Organization of Unreliable Components

Sunday, December 02, 2012

John Von Neumann Documentary

Thanks to a reader for these links. YouTube is amazing!






Teller on von Neumann's enjoyment of thinking, and his horror at the breakdown of this ability due to his terminal cancer. (Also mentions that vN's relation to "the rest of us" was similar to talking to a three-year-old :-) "Only he was fully awake."
