Sunday, January 18, 2009

The Age of Computing

A remarkable essay by Nick Metropolis, which appeared in Daedalus (winter 1992). Originally trained as an experimental physicist, Metropolis played an important role in the Manhattan Project and was a pioneer in electronic computation. The Metropolis Algorithm, used in Monte Carlo simulation, bears his name.

The Age of Computing: A Personal Memoir
In the history of modern technology, computer science must figure as an extraordinary chapter, and not only because of the remarkable speed of its development. It is unfortunate, however, that the word "science" has been widely used to designate enterprises that more properly belong to the domain of engineering.
"Computer science" is a glaring misnomer, as are "information science," "communication science," and other questionable "sciences." The awe and respect which science enjoys and which engineering is denied is inexplicable, at least to one who sees the situation from the other side. ...
Since World War II, the discoveries that have changed the world were not made so much in lofty halls of theoretical physics as in the less-noticed labs of engineering and experimental physics.
The roles of pure and applied science have been reversed; they are no longer what they were in the golden age of physics, in the age of Einstein, Schrödinger, Fermi and Dirac. Readers of Scientific American, nourished on the Wellsian image of science, will recoil from even entertaining the idea that the age of physical "principles" may be over.
The laws of Newtonian mechanics, quantum mechanics and quantum electrodynamics were the last in a long and noble line that appears to have somewhat dried up in the last 50 years. As experimental devices (especially measuring devices) are becoming infinitely more precise and reliable, the wealth and sheer mass of new and baffling raw data collected by experiment greatly exceeds the power of human reason to explain them.
Physical theory has failed in recent decades to provide a theoretical underpinning for a world which increasingly appears as the work of some seemingly mischievous demiurge. The failure of reason to explain fact is also apparent in the life sciences, where "theories" (of the kind that physics has led us to expect) do not exist; many are doubtful that this kind of scientific explanation will ever be successful in explaining the secrets of life.
...Historians of science have always had a soft spot for the history of theoretical physics. The great theoretical advances of this century -- relativity and quantum mechanics -- have been documented in fascinating historical accounts that have captivated the mind of the cultivated public.
There are no comparable studies of the relations between science and engineering. Breaking with the tradition of the Fachidiot (the narrowly specialized expert), theoretical physicists have bestowed their romantic autobiographies on the world, portraying themselves as the high priests of the reigning cult.
By their less than wholly objective accounts of the development of physics, historians have conspired to propagate the myth of science as being essentially theoretical physics. Though the myth no longer described scientific reality 50 years ago, historians pretended that all was well, that nothing had changed since the old heroic days of Einstein and his generation.
There were a few dissenters, however, such as the late Stanislaw Ulam, who used to make himself obnoxious by proclaiming that Enrico Fermi was "the last physicist." He and others who proclaimed such a possibility were prudently ignored.
Physicists did what they could to keep the myth alive. With impeccable chutzpah, they went on promulgating new "laws of nature" and carefully imitated their masters of another age. With dismaying inevitability, many of these latter-day "laws" have been exposed as quasi-mathematical embellishments, devoid of great physical or scientific significance.
Historians of science have seen fit to ignore the history of the great discoveries in applied physics, engineering and computer science, where real scientific progress is nowadays to be found. Computer science in particular has changed and continues to change the face of the world more thoroughly and more drastically than did any of the great discoveries in theoretical physics.
... Although the basic rules for the computation of reliability were long known, it took several years during and immediately after World War II for the importance of the concept of reliability to be explicitly recognized and dealt with. Only then did reliability computation become an essential feature in computer design.
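[A minimal illustration of the reliability arithmetic alluded to here: the standard series and parallel rules for independent components. The component counts and reliabilities below are invented for the example, not drawn from the essay.]

    # Basic reliability arithmetic for independent components (illustrative only):
    # a series chain works only if every component works; a redundant (parallel)
    # group works if at least one component works.

    def reliability_series(rs):
        p = 1.0
        for r in rs:
            p *= r
        return p

    def reliability_parallel(rs):
        q = 1.0
        for r in rs:
            q *= 1.0 - r
        return 1.0 - q

    # Example: 1,000 components in series, each 99.99% reliable, give a machine
    # that works only about 90% of the time -- which is why reliability had to be
    # designed in, not assumed.
    print(reliability_series([0.9999] * 1000))    # ~0.905
    print(reliability_parallel([0.9, 0.9, 0.9]))  # 0.999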
The late Richard Feynman was one of the first to realize the centrality of reliability considerations in all applied scientific work. In the early days of the Manhattan Project in Los Alamos (in 1943 and early 1944), he tested the reliability of his first program in a dramatic fashion, setting up a day-long contest between human operators working with hand-operated calculators and the first electromechanical IBM machines.
At first, human operators showed an advantage over the electromechanical computers; as time wore on, however, the women who worked with the calculators became visibly tired and began to make small errors. Feynman's program on the electromechanical machine kept working. The electromechanical computers won out by virtue of their reliability.
Feynman soon came to realize that reliable machines in perfect working order were far more useful than much of what passed for theoretical work in physics, and he loudly stated that conviction. His supervisor, Hans Bethe -- the head of T-Division (T for theory) at the time and a physicist steeped in theory -- at first paid no attention to him.
At the beginning of the Manhattan Project, only about a dozen hand-operated machines were available in Los Alamos; they regularly broke down, thereby slowing scientific work. In order to convince Bethe of the importance of reliable computation, Feynman recruited me to help him improve the performance of the hand-operated desk calculators, avoiding the week-long delays in shipping them to San Diego for repairs. We spent hours fixing the small wheels until they were in perfect order. Bethe, visibly concerned when he learned that we had taken time off from our physics research to do these repairs, finally saw that having the desk calculators in good working order was as essential to the Manhattan Project as the fundamental physics.
Throughout his career, Feynman kept returning to the problem of the synthesis of reliable computers. Toward the end of his life, he gave a remarkable address at the 40th anniversary of the Los Alamos Laboratory where he sketched a reliability theory based on thermodynamical analogies.
In contrast to Bethe, John von Neumann very quickly realized the importance of reliability in the design of computers. It is no exaggeration to say that von Neumann had some familiarity (in the 1950s) with all the major ideas that have since proved crucial in the development of supercomputers.
...The first large-scale electronic computer to be built, the one that may be said to inaugurate the computer age, was the ENIAC. It was built at the Moore School of the University of Pennsylvania by an engineer and a physicist -- Presper Eckert and John Mauchly. Their idea, trivial by the standards of our day, was revolutionary when the machine was completed in 1945.
At the time, all electromechanical calculators were built exclusively to perform ordinary arithmetic operations. Any computational scheme involving several operations in series or in parallel had to be planned separately by the user. Mauchly realized that if a computer could count, then it could do finite-difference schemes for the approximate solution of differential equations. It occurred to him that such schemes might be implemented directly on an electronic computer, an unheard-of idea at the time.
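[A sketch of the kind of finite-difference scheme Mauchly had in mind, reduced to repeated counting and arithmetic. The particular equation, step size and step count are made-up choices for illustration.]

    # Euler finite-difference scheme for dy/dt = -y with y(0) = 1 (an illustrative
    # choice). Each step is one multiply and one add -- exactly the sort of
    # repetitive arithmetic an electronic machine can grind through.

    def euler(f, y0, t0, t1, n_steps):
        dt = (t1 - t0) / n_steps
        t, y = t0, y0
        for _ in range(n_steps):
            y = y + dt * f(t, y)   # y_{k+1} = y_k + dt * f(t_k, y_k)
            t = t + dt
        return y

    print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))  # ~0.3677, close to exp(-1)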
...Of all the oddly named computers, the MANIAC's name turned out to be the most unfortunate: George Gamow was instrumental in rendering this and other computer names ridiculous when he dubbed the MANIAC "Metropolis And Neumann Install Awful Computer."
Fermi and Teller were the first hackers. Teller would spend his weekends at the laboratory playing with the machine. Fermi insisted on doing all the menial work himself, down to the least details, to the awed amazement of the professional programmers. He instinctively knew the right physical problems that the MANIAC could successfully handle.
His greatest success was the discovery of the strange behavior of nonlinear systems arising from coupled nonlinear oscillators. The MANIAC was a large enough machine to allow the programming of potentials with cubic and even quartic terms. Together with John Pasta and Stanislaw Ulam, he programmed the evolution of a mechanical system consisting of a large number of such coupled oscillators. His idea was to investigate the time required for the system to reach a steady state of equidistribution of energy. By accident one day, they let the program run long after the steady state had been reached. When they realized their oversight and came back to the computer room, they noticed that the system, after remaining in the steady state for a while, had then departed from it, and reverted to the initial distribution of energy (to within 2 percent).
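[A minimal sketch of the kind of system they programmed: a chain of unit masses joined by springs whose force law has a quadratic correction (a cubic term in the potential), started with all energy in the lowest mode. The chain length, coupling strength and integration details are illustrative choices, not the parameters of the original MANIAC run.]

    # Fermi-Pasta-Ulam-style chain with fixed ends: linear springs plus a
    # quadratic ("alpha") nonlinearity. Illustrative parameters only.
    import numpy as np

    N, alpha, dt, steps = 32, 0.25, 0.1, 100000
    n = np.arange(1, N + 1)
    x = np.zeros(N + 2)                    # positions, including fixed end points
    v = np.zeros(N + 2)                    # velocities
    x[1:-1] = np.sin(np.pi * n / (N + 1))  # all energy starts in the lowest mode

    def accel(x):
        a = np.zeros_like(x)
        dr = x[2:] - x[1:-1]               # stretch of each right-hand spring
        dl = x[1:-1] - x[:-2]              # stretch of each left-hand spring
        a[1:-1] = (dr - dl) + alpha * (dr**2 - dl**2)
        return a

    def mode1_energy(x, v):
        # Harmonic energy in the lowest normal mode (projection onto sin(pi n/(N+1))).
        s = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * n / (N + 1))
        q, qdot = np.dot(x[1:-1], s), np.dot(v[1:-1], s)
        w1 = 2.0 * np.sin(np.pi / (2 * (N + 1)))
        return 0.5 * (qdot**2 + (w1 * q)**2)

    # Velocity-Verlet time stepping. Energy leaks out of mode 1 into higher modes,
    # but in long enough runs most of it flows back -- the recurrence described above.
    for step in range(steps + 1):
        if step % 10000 == 0:
            print(step * dt, mode1_energy(x, v))
        a = accel(x)
        v += 0.5 * dt * a
        x += dt * v
        v += 0.5 * dt * accel(x)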
The results were published in what was to be the last paper Fermi published before he died. Fermi believed this computer-simulated discovery to be his greatest contribution to science. It is certainly the first major scientific discovery made by computer, and it is not fully understood to this day (though it has spawned some beautiful ideas). ...
[Metropolis was not up to date on ergodic theory, integrability and chaos?]
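[For reference, the standard background on the recurrence just described, in the usual notation (a sketch, not taken from the essay): the equations of motion of the Fermi-Pasta-Ulam "alpha" chain, with unit masses and spring constants, are

    \ddot{x}_n = (x_{n+1} - 2x_n + x_{n-1}) + \alpha\left[(x_{n+1} - x_n)^2 - (x_n - x_{n-1})^2\right],

and in the long-wavelength continuum limit (Zabusky and Kruskal, 1965) the chain reduces to the Korteweg-de Vries equation

    u_t + u\,u_x + \delta^2 u_{xxx} = 0,

whose soliton solutions, together with KAM-type persistence of invariant tori under small perturbations, are the standard heuristic explanation for the near-return to the initial state.]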
theoretical physics : engineering :: fashion show : power loom
I think he does know about these things. Everyone believes it happens because FPU is a discrete version of the KdV equation and by KAM theory some invariant tori will persist when perturbed. However, a full proof that this is true didn't exist as late as the early 1990s, which was the last time I paid direct attention to the topic. These small-divisor-type perturbative calculations in high dimensions are notoriously difficult to do.

I just read the full essay, and I think Metropolis doesn't make a distinction between what should perhaps be called Scientific Computation, which is what he contributed to, and Computer Science, which is a field of mathematics. The latter has very deep and nontrivial ideas that the general public has no clue about, especially in 1992. I would also say that Monte Carlo has been put on solid ground if you take a Bayesian perspective: it is an approximate way to map out the posterior probability.
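[A minimal sketch of that last point, using the sampling algorithm that bears Metropolis's name to map out a toy posterior. The data, prior, likelihood and step size below are invented for the illustration.]

    # Random-walk Metropolis sampling of a toy one-dimensional posterior:
    # Gaussian prior on mu, unit-variance Gaussian likelihood, made-up data.
    import math, random

    data = [4.2, 5.1, 4.8, 5.5, 4.9]                     # invented observations

    def log_posterior(mu):
        log_prior = -0.5 * mu**2 / 10.0**2               # N(0, 10^2) prior
        log_lik = sum(-0.5 * (x - mu)**2 for x in data)  # N(mu, 1) likelihood
        return log_prior + log_lik

    random.seed(0)
    mu, samples = 0.0, []
    for _ in range(50000):
        proposal = mu + random.gauss(0.0, 0.5)           # symmetric random-walk step
        log_alpha = log_posterior(proposal) - log_posterior(mu)
        if random.random() < math.exp(min(0.0, log_alpha)):  # Metropolis acceptance
            mu = proposal                                # accept; otherwise keep mu
        samples.append(mu)

    # The histogram of samples approximates the posterior; its mean lands near
    # the data average, pulled slightly toward the prior.
    print(sum(samples) / len(samples))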
The first occurrence of Metropolis's name in your post could use a capital 'M'.
Thanks! How did I miss that?
Carson, thanks for the clarification.
It's ironic that Metropolis' essay repudiates the dismissive first comment (12:15 AM) by the curmudgeonly Anonymous, whom we heard from a few posts ago, even as it seems to echo it. Few people have integrated the theoretical and the practical—call it good engineering sense—as effectively as Richard Feynman and Enrico Fermi.

"Few people have integrated the theoretical and the practical—call it good engineering sense—as effectively as Richard Feynman and Enrico Fermi."

I tell pretty much anyone who will listen that if you cannot successfully navigate from the concrete to the abstract and back again, then you really don't understand what you are talking about. That's gibberish to most people, though.
A meeting of the next generation ('70s versus the '40s and '50s)...
I remember having arguments about computer science versus computer engineering in the 1970s. In 1975 the MIT EE department chose to become Electrical Engineering and Computer Science, sort of a compromise. There is actually a fair bit of computer science, but it is computer engineering that has been so incredibly productive these past decades. This pattern is not new.
Look at the golden age of heat engines, which led to the supporting science of thermodynamics. Steam engines rebuilt the world, transportation, and the means of production, even as scientists struggled to understand how these machines worked. You can find lots of fields in which engineering led the way and science followed.
Right now the big challenge in the life sciences is in managing all the information. My impression is that we are going to see some science pulled out of this, though it may not look like theoretical physics. Newton's breakthrough with gravity revolved around stuff that could be computed in his day. We can compute a lot more nowadays, so our equivalent formulas and equations are going to have to take that into account.