The excerpt below the abstract is from the final section of the paper (apologies for the latex remnants). In that section I discuss a rather strong implication of quantum mechanics. Simple entropic or information-theoretic arguments, together with standard big bang cosmology, imply that essentially all the detailed aspects of the world around us (the arrangement of galaxies in clusters, electrons in stars, leaves on trees, or books on bookshelves) are random consequences of quantum outcomes. There is simply not enough information in the initial conditions to specify all of these things. Unless their variability is illusory, it must result from quantum randomness. Very little about the universe today is predictable, even with perfect knowledge of the initial conditions and subsequent dynamical evolution.
arXiv:0704.1154 [hep-th]
Information, information processing and gravity
Abstract: I discuss fundamental limits placed on information and information processing by gravity. Such limits arise because both information and its processing require energy, while gravitational collapse (formation of a horizon or black hole) restricts the amount of energy allowed in a finite region. Specifically, I use a criterion for gravitational collapse called the hoop conjecture. Once the hoop conjecture is assumed, a number of results can be obtained directly: the existence of a fundamental uncertainty in spatial distance of order the Planck length, bounds on information (entropy) in a finite region, and a bound on the rate of information processing in a finite region. In the final section I discuss some cosmological issues related to the total amount of information in the universe, and note that almost all detailed aspects of the late universe are determined by the randomness of quantum outcomes. This paper is based on a talk presented at a 2007 Bellairs Research Institute (McGill University) workshop on black holes and quantum information.
How much information in the universe?
In this final section we ask how much information is necessary to specify the current state of the universe, and where that information came from.
There is convincing observational evidence for the big bang model of cosmology, and specifically for the fact that the universe is and has been expanding. In a radiation-dominated universe, the FRW scale factor grows as $R(t) \sim t^{1/2}$, where $t$ is the comoving cosmological time. From this, it is clear that our universe evolved from a much smaller volume at early times. Indeed, in inflationary cosmology (see the inflation figure in the paper) the visible universe results from an initial patch which is exponentially smaller than our current horizon volume. The corresponding ratio of entropies is similarly gigantic, meaning that there is much more information in the universe today than in the small primordial patch from which it originated. Therefore, the set of possible early universe initial conditions is much, much smaller than the set of possible late-time universes. A mapping between all the detailed rearrangements or modifications of the universe today and the set of possible initial data is many to one, not one to one [rnd].
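A schematic counting (pigeonhole) sketch makes the many-to-one statement quantitative. The symbols $S_i$, $S_f$, $N_i$, $N_f$ below are illustrative labels of my own, not notation from the paper. If the primordial patch and the present horizon volume carry entropies $S_i$ and $S_f$, the numbers of distinguishable configurations are roughly

$$ N_i \sim e^{S_i}, \qquad N_f \sim e^{S_f}, \qquad S_f \gg S_i . $$

Deterministic evolution of the initial data could therefore realize at most $N_i$ of the $N_f$ possible late-time configurations, and any map from late-time configurations back to initial data must identify, on average,

$$ \frac{N_f}{N_i} \sim e^{S_f - S_i} \gg 1 $$

of them with the same initial condition.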
Thus, the richness and variability of the universe we inhabit cannot be attributed to the range of initial conditions. The fact that I am typing this on a sunny day, or that our planet has a single moon, or that the books on my office shelves have their current arrangement, was not determined by big bang initial data.
How, then, do the richness and variability of our world arise? The answer is quantum randomness: the randomness inherent in the outcomes of quantum measurements.
Imagine an ensemble $\Psi$ of $n$ qubits, each prepared in an identical state $\psi$. Now imagine that each qubit is measured, yielding a spin up ($+$) or spin down ($-$) result. There are $2^n$ possible records, or histories, of this measurement. This is an exponentially large set of outcomes; among them are all possible $n$-bit strings, including every $n$-bit work of literature it is possible to write! Although the initial state $\Psi$ contained very little information (essentially, only a single qubit of information, since each spin is in an identical state), $n$ bits of classical information are required to specify which of the $2^n$ outcomes is observed in a particular universe. For $n \rightarrow \infty$ the set of possible records is arbitrarily rich and varied, despite the simplicity of the initial state $\Psi$.
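As a concrete toy, here is a short sketch of this counting (my own illustration, not code from the paper): the preparation of all $n$ qubits is fixed by a single angle, yet specifying which of the $2^n$ outcome strings actually occurred takes $n$ classical bits.

```python
import numpy as np

# Toy illustration (not from the paper): n qubits, each prepared in the same
# single-qubit state cos(theta/2)|0> + sin(theta/2)|1>, then measured in the
# z basis. The preparation is specified by one angle, but the measurement
# record is an n-bit string, one of 2^n possibilities.

rng = np.random.default_rng(0)

n = 32                         # number of identically prepared qubits
theta = np.pi / 2              # preparation angle; pi/2 gives p(up) = 1/2
p_up = np.cos(theta / 2) ** 2  # Born-rule probability of spin up

# Each measurement independently yields spin up (1) or spin down (0).
record = rng.random(n) < p_up

print("measurement record:", "".join("1" if b else "0" for b in record))
print("classical bits needed to specify it:", n)
print("number of possible records: 2^%d = %d" % (n, 2 ** n))
```

Rerunning with a different seed produces a different record from the identical preparation, which is exactly the sense in which the information lives in the outcomes rather than in the initial state.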
In the same way, given an initial quantum state $\Psi$ describing the primordial patch of the big bang from which our horizon volume evolved, one must still know the outcomes of a large number of quantum measurements in order to specify the particulars of the universe today. From a many-worlds perspective, one must specify all the decoherent outcomes to indicate a particular branch of the wavefunction: a staggering amount of information. Equivalently, from the traditional Copenhagen perspective, each quantum measurement injects a bit (or more) of truly random information into our universe, and this randomness accounts for its variability.
The most familiar cosmological quantum randomness comes from fluctuations of the inflaton field, which determine the spectrum of primordial energy density fluctuations. It is these density fluctuations that determine the locations of galaxies, stars and planets today. However, from entropic or information-theoretic considerations we readily deduce that essentially every detailed aspect of our universe (beyond the fundamental Lagrangian and some general features of our spacetime and its contents) is a consequence of quantum fluctuations!
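A toy sketch of that last point (again my own illustration, not anything from the paper, and the power law below is a toy spectrum rather than a realistic transfer function): two Gaussian random fields drawn from the same spectrum share identical statistics, but their particular patterns of over- and under-densities, the analogue of where galaxies end up, differ from realization to realization.

```python
import numpy as np

# Toy illustration (not from the paper): two realizations of a Gaussian random
# field drawn from the SAME toy power-law spectrum but with different random
# seeds. The statistical description is identical; the particular pattern of
# over- and under-densities (the analogue of where galaxies form) is not.

def toy_density_field(n=128, ns=0.96, seed=0):
    """Generate a 2D Gaussian random field with toy power spectrum P(k) ~ k^ns."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    k[0, 0] = 1.0                      # avoid the k = 0 mode
    power = k ** ns                    # toy spectrum, not a realistic one
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * np.sqrt(power)).real

a = toy_density_field(seed=1)
b = toy_density_field(seed=2)
print("field std devs (same statistics):", np.std(a).round(3), np.std(b).round(3))
print("cross-correlation (different realizations):",
      np.corrcoef(a.ravel(), b.ravel())[0, 1].round(3))
```

The near-zero cross-correlation is the toy version of the claim in the text: the spectrum is fixed by the dynamics, but which realization we inhabit is set by quantum outcomes.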