Where Are We in the Multiverse?
MARCH 17, 2014
There are two avenues from modern physics to the belief that the universe we see around us is not all there is, but is instead one of infinitely many like it. The first is inflationary cosmology; the second is quantum mechanics. Though very different, these two multiverse models share two features: first, they both posit objective physical probabilities that tell us how likely we are to be in some portion of the multiverse rather than telling us how likely the multiverse is to be some way or another; and second, they both have a problem with prediction and confirmation. I’ll discuss the relationship between self-locating probability and confirmation in these theories.
Our first avenue to the multiverse is cosmological: many inflationary models predict that inflation is eternal, with the inflating background continuously spinning off bubble universes in a sea of expansion. This leads to infinitely many distinct universes, each with its own fundamental constants and its own ratio of dark energy to dark matter and ordinary matter (for more on this see [6]).
The second comes from one interpretation of quantum mechanics. The Everett, or many worlds, interpretation holds that the world is completely characterized by a universal quantum wavefunction which never collapses. After any experiment, the wavefunction—and the world—splits, with a branch corresponding to every possible measurement outcome. So, for example, if I am measuring the spin of an electron, after my measurement there are two descendants of me: one who measured spin up, and one who measured spin down, each living in his own local universe. We should note that this is just one—very controversial—way of understanding quantum mechanics.
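As a schematic (the amplitudes \(a\) and \(b\) below are placeholders, and the notation is only meant to be suggestive), the measurement takes a ready observer into a superposition of two recording observers:

\[
\bigl(a\,|{\uparrow}\rangle + b\,|{\downarrow}\rangle\bigr)\,|\text{ready}\rangle \;\longrightarrow\; a\,|{\uparrow}\rangle\,|\text{saw up}\rangle \;+\; b\,|{\downarrow}\rangle\,|\text{saw down}\rangle,
\]

where each term on the right is read as a separate branch, that is, a separate local universe.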
To keep things simple we’ll call the totality of all that there is ‘the multiverse’ and smaller, isolated, universe-like regions ‘local universes’. We typically take these two theories as providing us with very different multiverses: the cosmological multiverse is a collection of matter-filled regions (local universes) separated by infinitely expanding space, whereas the many worlds model of quantum mechanics gives us one wavefunction in a superposition of states, each of which corresponds to a local universe. But some cosmologists think these might be linked: we won’t discuss this here, but see [1] and [2] if you’re interested.
These two theories share a problem: they are apparently unfalsifiable. Since they predict that all measurement outcomes occur in some local universe, there are no results which are incompatible with either theory. Even if falsifiability is not the arbiter of scientific worthiness, the problem remains. We gain evidence for a theory by testing its predictions; but since these theories claim that every experimental outcome occurs somewhere, they don’t seem to predict anything about any particular experiment. So it is difficult to see how any experimental result or observation could possibly count as evidence for either.
In the case of inflationary multiverse models, the model predicts that every possible ratio of matter to dark energy exists in some universe; it predicts that there is some universe for every way of setting (at least some of) the fundamental constants, and for every distribution of matter (this isn’t universally agreed upon, although consensus is growing; for an overview see [3], and for dissent from the inflationary paradigm see [5]). But we take features of our local universe—such as its vacuum energy or the uniformity of the microwave background radiation—to be evidence for the theory. How is this possible if the theory predicts that there are infinitely many universes without these features?
In the case of many worlds quantum mechanics, the theory predicts that every experimental outcome occurs on some branch. But we take the results we observe, such as the frequency of spin-up results in a Stern-Gerlach experiment, to be evidence for the theory. How is this possible if we know that the theory predicts infinitely many branches with different frequencies?
Call this the evidence challenge. The answer given by both theories is roughly the same: although we know that, for each experiment, every possible result shows up somewhere, we can still have a probability that we are in some region of the multiverse. We get a probability that our area is like this rather than like that.
What’s weird about this is that it is not a probability for the multiverse to develop in some way. We know exactly how the multiverse will develop. Instead, it is a probability that we are in some part of it rather than another. It’s essentially self-locating or indexical. (Philosophers call this sort of probability de se.) We know what the multiverse is like with certainty; our predictions, and so our evidence, are predictions about where we are instead of predictions about what happens.
Confirmation, on this model, involves two steps. First, we gather information E about our local universe. We then assume that we are in a typical part of the multiverse—a region that’s like most others. Our evidence E confirms the theory if and only if the theory says that E holds in most places. To show that a theory can be confirmed, then, we must show that it gives us a natural measure which can tell us what most universes are like.
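Put schematically (with \(\mu_T\) standing in for whatever measure the theory \(T\) supplies over local universes), the procedure sets

\[
\Pr(\text{our local universe is } u) \;=\; \mu_T(u),
\]

and counts the evidence \(E\) as confirming \(T\) just in case \(\mu_T(\{u : E \text{ holds in } u\})\) is high, that is, just in case \(T\) itself says that \(E\)-regions make up most of the multiverse.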
But this talk about ‘most’ is too quick: there are infinitely many local universes of each type, so a straightforward count cannot tell us what most are like. Coming up with the right measure of ‘most’ is the cosmological measure problem, and there is not yet a consensus about what the correct measure is. Without an agreed-upon measure on the table, it’s hard to tell whether our observations could give evidence for the theory. Most proposed measures involve finding a preferred ordering of observations, cutting the sequence off before it diverges, and taking the limiting relative frequency as the cutoff is removed; the toy example below shows why the choice of ordering matters. (For a recent overview of the options, see [4].) We then assume that we are equally likely—according to the measure—to be any observer. This assumption is called the typicality or Copernican assumption.
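Here is a minimal toy sketch of why the ordering matters; the two universe types and the two orderings are invented purely for illustration and are not drawn from any real cosmological proposal. The point is that a collection with infinitely many universes of each type yields different limiting frequencies depending on how it is enumerated before the cutoff is imposed.

# Toy illustration of the cosmological measure problem: the limiting
# relative frequency of type-A universes depends on how the infinite
# collection is ordered before a cutoff is imposed.

def alternating(n):
    # Ordering 1: A, B, A, B, ...
    return 'A' if n % 2 == 0 else 'B'

def two_As_per_B(n):
    # Ordering 2: A, A, B, A, A, B, ...
    return 'B' if n % 3 == 2 else 'A'

def frequency_of_A(ordering, cutoff):
    # Relative frequency of type-A universes among the first `cutoff` entries.
    sample = [ordering(n) for n in range(cutoff)]
    return sample.count('A') / cutoff

for cutoff in (10, 1_000, 100_000):
    print(cutoff, frequency_of_A(alternating, cutoff), frequency_of_A(two_As_per_B, cutoff))

# Ordering 1 gives frequencies tending to 1/2, Ordering 2 to 2/3, even though
# each ordering contains infinitely many universes of both types.

Whichever ordering (and cutoff procedure) we privilege fixes what ‘most’ means; choosing among the candidates is the measure problem in miniature.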
Proponents of many worlds quantum mechanics agree that the natural measure over branches is given by the Born rule—which tells us that the likelihood that we’re in some local universe is proportional to the squared amplitude of that local universe’s branch in the universal wavefunction. More branches are like ours, in the measure’s sense, if our branch carries a high weight. The trick, for many worlds, is not figuring out what the correct measure is; it’s justifying the use of this measure to gain evidence for the theory. Most justifications go via decision theory: they argue that a rational agent in a many-worlds universe will use the Born rule to weight her decisions. To their opponents, these justifications seem too pragmatic. (For a thorough exploration of this strategy, see [7] or [8].)
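In symbols (this is only a schematic, and assumes the universal state is normalized): if

\[
|\Psi\rangle \;=\; \sum_i c_i\,|\mathrm{branch}_i\rangle, \qquad \sum_i |c_i|^2 = 1,
\]

then the Born weight of branch \(i\) is \(|c_i|^2\), and the self-locating probability that we are on branch \(i\) is taken to be \(|c_i|^2\) as well. In the measure’s sense of ‘most’, high-weight branches dominate even though every branch exists.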
There’s a knee-jerk reaction to all of this, which is to reject the idea that objective physical probabilities can be self-locating. Physics should tell us how likely the universe is to have some property, or how likely things are to develop in a certain way, or how likely an experimental outcome is. It’s supposed to give us probabilities which are about the world.
This seems to be a requirement if these objective probabilities are going to feature in explanations of our surroundings, which physical probabilities surely do. Self-locating probabilities don’t seem like the sort of thing that can do this. How can the likelihood that I’m over here, rather than over there, explain why this electron is spin up? How can we explain the structure of our universe by citing the likelihood that we end up here rather than somewhere else?
And one response to this knee-jerk reaction is to reject an underlying intuition about explanations and physical probability—that the probabilities must guide the world, and that explaining A requires showing how A was produced. Doing so requires us to think of physical probabilities as deeply related to us: on this view, physical probabilities are just the best way of encoding information about what we should expect. Explanation, likewise, is closely connected to telling us what we ought to have expected, or to showing how what we observe is part of a unified system. This is a revisionary take on physical probability, but one that many of us might already accept.
But even if we accept this us-directed notion of physical probability, both theories still have to justify the inference procedure described above, for one might doubt that any inferences of that sort are justified. Making these inferences requires us to rely on a typicality principle: to make a prediction, we must assume that our local universe is like most of the universes consistent with our evidence. But what could justify this principle? Perhaps, like Hume’s Principle of the Uniformity of Nature (PUN), this is something we must accept in order to do science but cannot justify. Still, a proponent of this sort of reasoning now has two basic epistemic assumptions: PUN and Typicality.
Comments welcome!
References:
[1] Aguirre, Anthony, and Max Tegmark (2012). “Born in an Infinite Universe: a Cosmological Interpretation of Quantum Mechanics.” arXiv:1008.1066v2
[2] Bousso, Raphael, and Leonard Susskind (2011). “The Multiverse Interpretation of Quantum Mechanics.” arXiv:1105.3796v3
[3] Davies, Paul C. W. (2004). “Multiverse Cosmological Models.” arXiv:astro-ph/0403047
[4] Freivogel, Ben (2011). “Making Predictions in the Multiverse.” arXiv:1105.0244v2
[5] Ijjas, Anna, Paul Steinhardt, and Abraham Loeb (2013). “Inflationary Paradigm in Trouble After Planck 2013.” arXiv:1304.2785v2
[6] Susskind, Leonard (2003). “The Anthropic Landscape of String Theory.” arXiv:hep-th/0302219
[7] Wallace, David (2005). “Quantum Probability from Subjective Likelihood: Improving on Deutsch’s Proof of the Probability Rule.” arXiv:quant-ph/0312157v2
[8] Wallace, David (2012). The Emergent Multiverse. Oxford: Oxford University Press.