Quote:
Originally Posted by thylacine
Why is it that you can "consider the probabilistic model as simply capturing statistical properties of various unknown factors that are influencing the dynamics"? Why is this so effective? Is there any deep reason, or even a deep theorem, why this should be so?
The theorems of probability theory speak only to the mathematics, which is essentially independent of whatever interpretation you wish to attach to your mathematical model. But let me illustrate what I said before with the simple example of a (potentially biased) coin flip.
The standard model would be the probability space (Ω,F,P), where Ω = {0,1} (the sample space), F = {∅,{0},{1},Ω} (the
σ-algebra of events), and P (the probability measure) is given by P(∅) = 0, P({0}) = p, P({1}) = 1 - p, and P(Ω) = 1, where p ∈ [0,1] is some fixed real number. The value 0 corresponds to heads, the value 1 corresponds to tails, and we understand p to denote the probability of heads.
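As a minimal sketch (my own illustration, not part of the original argument), the probability space above can be written out directly in Python, with the measure P defined as a sum of point masses. The specific value of p is a hypothetical choice:

```python
p = 0.6  # hypothetical bias; any fixed p in [0, 1] works

omega = {0, 1}                             # sample space (0 = heads, 1 = tails)
sigma_algebra = [set(), {0}, {1}, {0, 1}]  # all four events

def P(event):
    """Probability measure: sum of the point masses over an event."""
    mass = {0: p, 1: 1 - p}
    return sum(mass[w] for w in event)

# The four values required of the measure:
assert P(set()) == 0
assert P({0}) == p
assert abs(P({1}) - (1 - p)) < 1e-12
assert abs(P({0, 1}) - 1) < 1e-12
```

Nothing here is deep; it just makes explicit that the whole model is four sets and four numbers.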
Now suppose we take the position that the actual coin flip is deterministic, governed by the laws of physics. If we knew the exact initial conditions, we could compute and infallibly predict the result of the coin flip. The randomness comes only from the fact that we do not know the exact initial conditions and/or we do not know the relevant laws of physics in their entirety. How might we connect this position with the model given above?
One possibility is this. There are n relevant, unknown initial conditions, x_1,...,x_n. Each x_j is an element of some set E_j. The (deterministic) dynamics of the coin flip are represented by a function f, whose domain is the product space
E_1 × ... × E_n,
and whose range is simply {H,T}. If we knew the initial conditions, (x_1,...,x_n), and we knew the function f, then the result of the coin flip would simply be f(x_1,...,x_n).
To connect this model with the original probability space, we simply identify the element 0 ∈ Ω with the set f^{-1}(H), and the element 1 ∈ Ω with the set f^{-1}(T). In this way, our ignorance about the initial conditions, (x_1,...,x_n), and about the function f, has been transferred to the unknown outcome ω ∈ Ω in the probability space. The case ω = 0 corresponds to the possibility that the unknown initial conditions satisfy f(x_1,...,x_n) = H; the case ω = 1 corresponds to the alternative.
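The hidden-variable picture above can be sketched in a few lines of Python. The particular dynamics f and the distribution of the unknown initial conditions are invented for illustration; the point is only that a perfectly deterministic f, fed initial conditions we are ignorant of, produces outcome frequencies that look like a coin with some fixed p:

```python
import random

def f(x1, x2):
    """Deterministic 'dynamics': the outcome depends sensitively on the
    (hypothetical) initial conditions x1 and x2, like a fast-spinning coin."""
    return "H" if int((x1 + x2) * 1000) % 2 == 0 else "T"

# Our ignorance of (x1, x2) is modeled by drawing them from a distribution;
# uniform on [0, 1) is an arbitrary, illustrative choice.
random.seed(0)
n_trials = 100_000
heads = sum(1 for _ in range(n_trials)
            if f(random.random(), random.random()) == "H")

print(heads / n_trials)  # close to 1/2 for this (symmetric) choice of f
```

Nothing random happens inside f itself; the frequency of heads is entirely a statistical property of the unknown inputs, which is exactly the identification of 0 ∈ Ω with f^{-1}(H) described above.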
There is no deep philosophy here and nothing so far is at all controversial. The controversy begins when we ask questions about p. (What value, if any, should/can we assign to p? What, if anything, does the value of p correspond to in the real world?) But up until now, everything is completely straightforward. So the very mathematical structure of probability theory seems to fit quite naturally with the concept of hidden variables and unknown factors.
Quote:
Originally Posted by thylacine
FWIW, I don't believe in MWI, just as I don't believe in analogous "splitting realities" interpretations in other probabilistic models.
Nevertheless there are non-trivial questions about why it is that before a coin flip there is a 50% chance of heads, but afterwards it is 0% or 100%.
I agree that this issue raises non-trivial philosophical difficulties. The difference between this and MWI, in my opinion, is that several (different) answers have been seriously developed. People still argue over which are the "right" answers. But the individual interpretations have stood the test of time, they consistently fit within our mathematical framework of probability theory, and they have not so far exhibited any internal inconsistencies. I do not believe the same can be said for attempts to connect probabilistic interpretations with MWI.
Incidentally, in my opinion it is not necessarily true that "afterwards it is 0% or 100%." For example, suppose I shake a pair of dice in a closed, opaque cup. I then set the cup on the table without opening it. I am perfectly happy with the claim that the probability the dice show a sum of 5 is 1/9 (since 4 of the 36 equally likely outcomes sum to 5), even though the dice have already been rolled.
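That closing claim is easy to check, both by exact counting and by simulation. A short sketch (my own, for illustration):

```python
import random
from fractions import Fraction
from itertools import product

# Exact: count the outcomes summing to 5 among the 36 equally likely pairs.
outcomes = list(product(range(1, 7), repeat=2))
exact = Fraction(sum(1 for a, b in outcomes if a + b == 5), len(outcomes))
print(exact)  # 1/9

# Monte Carlo: roll the (already-rolled-but-unseen) dice many times.
random.seed(1)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 5)
print(hits / trials)  # close to 1/9 ≈ 0.111
```

The probability 1/9 describes our state of knowledge about the dice, not any residual physical randomness in the cup, which is the whole point.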