Sleeping Beauty Problem
Suppose a fair coin is flipped. If heads one Black ball is put in a bag. If tails two White balls are put in the bag. Let X be a random ball produced by one run of the experiment. What is the probability X is Black? I can easily describe a natural way to define a "random ball produced by the experiment" by letting the experiment run then randomly picking a ball from the bag. By that extension of the experiment the probability is 1/2 that the "random ball" is Black. I see no natural way to describe how one run of the experiment produces a "random ball" with 1/3 probability for Black.
PairTheBoard
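For what it's worth, here is a quick Python sketch of the natural extension just described: run the experiment, then pick one ball at random from the bag. Black comes up about half the time.

import random

def run_experiment():
    return ['B'] if random.random() < 0.5 else ['W', 'W']  # heads: one Black ball; tails: two White balls

N = 100_000
black = sum(random.choice(run_experiment()) == 'B' for _ in range(N))
print(black / N)  # ~0.5 when "a random ball produced by the experiment" means one draw from the bag per run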
Yea that's a good explanation and I believe one I've given in past threads. The thing that bothers me about it now is that it seems to be coming from a perspective that allows E[Y] to be understood as a function of X but just not equal to 1.25X in this case because of equivocation on X. An unsophisticated reader might think, oh, so since switching doesn't matter then E[Y] as a function of X must just be X, i.e. E[Y] = X. Of course there is a kind of expectation for Y as a function of X, namely the conditional expectation E[Y|X], but that's not equal to X either. So I'm now inclined to prefer the explanation that X and Y are both nonconstant random variables, so E[Y] is a fixed number while 1.25X has at least 2 possible values. Maybe explaining the equivocation is more to the point though.
The thing is, X and Y have already been defined as the Dollar amounts in the First and Second Envelopes. If we were just converting units from say Dollars to Euros at the fixed rate .75 Euros per Dollar then I'd protest using Y for the Euro equivalent in the Second Envelope and prefer defining a Y' = .75Y for that purpose. However, if Y were used for both and the context was always clear I guess I could live with it.
But we're doing something very different here and something I'm finding very unusual. We're converting from a fixed Dollar unit to a Random Unit. Maybe this comes natural to those of you who work with Change of Measure stuff all the time. But I always found that very difficult and have long forgotten what little I was ever able to learn of it. So when we say the random variable X is now the Unit and the random variable Y is now a random variable expressed in terms of the random Unit X, I'm afraid I need more precise notation and definitions for how that's to be done than just saying, "in that case X would be 1" and "Y would already be in units of X", so we can now say E[Y] = 1.25 (1).
I don't think Y should be used both for the pre random unit conversion and the post random unit conversion. And I think the post random unit conversion of X should involve notation showing where it comes from rather than just saying it's 1.
E(Y/X) = 1.25
right? How do you justify that?
The best notation seems like a Jason question. If you gain any more insights, let me know. Privately is probably best because I don't think anyone else cares, and this is really not germane to SB.
I think you are wrong. The expected value concept applies to functions, the same way integration applies to functions. When you see "expected value of a constant" it's just a shortcut for "expected value of a random variable which is constant". It usually doesn't matter at all anyway but could create confusion.
f(x) = 2
Can you integrate f(x)? Yes? So you can integrate a constant? Now I am in no way saying that a function IS a constant. A function is a set of ordered pairs. Earlier I stated that a constant is a random variable whose value is the same for all outcomes. Since a random variable is a function, you'd think you could implicate me in saying that a constant is a function. But it depends on what the definition of "is" is. When math is translated to English, equals is often translated to "is", as in "f(x) is 2". That means that the function takes on the value of 2 for all values of x. It doesn't mean that a function is identically a constant. Sometimes the mathematical symbol for identical is used to distinguish these; it's an equal sign with 3 bars. pi is identically 3.1415.... But f(x) merely equals 2.
Is P(2 = 2) = 1?
Is P(2 <> 2) = 0?
If so then E(2) = 2.
If this is a problem for you, then just define a random variable whose value is 2 for all outcomes.
I think you confused it with:
E(X+2) = E(X) + 2, or E(X+2) = E(f(x)) + E(g(x)) where:
f(x) is the random variable X and g(x) is a function which is 2 for every x.
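A tiny sketch of both points, using an arbitrary two-outcome distribution for X: the expectation of the random variable which is 2 for every outcome is 2, and E(X+2) = E(X) + 2.

from fractions import Fraction as F

dist = {0: F(1, 2), 6: F(1, 2)}                # an arbitrary two-outcome distribution for X
def E(f):
    return sum(p * f(x) for x, p in dist.items())
print(E(lambda x: 2))      # 2 -- expectation of the random variable which is 2 for every outcome
print(E(lambda x: x))      # 3 -- E(X)
print(E(lambda x: x + 2))  # 5 -- E(X+2) = E(X) + 2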
If you try to write down formally the argument which produces the "expected holding" in the 2nd envelope as 1.25 times the holding in the 1st envelope, you will arrive at either:
Meh. I think that if you try to write it down formally you will see that all that stuff with numeraire doesn't solve anything.
E(Y/X) = 1.25 while E(Y) = E(X).
If you want to maximize dollars, use the second one. Both Aaron and PairTheBoard have given examples where it is appropriate to use the first one. Aaron with auction scrip, and PairTheBoard with loaves of bread. I show Aaron's again at the end of this post. I just brought it up as a parallel way to view the arbitrariness in probabilities that Jason1990 showed for SB.
You need the underlying distribution of the amounts in the envelopes to talk about EV unless you use an amount which doesn't change (i.e. the smaller amount out of both envelopes).
E(Y/X) = 1.25
with no regard to a distribution, and this is not in dispute. Nor is it reasonable that we should need a distribution to determine this. We are only given that one envelope has an amount that is either twice or half the other with equal probabilities. I tell you that I picked 2 numbers where one is twice the other. You are 100% certain that I'm an honorable man who would never lie to you about this. Why should it matter what the magnitudes of the 2 numbers are? I could just change the units and make the numbers whatever I want. We don't need a distribution because we are only interested in the value of the second envelope in terms of the first envelope. That's why we are working in units of the first envelope as a numeraire.
Suppose a very rich friend of yours organizes a 2-envelope game for you, once in toy points and once in dollars. You open up the first envelope and see 50,000 points. Do you switch?
In the 2nd game you see 50,000 dollars. Do you switch?
Do you think those situations are different if your goal is to maximize the amount of points/dollars in both cases?
Sorry Jeff. I was in no way trying to insult you. I know you're extremely intelligent and assume your math skills are very good.
But you're having very intelligent guys with lofty math degrees spout psychobabble at you (and at everyone else).
My point is that when "intelligent philosophers" try to make something out of nothing, people listen, and it does them a disservice to listen.
What I said is probably a bit condescending, but I promise none of that was meant to be directed at you. It's directed at the guys that can't just admit they are warping the problem drastically to manufacture a debate where their favorite topics are relevant.
I think I did that. With the experiment defined so that the Small Envelope amount is a random variable from some legitimate distribution (possibly a point mass) and the Large Envelope is the random variable 2*(Small Envelope), the two envelopes are then shuffled to yield two new random variables, First Envelope and Second Envelope, each having equal probability of equaling the Small Envelope. X = First Envelope, Y = Second Envelope.
Then (Y/X) is a random variable with
P( Y/X = 2) = P( Y/X = .5) = 1/2
So the expectation of a random variable with those probabilities is well defined and can be computed by,
E[ Y/X ] = 1/2(2) + 1/2(.5) = 1.25
That's simple and straightforward probability.
If A is in units of seconds and we wanted to rewrite A in units of minutes with 60 seconds to a minute we would divide A by 60 and write,
A seconds = A/60 minutes
So, generalizing to a random unit: if Y is in units of dollars and we want to rewrite Y in the random unit of X, with X dollars (a random number) to one unit of X, then we would divide Y dollars by X dollars and write,
Y dollars = Y/X random units of X.
Then compute E[Y/X] = 1.25 (random units of X) as above.
That makes sense to me but I suspect there's an established mathematical formalism for doing something like this. It vaguely reminds me of something like a change of measure but I was always very weak on that concept.
I'm satisfied enough. I don't doubt Jason could improve on this treatment significantly. I suspect if he wanted to he could provide the rigorous mathematical formalism to AB's whole concept of numeraire which AB says has never been done to his knowledge.
Anyway, I agree. Thanks, and back to Sleeping Beauty.
PairTheBoard
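For what it's worth, here is a short simulation sketch of this setup; the distribution chosen for the Small Envelope below is arbitrary, since the point is that E[Y/X] comes out near 1.25 no matter what it is, while E[Y] stays equal to E[X].

import random

def one_trial():
    small = random.choice([5, 10, 40])           # arbitrary "legitimate distribution" for the Small Envelope
    x, y = random.sample([small, 2 * small], 2)  # shuffle into First Envelope (X) and Second Envelope (Y)
    return x, y

N = 200_000
trials = [one_trial() for _ in range(N)]
print("E[Y/X] ~", sum(y / x for x, y in trials) / N)  # ~1.25 regardless of the distribution
print("E[X]   ~", sum(x for x, y in trials) / N)      # ...
print("E[Y]   ~", sum(y for x, y in trials) / N)      # equal to E[X]: switching gains no dollars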
See my last post. What are your thoughts on this:
Suppose a fair coin is flipped. If heads one Black ball is put in a bag. If tails two White balls are put in the bag. Let X be a random ball produced by one run of the experiment. What is the probability X is Black? I can easily describe a natural way to define a "random ball produced by the experiment" by letting the experiment run then randomly picking a ball from the bag. By that extension of the experiment the probability is 1/2 that the "random ball" is Black. I see no natural way to describe how one run of the experiment produces a "random ball" with 1/3 probability for Black.
PairTheBoard
It's pretty hard to recreate the whole amnesia thing with the ball experiment, but we can't just leave that part out.
Each interview is accompanied by a wager. Halfers agree that SB will lose money if she bets 1:1 on heads.
Halfers claim this is because she loses twice as much when wrong.
Given that interviews only happen w/ wagers, and wagers only happen w/ interviews, how can you possibly argue this is true without admitting that p(t | in interview) > p(h | in interview)?
I know this is going back, but I don't feel that anyone has successfully rebutted this proof.
Wouldn't SSA claim that she breaks even betting 1:1 on heads?
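A quick simulation sketch of this wager argument, assuming a $1 even-money bet on heads at every interview: she loses about a third of a dollar per interview, and tails shows up in about 2/3 of interviews.

import random

N = 100_000
profit = interviews = tails_interviews = 0
for _ in range(N):
    coin = random.choice(['h', 't'])
    for _ in range(1 if coin == 'h' else 2):    # heads: Monday only; tails: Monday and Tuesday
        interviews += 1
        tails_interviews += (coin == 't')
        profit += 1 if coin == 'h' else -1      # $1 bet on heads at even money, every interview
print("profit per interview    ~", profit / interviews)            # ~ -1/3
print("P(tails | in interview) ~", tails_interviews / interviews)  # ~ 2/3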
Absolutely not if I'm only interested in maximizing points or dollars. But what do the points buy? If they are scrip for an auction, and I have a choice between twice the amount everyone else has, or half of what everyone else has, with equal probability, then of course I'd be a fool not to switch since I can buy more stuff. In that case the proper numeraire is not points but (my points)/(their points).
Are you saying that it is correct to switch once you see $50,000 in the envelope?
What about 50,000 points? (Just in a toy game; they don't buy anything, but you try to maximize the amount.)
What I am saying is this: without knowing the underlying distribution of the amounts in the envelopes we can't say that the probability of the other envelope containing 2x our amount (after we see it) is 1/2 (or anything, for that matter). The only thing which we can do is to guess the distribution and then say what the probabilities are based on that guess. Whether those are points, dollars or anything else doesn't matter in principle; it may only influence our guesses (as points are a bit easier to give away than dollars, but still we can safely assume that the bigger the number the less probable it is in real-world settings).
If we denote our amount as X (before seeing it) we can construct a function of X which tells us the expected value in the other envelope. This function would look something like this (assuming only whole amounts in the envelopes):
For odd X: EV(other env) = 2X
For even X: EV(other env) = some function which gets smaller and smaller with increasing X, probably pretty close to 1.25X for small X's and less and less for bigger X's. It entirely depends on our guess about the process which was used to put the amounts in the envelopes.
If we try to calculate the expected value of the expected value in the other envelope then again we hit a wall without knowing the distribution. It might well be that this expected value is +infinity (if for example a process similar to the one here: http://plato.stanford.edu/entries/paradox-stpetersburg/ was used to determine the amount in the 1st envelope) or it might be anything else; again, without knowing the process we can't say anything meaningful about EV.
Do you disagree with anything I am saying here?
To give one example: assume we somehow know that our rich friend likes the numbers 5 and 10 and put one of them in the 1st envelope (he used a coin flip to choose between them and doubled it for the other). We choose one envelope; should we switch?
Well, our function determining the expected value in the other envelope based on our amount X is this:
if X = 5$ then EV(other env) = 10$
if X = 10$ then EV(other env) = 1/2*5$ + 1/2*20$ = 12.5$
if X = 20$ then EV(other env) = 10$
What is EV of EV ?
Well, it's 1/4*10$ + 1/2*12.5$ + 1/4*10$ = 11.25$
But what is EV in our envelope to begin with ?
1/4*5$ + 1/2*10$ + 1/4*20 = 1.25 + 5 + 5 = 11.25$
(Notice that we concluded from knowing the process which was used to put dollars in the envelopes that the probability distribution for X is: P(X = 5) = 1/4, P(X = 10) = 1/2, P(X = 20) = 1/4.)
So the "expected holding" in the other envelope is the same as the expected holding in our envelope. It doesn't matter if we switch. It also doesn't matter if we use dollars, points or whatever.
Such reasoning works for every distribution other than the ones which produce infinite expected value but then it's meaningless to state that EV in one is 1.25*EV in the other or to compare them in any other way.
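A small enumeration sketch of this 5-and-10 example, confirming the arithmetic: staying and switching both have EV 11.25, even though E[other/ours] is still 1.25.

from fractions import Fraction as F

# The friend flips for 5 or 10 in the 1st envelope and doubles it for the other;
# each pair is equally likely, and within a pair each envelope is ours with probability 1/2.
cases = [(F(1, 4), mine, other)
         for small in (5, 10)
         for mine, other in ((small, 2 * small), (2 * small, small))]

ev_stay   = sum(p * mine for p, mine, other in cases)
ev_switch = sum(p * other for p, mine, other in cases)
ev_ratio  = sum(p * F(other, mine) for p, mine, other in cases)
print(float(ev_stay), float(ev_switch), float(ev_ratio))   # 11.25 11.25 1.25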
Numeraire is a convenient concept that generalizes probability calculations broadly enough to defend both answers in the two paradoxes under discussion. It also is tremendously useful for practical decision-making, and many common errors both by laypeople and quants can be traced to ignoring numeraire.
I don't claim numeraire is the only or best way to generalize probability theory, and no one has embedded it in a rigorous formulation. I hope someday someone will, or perhaps will come up with something better in the attempt.
(But it is still interesting and nontrivial to wonder what properties are preserved by change of numeraire, notably no-arbitrage properties (math paper))
For the record:
"If you represent the amount in your envelope as X, you know the other envelope has a 50% chance of holding X/2 and a 50% chance of holding 2X. So the expected holding is 1.25*X."
"Therefore, if X is your numeraire, you increase your expected value by switching."
"You are offered the chance to instead take the point total from the other envelope. You should do it, because half the time you'll approximately double your goods from the auction and the other half you'll cut it approximately in half, so on average you'll wind up with more goods."
are all wrong.
They all come from the same basic error, which is saying that if the amount in our envelope is x then P(there is 2x in the other envelope) = 1/2.
Does anybody still defend that reasoning? I can see that many posts were made with that mistake, so I don't think there is any reason to reply to them unless there are still some people who would like to defend that argument.
If I understand you, you would be describing a different experiment where, waking on Heads-Tuesday, Beauty is still in the New Experiment but not in the Old part of the New Experiment. I don't think this changes anything any more than if you added another 365 days outside the Old experiment for heads and 573 days outside the Old experiment for tails. When she awakens within the Old part of whatever New experiment you cook up, she reasons as before. After all, in the original Old Experiment, when she awoke she also knew she was not waking outside the Old Experiment.
PairTheBoard
I think I wasn't clear. Let me rephrase.
SB wakes up, realises it's Tuesday and no one is asking them whether it was heads or tails. Therefore SB can say with certainty that the coin landed on Heads. Information has been acquired here.
The opposite to this is SB waking up and realising they are still within the experiment. Again information has been acquired, and the info of 'I am in the experiment' suggests that heads only has 1/3 likelihood.
No they are not. Only the first one is wrong. It is not true that half the time the other will contain 2X and half the time the other will contain X/2. That's a fallacy because the other only contains 2X when X is the smaller value, and the other contains X/2 when X is the larger value, so we can't combine those in an equation to compute the EV. That's the error-by-equivocation resolution of the paradox. But the other 2 statements don't depend on this. It is still true that we switch when X is the numeraire because
E(Y/X) = 1.25
This is trivial and no one disputes this except you. I'll repeat PairTheBoard's proof, but I'm getting tired of repeating stuff.
P(Y/X = 2) = 0.5
P(Y/X = 1/2) = 0.5
E(Y/X) = 0.5*2 + 0.5*(1/2) = 1.25
We don't need to know anything about the distribution to make this correct.
Now if you want to highlight wrong statements, pick this one:
E(Y) = 1.25*X
That was wrong, and it came from the fallacious argument.
In the auction example, we would want to switch because half the time we double our points (scrip) and half the time we halve it, so we end up being able to buy 1.25 times more goods on average. That's because we don't want to maximize our points, we want to maximize the ratio of our points to everyone else's points, which is the same as saying that everyone else's points, which is the same as the points that we started with, is the numeraire.
But that's the point. When SB is having an awakening she is not having 2 awakenings. She is having one "random awakening" with unknown values for {h,t} and {M,T}. I think the example is analogous. You can't describe how such a "random awakening" is produced in the Sleeping Beauty experiment with 1/3 probability for (h,M) any more than you can describe how a "random ball" could be produced in the example with 1/3 probability for Black. That's not to say it can't be done. But in the 3-balls example it's clear the most natural way that comes to mind gives probability 1/2 for Black. That's essentially the same natural method for producing a "random awakening" for SB giving probability 1/2 for (h,M).
PairTheBoard
All you are saying is "Heads = white ball, tails = black" The black comes from a bigger pile which is irrelevant.
Now you might think this is analogous with SB. But if we bet on this, 1/2er logic would hold, unlike in SB. Also if we iterated the experiment, 1/2er logic would hold, unlike in SB.
I think a better analogy would be:
Coin is flipped. If heads I'm going to throw a white ball at your head so hard that your memory of the past 10 minutes is erased, but the pain will linger for a couple minutes. If tails, I'm going to do that twice within 10 minutes with black balls. Every time you are hit on the head, I'm going to ask you, "Is it more likely than 50% that I just hit you with a black ball?"
Clearly the answer is yes.
If you are betting, 1/3 logic works. If you iterate the experiment, 1/3 logic works.
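A quick sketch of this version: counting over the hits themselves, about 2/3 of them are black, which is the sense in which the answer is yes.

import random

hits = []
for _ in range(100_000):
    if random.random() < 0.5:
        hits.append('white')             # heads: one white-ball hit
    else:
        hits.extend(['black', 'black'])  # tails: two black-ball hits
print(hits.count('black') / len(hits))   # ~2/3, so "more likely than 50%" is right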
I think I wasn't clear. Let me rephrase.
SB wakes up, realises it's Tuesday and no one is asking them whether it was heads or tails. Therefore SB can say with certainty that the coin landed on Heads. Information has been acquired here.
The opposite to this is SB waking up and realising they are still within the experiment. Again information has been acquired, and the info of 'I am in the experiment' suggests that heads only has 1/3 likelihood.
In the Original the question is, "While IN the experiment, does SB learn something from awaking such that it's possible for her, While IN the experiment, to learn the negation of that thing?"
No. She learns she has awoken within the experiment and, while IN the experiment, she cannot learn that she has not awoken within the experiment.
Now you define a New Experiment. You now have two different propositions. She can learn she has awoken within the New Experiment. And she can learn she has awoken within the Old Part of the New Experiment. It remains the case in the New Experiment that, while in the New Experiment, when she awakens she learns she has awoken within the New Experiment and, while in the New Experiment, it is not possible for her to learn that she has not awoken within the New Experiment.
Now to relate the New Experiment to the Original problem you need to restrict your view of it to her experience while in the Old Part of the New Experiment. When doing that it's also still the case that, while in the Old Part, when she awakens and learns she has awoken within the Old Part, it is not possible for her, while in the Old Part, to learn that she has not awoken within the Old Part.
Now it's true in the New Experiment that when she awakens in the Old Part she learns something we didn't have before. She learns she has not awoken within the New Part of the experiment. But once again, to relate to the Original Problem we have to restrict our view to her experience while in the Old Part of the New Experiment. And if we do that and look at this new thing she learns we see that, while in the Old Part, she cannot learn its negation. While in the Old Part, when she awakens and learns she has not awoken within the New Part, it is NOT possible for her, while in the Old Part, to learn that she has not "not awoken within the New Part". And we also see that this new thing she learns is not really new. Learning she has not awoken within the New Part is equivalent to learning she has awoken within the Old Part. While in the Old Part she could not learn the negation of that in the Original, and while in the Old Part she cannot learn the negation of that in the New Experiment.
It's a tricky dance of logic. I think I'll sit the next one out.
PairTheBoard
Thanks for taking the time to explain and for all the bolding.
I think I shall give up on this. It's clear people more intelligent than me have a different view, but alas alas I cannot seem to change mine.
This is a horrible, horrible approach to math. The problem isn't a paradox. The answer is trivially easy. To not acknowledge that is a crime against maths. That's fine that you wish the problem had more depth, but it doesn't. Poor guys like Punter and ActionJeff are going to walk away from this thread with a worse understanding of math than they had before.
Given that the 1/3 answer is clearly correct, you then reason that the 1/2 must be incorrect and that arguments in favor of it are not opportunities to explore hidden assumptions in your reasoning, but errors. Instead of wasting time thinking about why they might be correct, the productive thing is to lay down principles that exclude them, so they don't creep into more complicated chains of reasoning.
I think this leads you to, with all due respect, fanatical positions like "crime against maths" and poor people being misled.
I start from the position that both arguments have some force, so there must be a hidden assumption separating them. I understand one version of the assumption is natural and more useful in general, and one is convoluted merely to force the other answer. You pour scorn on the convoluted assumption as if I'm a student trying to get credit for a clearly wrong answer. And you keep repeating the arguments that support your side. Making your side stronger only undercuts the other side if there is only one correct answer. I don't misunderstand or downplay the strength of the argument for 1/3, I just think there is a reason that 1/2 is appealing as well.
If you would walk with me this far, and say that the assumption that justifies 1/2 is too twisted to have relevance to any practical problem, I won't argue. I consider that basically a matter of opinion. If you say the assumption that justifies 1/2 is ruled out by the statement of the problem, again I see no point in debating it. There are two separate problems, one with a clear answer of 1/3 and one that admits two answers.
Where we part company is if you say there is no assumption that justifies 1/2. That's saying more than that 1/3 must be right, I agree 1/3 must be right. I just think 1/2 can be right as well.
Fair enough Aaron. Thanks for the thoughtful response.
Do you think there's any way the 1/2 answer can be incorporated into a demonstrably +ev response?
You gave the example of a reward with instant gratification, in which case she would not care about future/past awakenings.
To me ice cream is the best example of this. If she were rewarded for correct guesses of the coinflip with ice cream, she would still pick tails and have more ice cream for it.
I'm trying to use the example you gave, but perhaps I'm misinterpreting it.
Also PairTheBoard: Do you think both answers can be correct, or do you think only 1/2 works?
I didn't read beyond this. The experiment is: When heads 1 black ball goes in the empty bag. When tails 2 white balls go in the empty bag.
The question is, If X is a "random ball produced by the experiment" what is the probability X is a Black ball?
I don't want to wade through the confusion caused by not keeping the details straight.
PairTheBoard
I can show your problem is not analogous to OP. If SB wagers money on heads, she loses. If she wagers on white in your experiment, she breaks even.
Furthermore, if we iterate SB, tails is clearly more likely. If we iterate your example, it's 50/50.
To have a scenario be analogous, these disparities can't exist.
I think a better analogy would be:
Coin is flipped. If heads I'm going to throw a white ball at your head so hard that your memory of the past 10 minutes is erased, but the pain will linger for a couple minutes. If tails, I'm going to do that twice within 10 minutes with black balls. Every time you are hit on the head, I'm going to ask you, "Is it more likely than 50% that I just hit you with a black ball?"
Clearly the answer is yes.
If you are betting, 1/3 logic works. If you iterate the experiment, 1/3 logic works.
This sums up our disagreement. I think you start from the assumption that there must be a single correct probability for SB. That's common sense, but so is the assumption that there must be a single measure of time or mass that applies to all observers.
Given that the 1/3 answer is clearly correct, you then reason that the 1/2 must be incorrect and that arguments in favor of it are not opportunities to explore hidden assumptions in your reasoning, but errors. Instead of wasting time thinking about why they might be correct, the productive thing is to lay down principles that exclude them, so they don't creep into more complicated chains of reasoning.
I think this leads you to, with all due respect, fanatical positions like "crime against maths" and poor people being misled.
I start from the position that both arguments have some force, so there must be a hidden assumption separating them. I understand one version of the assumption is natural and more useful in general, and one is convoluted merely to force the other answer. You pour scorn on the convoluted assumption as if I'm a student trying to get credit for a clearly wrong answer. And you keep repeating the arguments that support your side. Making your side stronger only undercuts the other side if there is only one correct answer. I don't misunderstand or downplay the strength of the argument for 1/3, I just think there is a reason that 1/2 is appealing as well.
If you would walk with me this far, and say that the assumption that justifies 1/2 is too twisted to have relevance to any practical problem, I won't argue. I consider that basically a matter of opinion. If you say the assumption that justifies 1/2 is ruled out by the statement of the problem, again I see no point in debating it. There are two separate problems, one with a clear answer of 1/3 and one that admits two answers.
Where we part company is if you say there is no assumption that justifies 1/2. That's saying more than that 1/3 must be right, I agree 1/3 must be right. I just think 1/2 can be right as well.
Originally Posted by PairTheBoard
"I didn't read beyond this. The experiment is: Whan heads 1 white ball goes in the empty bag. When tails 2 black balls go in the empty bag.
The question is, If X is a "random ball produced by the experiment" what is the probability X is a White Ball?
I don't want to wade through the confusion caused by not keeping the details straight."
ok, I give up. You seem to insist Heads be 1 White Ball and Tails be 2 Black Balls, so I've changed the experiment as shown in bold above to suit you.
You still don't get the idea here. In my experiment, nowhere do I define what it actually means to be a "random ball produced by the experiment." That can be done by physically extending the experiment by following it with picking a random ball from the bag. But that's an extension not included in the actual experiment. There is nothing stopping you from just declaring that it makes the most sense to define a "random ball produced by the experiment" as having probability 1/3 for White and 2/3 for Black.
You might even justify that definition by pointing out that IF the experiment is extended by giving you $1 for each ball produced then the 1/3-2/3 probabilities are best for EV calculations. Or you could justify 1/3-2/3 by extending the experiment with throwing each ball produced at my head giving me short term pain and amnesia. Thus painfully waking me up to the fact that half the time there are indeed 2 black balls in the bag and after being hit by a "Random Ball produced by the experiment" I should think it more likely to have been a Black one.
So the probabilities you assign to a "Random Ball produced by the experiment" depend on how you decide the experiment ought to be extended in order to define those probabilities. The thing is, you are begging the question if you say, "the probabilities must be what I say because this is how I say the experiment ought to be extended to justify the probabilities I claim." The extension you give is just another way of saying the assertion of probabilities you claim.
My contention is that's the same case with Sleeping Beauty. I don't agree with AB, however, that the 1/2 assumptions are more convoluted in Sleeping Beauty, any more than the 1/2 assumptions are more convoluted in my Random Ball example.
I think this covers it for me, and also addresses your post #608.
PairTheBoard
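For concreteness, a short sketch of the two extensions side by side, using the swapped colors from the quoted version: drawing one ball from the bag per run gives White half the time, while pooling every ball produced (which is effectively what the $1-per-ball and ball-at-the-head extensions do) gives White a third of the time.

import random

def run():
    return ['W'] if random.random() < 0.5 else ['B', 'B']  # heads: one White ball; tails: two Black balls

N = 100_000
draw_one = sum(random.choice(run()) == 'W' for _ in range(N)) / N   # extend by drawing one ball from the bag
pooled = [ball for _ in range(N) for ball in run()]                 # extend by counting every ball produced
print("draw one from the bag: P(White) ~", draw_one)                       # ~1/2
print("per ball produced:     P(White) ~", pooled.count('W') / len(pooled))  # ~1/3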
"I didn't read beyond this. The experiment is: Whan heads 1 white ball goes in the empty bag. When tails 2 black balls go in the empty bag.
The question is, If X is a "random ball produced by the experiment" what is the probability X is a White Ball?
I don't want to wade through the confusion caused by not keeping the details straight."
The rest of my post applies directly even though I did misread your hypothetical.
I can show your problem is not analogous to OP. If SB wagers money on heads, she loses. If she wagers on white in your experiment, she breaks even.
Furthermore, if we iterate SB, tails is clearly more likely. If we iterate your example, it's 50/50.
To have a scenario be analogous, these disparities can't exist.
... so we end up being able to buy 1.25 times more goods on average. That's because we don't want to maximize our points, we want to maximize the ratio of our points to everyone else's points, which is the same as saying that everyone else's points, which is the same as the points that we started with, is the numeraire.
You only need to guess a distribution if you want to make statements about the EV of the absolute amount of points/dollars in the other envelope.
I think this is a bit different problem than the original 2-envelope one though, and it's a big stretch to say that the numeraire idea "solves" it.
This is a horrible, horrible approach to math. The problem isn't a paradox. The answer is trivially easy. To not acknowledge that is a crime against maths. That's fine that you wish the problem had more depth, but it doesn't. Poor guys like Punter and ActionJeff are going to walk away from this thread with a worse understanding of math than they had before.
I have quite good math education myself (at least for people who don't have a math degree) as well as several friends who are mathematicians. I am yet to find someone convinced by the 1/3'ers arguments in this thread in that group, but I will keep polling.
Did you find anybody with a solid math background who would agree with you on this one, btw?
I am not saying that this problem as formulated has a clear solution, and maybe it needs to be formulated in a better way to be meaningful. All I am saying is that all arguments for 1/3 in this thread are based on some pretty weak assumptions or a misunderstanding of the frequency argument.