Rationality in Newcomb's Paradox

08-28-2009 , 11:35 PM
Quote:
Originally Posted by Aaron W.
I think I'm with you now.



At these odds, no. The overlay for the bookie is huge and the historical trend is too strong.



50,000 trials is a long, long time. I would take shorter spans with betting limits, or with the ability to set a stop-loss. But yes, I would be willing to wager money on this.
Good. I think that shows that with 100,000 trials where the algorithm has exhibited a track record of 70% accuracy you would make the Assumption that the algorithm will continue within a tight range of accuracy around 70%. That's essentially what you are doing by being willing to bet if and only if you have appropriate odds based on the 70% figure.
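(An illustrative aside: the "appropriate odds" reasoning here is just fair-odds arithmetic on the 70% figure. The stakes and odds in this sketch are hypothetical, not from the thread.)

```python
def ev_of_bet(p_win, stake, payout_ratio):
    """Bettor's EV when backing an outcome that hits with probability p_win."""
    return p_win * stake * payout_ratio - (1 - p_win) * stake

# Fair odds on a 70% event are 3:7 -- the EV is exactly zero there...
assert abs(ev_of_bet(0.7, 100, 3 / 7)) < 1e-9
# ...and any better price makes the bet +EV, which is why the bet is acceptable
# "if and only if you have appropriate odds based on the 70% figure".
print(ev_of_bet(0.7, 100, 0.5))  # +EV: getting 1:2 on a 70% shot
```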

Now, that is the same Assumption that is at issue for you in the OP. At first you thought the OP specified the assumption and based on that assumption you analyzed it as "functionally equivalent" to the Predictor reading his prediction off of your actual decision. You still stick to that analysis if the Assumption is made for the OP. But you demur on making the assumption for the OP.

But in this realistic scenario, the weight of the statistics for 100,000 trials induces you to make the Assumption. You are willing to bet on it.

Now consider this "Postdictor" scenario for the 70% continued accuracy. Suppose when a player Accepts or Refuses, a Message to that effect is sent to a Postdictor. Except significant Noise is added to the Message, making it a Garbled Message which the Postdictor can only read with 70% accuracy. If the message says the player Accepted, the postdictor reads the Garbled Message as "Accepted" 70% of the time and "Refused" 30% of the time. If the message says the player Refused, the postdictor reads the Garbled Message as "Refused" 70% of the time and "Accepted" 30% of the time. The Postdictor deposits $1M in the bank for all Garbled messages he reads as "Refused".

Under the assumption you are willing to make - as indicated by your betting preferences - that the Algorithm will continue in a tight range of accuracy around 70%, I claim you are unable to distinguish the output of the Algorithm from the output of the Postdictor. To use your terminology, the Algorithm and the Postdictor's outputs are "functionally equivalent". Therefore, you are justified in calculating the EV for your decision to Accept or Refuse just as you would if you knew the $1M was the result of the Postdictor's output. That EV would be the same as I said the Bookie would calculate above. If the Postdictor were depositing the $1M you would clearly want to Refuse.
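(A quick Monte Carlo sketch of the Postdictor, using the $1K/$1M amounts and the 70% accuracy from the scenario; the simulation itself is my illustration, not part of the argument.)

```python
import random

def postdictor_trial(choice, accuracy=0.7):
    """One trial: garble the player's message; deposit $1M on a 'Refused' reading."""
    read_correctly = random.random() < accuracy
    reading = choice if read_correctly else (
        "Accepted" if choice == "Refused" else "Refused")
    deposit = 1_000_000 if reading == "Refused" else 0
    # An Accepting player also keeps the $1K.
    return deposit + (1_000 if choice == "Accepted" else 0)

random.seed(0)
N = 100_000
ev_refuse = sum(postdictor_trial("Refused") for _ in range(N)) / N
ev_accept = sum(postdictor_trial("Accepted") for _ in range(N)) / N
print(ev_refuse, ev_accept)  # roughly 700,000 vs 301,000: Refusing is far better
```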

It's the same argument you made for the OP under the Assumption that the Predictor continues to be extremely accurate. There is no difference in principle once the Assumption is made. And in the realistic scenario with 100,000 trials you are compelled to admit making the assumption when questioned about your betting preferences.


PairTheBoard
08-29-2009 , 01:10 AM
Quote:
Originally Posted by PairTheBoard
Good. I think that shows that with 100,000 trials where the algorithm has exhibited a track record of 70% accuracy you would make the Assumption that the algorithm will continue within a tight range of accuracy around 70%. That's essentially what you are doing by being willing to bet if and only if you have appropriate odds based on the 70% figure.

Now, that is the same Assumption that is at issue for you in the OP. At first you thought the OP specified the assumption and based on that assumption you analyzed it as "functionally equivalent" to the Predictor reading his prediction off of your actual decision. You still stick to that analysis if the Assumption is made for the OP. But you demur on making the assumption for the OP.

But in this realistic scenario, the weight of the statistics for 100,000 trials induces you to make the Assumption. You are willing to bet on it.

Now consider this "Postdictor" scenario for the 70% continued accuracy. Suppose when a player Accepts or Refuses, a Message to that effect is sent to a Postdictor. Except significant Noise is added to the Message, making it a Garbled Message which the Postdictor can only read with 70% accuracy. If the message says the player Accepted, the postdictor reads the Garbled Message as "Accepted" 70% of the time and "Refused" 30% of the time. If the message says the player Refused, the postdictor reads the Garbled Message as "Refused" 70% of the time and "Accepted" 30% of the time. The Postdictor deposits $1M in the bank for all Garbled messages he reads as "Refused".

Under the assumption you are willing to make - as indicated by your betting preferences - that the Algorithm will continue in a tight range of accuracy around 70%, I claim you are unable to distinguish the output of the Algorithm from the output of the Postdictor. To use your terminology, the Algorithm and the Postdictor's outputs are "functionally equivalent". Therefore, you are justified in calculating the EV for your decision to Accept or Refuse just as you would if you knew the $1M was the result of the Postdictor's output. That EV would be the same as I said the Bookie would calculate above. If the Postdictor were depositing the $1M you would clearly want to Refuse.

It's the same argument you made for the OP under the Assumption that the Predictor continues to be extremely accurate. There is no difference in principle once the Assumption is made. And in the realistic scenario with 100,000 trials you are compelled to admit making the assumption when questioned about your betting preferences.


PairTheBoard
I agree that the postdictor is functionally equivalent to the original scenario. In fact, the postdictor is essentially the same thing as the cheater (if you just decrease the noise in the system). And so it's all the same, and you can't tell which game is actually being played.

However, I don't agree that the bookie is really playing the same game. It is the nature of being a bookie that you set up the lines based on predictive models. This is what I *THINK* the odds should be. I can run a deep statistical analysis to try to close in on the precise odds, but all of this is just guesswork. None of the odds you set actually impact anything that happens. Once you accept the wagers, you just sit back and watch the game. For all you know, the predictor has a massive cold and is drunk on Nyquil and will only be 10% right over the next 50,000 hands. If this happens, you lose (but you can still convince yourself that you probably set a good line and got unlucky because of the cold).

So while I'm willing to set up as a bookie based on the evidence, this is not what is being asked of the participant of the game itself. Let me set up a few conversations between the announcer (PTB) and the player (AW):

Conversation 1:

Quote:
PTB: Here is $1K. But there's a catch. Last week, some rich guy named durka made a prediction. If he predicted that you would give me back the $1K, he secretly put $1M in your bank account. But if he predicted that you would take the $1K, he didn't. He was right 70% of the time in his predictions. Nothing you do now will influence where the money is.

AW: So it doesn't matter whether I take the money or give it back, you're telling me that the decision about the $1M has already been set.

PTB: Yes.

AW: You're absolutely sure about this?

PTB: Yes.

AW: I'll keep the $1K.
Conversation 2:

Quote:
PTB: Here is $1K. You have two options. You can wager $1K to win $1M or you can keep the $1K and freeroll for $1M. In the past, 70% of the people who wagered the $1K won the $1M, and 30% of the people who freerolled won the $1M.

AW: In the past... what are the odds right now?

PTB: I don't know, but that's how it has gone for the last 50,000 people.

AW: You can't give me the odds THIS TIME?

PTB: No, but 50,000 people is a lot of people.

AW: I'll keep the $1K.
Conversation 3:

Quote:
PTB: Here is $1K. You have two options. You can wager $1K to win $1M or you can keep the $1K and freeroll for $1M. In the past, 70% of the people who wagered the $1K won the $1M, and 30% of the people who freerolled won the $1M.

AW: In the past... what are the odds right now?

PTB: I don't know, but that's how it has gone for the last 50,000 people.

AW: It seems like a good wager. Here's the $1K.
Conversations 2 and 3 look at the game from a gambler's perspective. The player is making a choice between two games, and the choice DEFINITELY affects the outcome. He either enters into the $1K wager, or he freerolls. In other words, there's a causal link between the decision to keep the money or leave it and the $1M in the bank, and the player is making his decision based on whether he trusts the historical trend. That's why both endings are plausible.

Conversation 1 is different because the causal link has been severed. This is why the conclusion is so clear.

In each of the three scenarios, you're setting up a different belief upon which the decision is made:
* I believe that my decision to take the $1K does not influence the outcome of the $1M
* I don't believe there's sufficient reason to trust the historical trend
* I believe there's sufficient reason to trust the historical trend

The bookie scenario is based on belief #3 and cannot ever be based on belief #1 (or anything remotely like it). You're setting yourself up for something completely different from the very beginning. This is why the bookie scenario is not the same game as the player plays.

(BTW - The player loses in the cheater scenario if he believes #1 is true. But who says that the belief corresponds to the reality of the situation? The logical choice could well be "wrong" in the sense of not winning the most money because it is founded upon an assumption that turned out to be false.)

Last edited by Aaron W.; 08-29-2009 at 01:12 AM. Reason: I sound schizo in the bookie paragraph. I wrote it twice, once with me being the bookie and once with "you". I won't fix it
08-29-2009 , 08:32 AM
I'm afraid I'm not finding the consistency of your arguments here very satisfying. I think there's something very ambiguous going on with the meaning of the word "probability" when the probability of a correct output (accuracy) is assumed for either the 70%-Algorithm or the 99.9%-Predictor.

You are willing to assume a "probability" of 70% for a correct Algorithm output for Betting Purposes. You agree you cannot distinguish the Algorithm's output from that of the Postdictor (or Garbled-Cheater if you prefer). You agree those two outputs are therefore "functionally" equivalent. But you no longer accept the conclusions you've previously drawn for the Player under those conditions. Your objection, as I see it, is basically that assuming the "probability" for betting purposes is not the same as assuming the "probability". Yet by any meaning of the word "probability" I know of, it is basically defined by how it relates to "betting purposes".

PairTheBoard
08-29-2009 , 11:54 AM
Consider a gigantic jar of coins, filled with some number of coins with both sides heads, and some equal number with both sides tails. The player selects a coin randomly (somehow), and the bookie is willing to make a bet if the player lays 1.05:1. Neither person knows if it's a H/H or T/T coin before the bet or the flip. Over a large sample of bets, the bookie will win (with high probability), and the H/T ratio will be near 1 (again, high probability). Over the next large sample of bets, the same will be true. To somebody who never examines the physical coins, the results are indistinguishable from flipping a fair coin every time.

So, applied to exactly 1 trial, "the next one", what does this mean? The player selects a coin. An omniscient being knows that the probability is either 100% heads or 100% tails, depending on which coin was selected. The bookie and the player don't. They think it's 50/50 even though the result of the flip has already been determined. Now, if the jar is filled with fair coins, then the omniscient being and the bookie/player would agree that it's 50/50 (assuming some truly random flipping method exists, etc.) What does this tell us? That we can't distinguish the two scenarios based on past performance, and that it is clearly not correct to state an objective, absolute probability of 50% FOR ONE TRIAL, even based on an arbitrarily large number of trials that show 50% past performance. If you were willing to state that with certainty, and wager on it, somebody who sees the coin being selected would bankrupt you. You can't claim an absolute probability if you're exploitable based on that belief.
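(To make the indistinguishability concrete, here's a small simulation of the two jars; the mechanics below are my sketch of the setup described above, not anything stated in the thread.)

```python
import random

def flip_from_jar(jar_type, flips_per_coin=1):
    """Draw one coin from the jar and flip it flips_per_coin times."""
    if jar_type == "two_sided":
        # Equal numbers of HH and TT coins: the draw fixes every future flip.
        face = random.choice(["H", "T"])
        return [face] * flips_per_coin
    # A jar of fair coins: each flip is independent.
    return [random.choice(["H", "T"]) for _ in range(flips_per_coin)]

random.seed(1)
N = 100_000
# One flip per drawn coin: the two jars look identical in aggregate (~50% heads).
freq_two_sided = sum(flip_from_jar("two_sided")[0] == "H" for _ in range(N)) / N
freq_fair = sum(flip_from_jar("fair")[0] == "H" for _ in range(N)) / N
print(freq_two_sided, freq_fair)
# Many flips of a SINGLE drawn coin: the jars come apart.
print(flip_from_jar("two_sided", 8))  # always all 'H' or all 'T'
print(flip_from_jar("fair", 8))       # typically a mix
```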
08-29-2009 , 12:01 PM
What's your point? There's no point here.
08-29-2009 , 12:19 PM
PtB and AaronW will get it. I'm beyond caring about you.
08-29-2009 , 12:27 PM
Quote:
Originally Posted by Concerto
My position is that free will by definition can defeat any prediction scheme that is based only on the measurable determinants of choice. Put another way, the prediction and your choice can have the same causes yet still differ in outcome, because free will must have a component independent of what precedes it or be a meaningless concept.
Quote:
Originally Posted by madnak
Yet another reason why free will is a meaningless concept.
Or maybe it means the future is not completely determined by the past, and consciousness participates in this indeterminacy.
08-29-2009 , 02:22 PM
Quote:
Originally Posted by PairTheBoard
I'm afraid I'm not finding the consistency of your arguments here very satisfying. I think there's something very ambiguous going on with the meaning of the word "probability" when the probability of a correct output (accuracy) is assumed for either the 70%-Algorithm or the 99.9%-Predictor.
The "assumed" probability is irrelevant. The conversations I laid out would hold true regardless of the values whether they're 70% or 99.9%. The difference is merely emotional.

The two concepts here are "probability" and "causality." In each of the three conversations, AW is presented with a probability. In the first conversation, he has been given a probability without causality. The choice is independent of the outcome, even though the outcome is stated in probabilistic terms.

In the last two conversations, AW has a probability WITH causality. He *MUST* make a decision between two wagers, and the choice of the wager affects the outcome of the situation.

I grant that these are all functionally equivalent. And by this, I mean that you cannot compute anything about these scenarios that would make one different from another. For all the possible choices, the probabilities of the outcomes are the same. What is different is the BELIEF about the nature of the decision.

The BELIEF of non-causality leads one to always take the money. The BELIEF of causality leads one to conjecture about the future probabilities based on the history. The bookie is always working in the latter case (future probabilities) but the player is working in the former (non-causality).

In the end, it doesn't matter whether this is a legitimate predictor, postdictor, computer simulation, cheater, or whatever. The decision is based on the BELIEFS of the person making the decision at the point of the decision.

Edit: The actual probabilities of these events are unknown, which is why it is required to insert a BELIEF about the nature of the system in order to draw a conclusion. The BELIEF may or may not be valid (which is where the functional equivalence comes into play).
08-29-2009 , 03:18 PM
Quote:
Originally Posted by TomCowley
Consider a gigantic jar of coins, filled with some number of coins with both sides heads, and some equal number with both sides tails. The player selects a coin randomly (somehow), and the bookie is willing to make a bet if the player lays 1.05:1. Neither person knows if it's a H/H or T/T coin before the bet or the flip. Over a large sample of bets, the bookie will win (with high probability), and the H/T ratio will be near 1 (again, high probability). Over the next large sample of bets, the same will be true. To somebody who never examines the physical coins, the results are indistinguishable from flipping a fair coin every time.

So, applied to exactly 1 trial, "the next one", what does this mean? The player selects a coin. An omniscient being knows that the probability is either 100% heads or 100% tails, depending on which coin was selected. The bookie and the player don't. They think it's 50/50 even though the result of the flip has already been determined. Now, if the jar is filled with fair coins, then the omniscient being and the bookie/player would agree that it's 50/50 (assuming some truly random flipping method exists, etc.) What does this tell us? That we can't distinguish the two scenarios based on past performance, and that it is clearly not correct to state an objective, absolute probability of 50% FOR ONE TRIAL, even based on an arbitrarily large number of trials that show 50% past performance. If you were willing to state that with certainty, and wager on it, somebody who sees the coin being selected would bankrupt you. You can't claim an absolute probability if you're exploitable based on that belief.
I don't think you need to go through all that. Just flip a fair coin. It is either heads with 100% probability or tails with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50-50. The same with a shuffled deck of cards. The top card is either red with 100% probability or black with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50% for red and 50% for black.

What do we mean when we assume the probability is 50% in these cases? From the frequentist interpretation we mean that if we exactly repeat the experiment a large number of times we will observe Heads with a frequency that approaches 50%. Or from a Bayesian interpretation we mean that we have 50% confidence in the belief or credence that this particular coin is resting with Heads up. Both interpretations are subject to verification by way of odds demanded to bet on it.

I'm interested in examples like this which you can think of to shed light on what we mean by assuming the probability of the 99.9%-Predictor or 70%-Algorithms's output being correct. But in this case I don't see how you tie it in.



PairTheBoard
08-29-2009 , 03:22 PM
What about the decay of an atom? Is that 100% to be what it is?
08-29-2009 , 03:44 PM
Quote:
Originally Posted by Aaron W.
The "assumed" probability is irrelevant. The conversations I laid out would hold true regardless of the values whether they're 70% or 99.9%. The difference is merely emotional.

The two concepts here are "probability" and "causality." In each of the three conversations, AW is presented with a probability. In the first conversation, he has been given a probability without causality. The choice is independent of the outcome, even though the outcome is stated in probabilistic terms.

In the last two conversations, AW has a probability WITH causality. He *MUST* make a decision between two wagers, and the choice of the wager affects the outcome of the situation.

I grant that these are all functionally equivalent. And by this, I mean that you cannot compute anything about these scenarios that would make one different from another. For all the possible choices, the probabilities of the outcomes are the same. What is different is the BELIEF about the nature of the decision.

The BELIEF of non-causality leads one to always take the money. The BELIEF of causality leads one to conjecture about the future probabilities based on the history. The bookie is always working in the latter case (future probabilities) but the player is working in the former (non-causality).

In the end, it doesn't matter whether this is a legitimate predictor, postdictor, computer simulation, cheater, or whatever. The decision is based on the BELIEFS of the person making the decision at the point of the decision.

Edit: The actual probabilities of these events are unknown, which is why it is required to insert a BELIEF about the nature of the system in order to draw a conclusion. The BELIEF may or may not be valid (which is where the functional equivalence comes into play).
It looks like you let your emotions get the best of you with your analysis in post #235. You went from an assumption of 99.9% accuracy (per the 999/1000 track record) to an assumption of 1-epsilon accuracy to an assumption of "essentially perfect" accuracy to the assumption that "the predictor is always correct". Your previous argument in favor of Refusal depends on an unjustified sliding up to predictor infallibility. You can't get away with that in the 70% case, and now you admit the 99.9% case is no different in principle than the 70% case. So your comments and logic below, justifying Refusal, really don't apply to Newcomb's Problem as presented in the OP. Do they?

Arguing for Refusal as you did before depends on assuming infallibility of the Predictor.




From Post #235
Quote:
Originally Posted by Aaron W.
I am still of the mindset that the paradox is understood by simply realizing that the claim that the predictor's prediction is "independent" of your choice is inconsistent. It was already pointed out that it is not possible to distinguish between a true predictor and a cheating one if you take the underlying assumption that the predictor is essentially perfect. Therefore, this is functionally equivalent to playing against a cheater.

If you define "winning" the game to be getting both amounts of money, then the assumption that the predictor is always correct is equivalent to assuming you can't win. And since you can't win, you should lose in a way that nets you the most money, by rejecting the $1000. If you change your strategy so that you try to win by taking the money, you lose because it has already been assumed you can't win.

In other words, if you assume that you can't win, then it's obvious that you can't win.

PairTheBoard
08-29-2009 , 03:54 PM
Quote:
Originally Posted by durkadurka33
What about the decay of an atom? Is that 100% to be what it is?
You should provide some indication of who you're responding to. A minimal quote is nice.

If you're talking to me: After the coin is flipped, remaining covered, it is either heads or tails. It is what it is. After the decay of the atom has taken place, but before we are informed of the timing of the decay, the timing was what it was. However, from the perspective of not knowing, we would still model its probability as if it hadn't happened yet.


PairTheBoard
08-29-2009 , 04:15 PM
Quote:
Originally Posted by PairTheBoard
I don't think you need to go through all that. Just flip a fair coin. It is either heads with 100% probability or tails with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50-50. The same with a shuffled deck of cards. The top card is either red with 100% probability or black with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50% for red and 50% for black.

What do we mean when we assume the probability is 50% in these cases? From the frequentist interpretation we mean that if we exactly repeat the experiment a large number of times we will observe Heads with a frequency that approaches 50%. Or from a Bayesian interpretation we mean that we have 50% confidence in the belief or credence that this particular coin is resting with Heads up. Both interpretations are subject to verification by way of odds demanded to bet on it.

I'm interested in examples like this which you can think of to shed light on what we mean by assuming the probability of the 99.9%-Predictor or 70%-Algorithms's output being correct. But in this case I don't see how you tie it in.



PairTheBoard
This was the reason I picked an example with a couple of layers. In this case, even though the bookie, getting 1.05:1, is willing to bet ONE TRIAL after the coin is picked as though it's 50%, he knows, with absolute certainty, that the frequentist position of flipping that coin a large number of times will not converge to 50%. It can only be HHHHHH or TTTTTT. So when you say the probability is 50%, and wager on it, you simultaneously know that this statement cannot be true in the frequentist sense, and doesn't represent any kind of objective reality. And there's nothing wrong with any of this.

Before the coin is selected, the 50% statement is sound under the frequentist position and is objectively true (assuming the stipulated ability to choose a HH/TT coin with equal likelihood). Once the coin is selected, the bookie's personal probability doesn't change because he doesn't learn anything- he would still bet the same way with somebody who has identical information- but his statement is no longer objectively true, and he knows it. He would obviously not bet with somebody who could know what the coin is. He knows that his information is now insufficient to state what objective reality is.

In Newcomb's, assuming all the handwavy BS that durka thought was explicit in the problem, you have the same scenario. Before anything, you would bet as though the predictor is 99% (this may or may not be objectively true). Once he makes his choice, before you know what you're going to do (you really are "deliberating") your personal probabilities are that P(1m/refuse) = 99*P(1m/take), and that P(0/take)=99*P(0/refuse), and you would still wager accordingly with somebody with the same info. However, this statement is no longer objectively true, and it fails the frequentist definition. In this case, the frequentist interpretation will show that P(1m/take) + P(1m/refuse) = 1 or that P(1m/take) + P(1m/refuse) = 0, just like it showed HHHHHHHH or TTTTTTTT before.
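(The 99:1 relations in the paragraph above just restate the assumed accuracy; a one-line sketch, purely illustrative:)

```python
def p_million_given(refused, accuracy=0.99):
    """P($1M | your choice), when the prediction matches the choice with this accuracy."""
    return accuracy if refused else 1 - accuracy

# P(1m/refuse) = 99 * P(1m/take), and symmetrically for the empty account.
assert abs(p_million_given(True) - 99 * p_million_given(False)) < 1e-12
```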

If you accept the stipulation that the money was placed or not placed a week ago for your trial (this really is explicitly stipulated... he could have been cheating before, but he's said not to be cheating in your trial), then for either one of these possibilities, your payout is better if you take. You cannot objectively state the probability of either possibility, or your actual EV, but you can objectively state that for either P(1m)=0 or P(1m)=1, taking the 1000 is preferable.
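(The "for either possibility, taking is better" step is a plain dominance check; the amounts are the $1K/$1M from the thread, the code is my sketch:)

```python
def payoff(take, million_placed):
    """Payout once the deposit decision is already fixed, per the stipulation."""
    return (1_000 if take else 0) + (1_000_000 if million_placed else 0)

# Whatever the (already fixed) state of the bank account, taking nets $1,000 more.
for placed in (False, True):
    assert payoff(True, placed) == payoff(False, placed) + 1_000
print("taking dominates refusing by $1,000 in both states")
```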

Once you make your decision, you would still be willing to wager based on 99% accuracy, with somebody with the same information, but this statement is also not objectively true, and fails the frequentist definition (in this case, you just look in the bank account over and over and get 1m1m1m1m1m1m1m or 000000000). You can be willing to wager based on your personal probability, while simultaneously knowing that those probabilities are not objectively true and cannot be used to calculate your EV (which is why using them in the "if I choose refuse, 1m is 99% likely" type of analysis is invalid). And there's nothing wrong with any of this.
08-29-2009 , 04:21 PM
Quote:
Originally Posted by PairTheBoard
It looks like you let your emotions get the best of you with your analysis in post #235. You went from an assumption of 99.9% accuracy (per the 999/1000 track record) to an assumption of 1-epsilon accuracy to an assumption of "essentially perfect" accuracy to the assumption that "the predictor is always correct". Your previous argument in favor of Refusal depends on an unjustified sliding up to predictor infallibility. You can't get away with that in the 70% case, and now you admit the 99.9% case is no different in principle than the 70% case. So your comments and logic below, justifying Refusal, really don't apply to Newcomb's Problem as presented in the OP. Do they?

Arguing for Refusal as you did before depends on assuming infallibility of the Predictor.
In post #235, when I talk about an "essentially perfect" predictor, I'm working with the strategy applied IN THE LIMIT as e tends to 0. This follows from Post #20 where durka talks about building in an "arbitrary amount of error."

When I talked about the 70% or 99.9% not mattering, those are FIXED probabilities.
08-29-2009 , 04:26 PM
Quote:
Originally Posted by PairTheBoard
I don't think you need to go through all that. Just flip a fair coin. It is either heads with 100% probability or tails with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50-50. The same with a shuffled deck of cards. The top card is either red with 100% probability or black with 100% probability. It is what it is. It's from our perspective of not knowing that we say the probability is 50% for red and 50% for black.

What do we mean when we assume the probability is 50% in these cases? From the frequentist interpretation we mean that if we exactly repeat the experiment a large number of times we will observe Heads with a frequency that approaches 50%.
Actually, if the experiment repeats exactly, then it will always produce the same result. Either 100% heads, or 100% tails.
08-29-2009 , 04:53 PM
Quote:
Originally Posted by TomCowley
In this case, the frequentist interpretation will show that P(1m/take) = P(1m/refuse) = 1 or that P(1m/take) = P(1m/refuse) = 0, just like it showed HHHHHHHH or TTTTTTTT before.
FMP, too late to edit.
08-29-2009 , 05:45 PM
Quote:
Originally Posted by PairTheBoard
You should provide some indication of who you're responding to. A minimal quote is nice.

If you're talking to me: After the coin is flipped, remaining covered, it is either heads or tails. It is what it is. After the decay of the atom has taken place, but before we are informed of the timing of the decay, the timing was what it was. However, from the perspective of not knowing, we would still model its probability as if it hadn't happened yet.


PairTheBoard
You can probably guess to whom I was responding. (Hint, not you).

However, it would be wonderful if the 3 of you could succinctly tie this into the OP to show the relevance of the discussion.
08-29-2009 , 09:21 PM
Quote:
Originally Posted by Aaron W.
In post #235, when I talk about an "essentially perfect" predictor, I'm working with the strategy applied IN THE LIMIT as e tends to 0. This follows from Post #20 where durka talks about building in an "arbitrary amount of error."

When I talked about the 70% or 99.9% not mattering, those are FIXED probabilities.
The trouble is, "IN THE LIMIT" doesn't make much sense here. For any epsilon, no matter how small, with an assumed accuracy of 1-epsilon the statement "You should Refuse" is False, according to your logic. Letting epsilon get smaller and smaller does not change the truth value of that statement. It remains False for all positive epsilon, no matter how "arbitrarily small" you let epsilon be. There is a logical discontinuity in the limit. At assumed accuracy of 1 (the limit as epsilon goes to zero) the statement becomes True. That doesn't change the fact that for all arbitrarily small epsilon the statement remains False - at least according to your logic.
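(The discontinuity described here can be stated mechanically. `should_refuse` below encodes the logic PTB attributes to Aaron, purely for illustration; it is not anyone's endorsed decision rule.)

```python
def should_refuse(accuracy):
    """Per the logic under discussion: Refuse only at accuracy exactly 1."""
    return accuracy == 1.0

# False for every accuracy 1 - epsilon with epsilon > 0, however small...
for eps in (0.1, 1e-6, 1e-12):
    assert should_refuse(1 - eps) is False
# ...but True at the limit point itself: the truth value jumps at epsilon = 0.
assert should_refuse(1.0) is True
```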

durkadurka has made it abundantly clear throughout this thread that under no conditions is he talking about an assumed accuracy of 100%. 1-e with e "arbitrarily small" is not the same as 1. You cannot wave your hands, issue the phrase "IN THE LIMIT", and treat it as such.

You should just admit your previous comments were in error.

PairTheBoard
08-29-2009 , 09:31 PM
Quote:
Originally Posted by jason1990
Actually, if the experiment repeats exactly, then it will always produce the same result. Either 100% heads, or 100% tails.
Right. I rushed that description. I'm still not sure how it should be described. I'll take another stab quoting from Wiki - On Frequency Probability.

Given a random experiment (whatever that means), for a frequentist "The relative frequency of occurrence of an event, in a number of repetitions of the experiment, is a measure of the probability of that event."


PairTheBoard
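As a quick illustration of that frequentist definition (a hypothetical sketch; the function name, bias, and trial count are made up for illustration, not from the thread), the relative frequency of an event over many repetitions approximates its probability:

```python
import random

def relative_frequency(p, n, seed=0):
    """Relative frequency of heads over n independent flips of a coin with bias p."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n))
    return heads / n

# Over many repetitions the relative frequency settles near the probability.
freq = relative_frequency(0.5, 100_000)
```

With 100,000 flips the relative frequency lands within a fraction of a percent of the true bias, which is the sense in which the frequentist "measures" a probability.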
08-29-2009 , 09:36 PM
Quote:
Originally Posted by TomCowley
This was the reason I picked an example with a couple of layers. In this case, even though the bookie, getting 1.05:1, is willing to bet ONE TRIAL after the coin is picked as though it's 50%, he knows, with absolute certainty, that the frequentist position of flipping that coin a large number of times will not converge to 50%. It can only be HHHHHH or TTTTTT. So when you say the probability is 50%, and wager on it, you simultaneously know that this statement cannot be true in the frequentist sense, and doesn't represent any kind of objective reality. And there's nothing wrong with any of this.

Before the coin is selected, the 50% statement is sound under the frequentist position and is objectively true (assuming the stipulated ability to choose a HH/TT coin with equal likelihood). Once the coin is selected, the bookie's personal probability doesn't change because he doesn't learn anything: he would still bet the same way with somebody who has identical information. But his statement is no longer objectively true, and he knows it. He would obviously not bet with somebody who could know what the coin is. He knows that his information is now insufficient to state what objective reality is.

In Newcomb's, assuming all the handwavy BS that durka thought was explicit in the problem, you have the same scenario. Before anything, you would bet as though the predictor is 99% (this may or may not be objectively true). Once he makes his choice, before you know what you're going to do (you really are "deliberating"), your personal probabilities are that P(1m/refuse) = 99*P(1m/take), and that P(0/take) = 99*P(0/refuse), and you would still wager accordingly with somebody with the same info. However, this statement is no longer objectively true, and it fails the frequentist definition. In this case, the frequentist interpretation will show that P(1m/take) = P(1m/refuse) = 1 or that P(1m/take) = P(1m/refuse) = 0, just like it showed HHHHHHHH or TTTTTTTT before.

If you accept the stipulation that the money was placed or not placed a week ago for your trial (this really is explicitly stipulated.. he could have been cheating before, but he's said not to be cheating in your trial), then for either one of these possibilities, your payout is better if you take. You cannot objectively state the probability of either possibility, or your actual EV, but you can objectively state that for either P(1m)=0 or P(1m)=1, taking the 1000 is preferable.

Once you make your decision, you would still be willing to wager based on 99% accuracy, with somebody with the same information, but this statement is also not objectively true, and fails the frequentist definition (in this case, you just look in the bank account over and over and get 1m1m1m1m1m1m1m or 000000000). You can be willing to wager based on your personal probability, while simultaneously knowing that those probabilities are not objectively true and cannot be used to calculate your EV (which is why using them in the "if I choose refuse, 1m is 99% likely" type of analysis is invalid). And there's nothing wrong with any of this.
This looks to me like it might be a pretty good explanation. I'm going to have to think about it a while though.

PairTheBoard
08-29-2009 , 09:53 PM
The EV bit could have been a little clearer. Your EV calculation depends on information, so you can calculate your "EV" for various non-objective prop bets, but there is no reason for that "EV" to necessarily match the EV of reality when reality has extra relevant information, or the "EV" computed from any other level of knowledge with more or less information. When you are getting ready to start the coin example, you really can say about choosing heads "50% of the time, it works every time".
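TomCowley's two-coin example can be sketched in a few lines (a hypothetical simulation; the function names and trial counts are illustrative only): before a coin is selected, the relative frequency of heads across fresh pick-then-flip experiments converges to 50%; after one coin is fixed, repeated flips of it give all heads or all tails, never 50%.

```python
import random

rng = random.Random(1)

def pick_coin():
    # A double-headed (HH) or double-tailed (TT) coin, chosen with equal likelihood.
    return "HH" if rng.random() < 0.5 else "TT"

def flip(coin):
    # Both faces of the chosen coin match, so every flip of it repeats the same face.
    return "H" if coin == "HH" else "T"

# Before the coin is selected: relative frequency of heads over many
# fresh pick-then-flip experiments converges to 50%.
before = sum(flip(pick_coin()) == "H" for _ in range(100_000)) / 100_000

# After one coin is selected: repeated flips of that same coin give
# HHHH... or TTTT..., a frequency of 0 or 1, never 50%.
coin = pick_coin()
after = sum(flip(coin) == "H" for _ in range(100_000)) / 100_000
```

The bookie's 50% is sound over the ensemble of experiments (`before`), while the frequentist frequency for the already-selected coin (`after`) is degenerate, which is exactly the HHHHHH-or-TTTTTT point made above.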
08-29-2009 , 11:56 PM
Let me repeat my argument.

The reason for the "paradox" is because there are two contradicting assumptions:

1) The choice to accept/reject the $1K has no impact on the outcome of the $1M
2) The predictor will continue to be highly accurate (or even moderately accurate)

The first assumption defines a non-causal relationship between the decision and the outcome, and the second assumption defines a causal relationship between the decision and the outcome. You cannot be working with both assumptions at the same time. You are *EXPLICITLY* told to use assumption 1. However, any numerical analysis *REQUIRES* you to make assumption 2.

I've been arguing this position for quite a while:

Post #189:

Quote:
What is the "point" of a paradox? It's usually to show how errors in logic (intuitive or formal) and reasoning lead to strange conclusions. Most paradoxes are resolved by clarifying or changing the tacit assumptions that have been put on the system.

If you impose that the predictor has such powers, you change the problem in a way that makes the problem sort of pointless: If you make it so that the predictor is essentially perfect, then you simply can't "win" by getting both the $1K and the $1M because it has been ASSUMED that you can't "win."
Post #235:

Quote:
I am still of the mindset that the paradox is understood by simply realizing that the claim that the predictor's prediction is "independent" of your choice is inconsistent.
Post #240:

Quote:
The arbitrarily small epsilon error that is introduced to allow "can win" to try to break the causal link between your decision and the money in the bank account is an illusion. There's probably a lot of technical terms that can be thrown about trying to define what it means that there's a "causal" link. A naive definition, such as "causal" meaning that "the decision has an effect on the outcome", would imply that the causal link exists.


I would call it irrational to even accept the premises of the problem!
Post #252:

Quote:
I think this brings us to the real problem with correlation/causation in the original scenario. If we INSIST that the predictor has a fixed probability of being right or wrong then we are creating a causation between the person's choice and the predictor's choice. In reality, if someone is going to do this experiment, you can *KNOW* that the predictor has a track record, but that track record is not what determines the chances of the predictor being correct.
Post #262:

Quote:
If you treat everything as if it's a truly "fair" game, then you're playing a game of incomplete information and the conclusion you reach is based on the assumptions you provide. I solved the problem by assuming that the predictor is trying to be right. If you solve the problem under the assumption that the predictor is playing to protect his money, then you reach a different conclusion.
Post #283:

Quote:
Once you make your assumption, you cause the problem to change. You cannot make the assumption that the predictor is likely to be right without CREATING causality between your decision and the result.
Post #356:

Quote:
I balk at trying to say there is a "correct" path here. When you're dumped into a hypothetical situation, there are all sorts of ways you can take it. I still don't feel that accepting that assumption about the predictor's ongoing predictive powers is actually a useful one. It creates a situation where (as far as the analysis goes) you might as well assume he's cheating. In my mind, this is what is creating the contradiction.
...

Now that I've re-established what my position is, we can see how it applies to the particular argument.

Quote:
Originally Posted by PairTheBoard
The trouble is, "IN THE LIMIT" doesn't make much sense here. For any epsilon, no matter how small, with an assumed accuracy of 1-epsilon the statement "You should Refuse" is False, according to your logic. Letting epsilon get smaller and smaller does not change the truth value of that statement. It remains False for all positive epsilon, no matter how "arbitrarily small" you let epsilon be.
Post #235 says:

Quote:
If you define "winning" the game to be getting both amounts of money, then the assumption that the predictor is always correct is equivalent to assuming you can't win. And since you can't win, you should lose in a way that nets you the most money, by rejecting the $1000. If you change your strategy so that you try to win by taking the money, you lose because it has already been assumed you can't win.
I am arguing that "you should refuse" is correct if you assume the future probability of the predictor is going to be highly accurate (assumption 2). It will be true under this assumption at some value that is a finite distance from 1. So in the limit (ie, past that tipping point) under the ASSUMPTION that the predictor will continue to be correct (ie, there exists causality), you should reject the money.

The whole point is that the conclusion you reach depends on which assumption you use. You cannot use both simultaneously because they contradict each other. This was why conversations 2 and 3 in Post #377 both made sense and make sense INDEPENDENT of the values used in the probabilities. You can see that taking the money is rejecting the future probability and rejecting the money is accepting the future probability.
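Aaron W.'s "tipping point" can be made concrete with a small EV sketch (assuming, for illustration only, that assumption 2 holds with a fixed accuracy p and using the thread's $1K/$1M payoffs; the names are made up here):

```python
PRIZE = 1_000_000   # the $1M the Predictor may have deposited
SMALL = 1_000       # the $1K on the table

def ev_refuse(p):
    # Under assumption 2, the predictor (accuracy p) foresaw a refusal
    # with probability p, in which case the $1M was deposited.
    return p * PRIZE

def ev_take(p):
    # Taking the $1K: with probability p the predictor foresaw it (no $1M);
    # with probability 1 - p he was wrong and the $1M is there anyway.
    return SMALL + (1 - p) * PRIZE

# Refusing beats taking once p * PRIZE > SMALL + (1 - p) * PRIZE,
# i.e. p > (PRIZE + SMALL) / (2 * PRIZE) = 0.5005, a "tipping point"
# a finite distance from accuracy 1.
tipping_point = (PRIZE + SMALL) / (2 * PRIZE)
```

At p = 0.7, for instance, refusing yields $700,000 against $301,000 for taking; at p = 0.5 the ordering reverses. Under assumption 1, by contrast, p drops out entirely and taking dominates, which is the contradiction being argued.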
08-30-2009 , 12:41 AM
Quote:
Originally Posted by PairTheBoard
Right. I rushed that description. I'm still not sure how it should be described. I'll take another stab quoting from Wiki - On Frequency Probability.

Given a random experiment (whatever that means), for a frequentist "The relative frequency of occurrence of an event, in a number of repetitions of the experiment, is a measure of the probability of that event."
Perhaps the problem is not with the description, but with the thing being described. The frequentists' repetitions cannot be literal repetitions, else they would all have the same outcome. They must differ somehow. In what way and by how much should two things differ before a frequentist may say they are the same? I believe the answer is both obvious and circular. They must differ in a way which is random!
08-30-2009 , 08:27 AM
Quote:
Originally Posted by jason1990
Perhaps the problem is not with the description, but with the thing being described. The frequentists' repetitions cannot be literal repetitions, else they would all have the same outcome. They must differ somehow. In what way and by how much should two things differ before a frequentist may say they are the same? I believe the answer is both obvious and circular. They must differ in a way which is random!

The mathematics of probability is easier on my head than the philosophy of probability.


PairTheBoard
08-30-2009 , 08:52 AM
Quote:
Originally Posted by Aaron W.
Let me repeat my argument.

The reason for the "paradox" is because there are two contradicting assumptions:

1) The choice to accept/reject the $1K has no impact on the outcome of the $1M
2) The predictor will continue to be highly accurate (or even moderately accurate)

The first assumption defines a non-causal relationship between the decision and the outcome, and the second assumption defines a causal relationship between the decision and the outcome. You cannot be working with both assumptions at the same time. You are *EXPLICITLY* told to use assumption 1. However, any numerical analysis *REQUIRES* you to make assumption 2.

I've been arguing this position for quite a while:
Your statements contain contradictions.

Above, you say the assumption the predictor will continue to be moderately accurate defines a causal relationship between the decision and the outcome.

In post 383 you say,
"The "assumed" probability is irrelevant."

and

"he has been given a probability without causality. The choice is independent of the outcome"

In this post you say there exists a "tipping point" nonzero positive value of epsilon such that for an assumed continued accuracy of 1-epsilon it is True that you should Refuse.

"I am arguing that "you should refuse" is correct if you assume the future probability of the predictor is going to be highly accurate (assumption 2). It will be true under this assumption at some value that is a finite distance from 1. So in the limit (ie, past that tipping point) under the ASSUMPTION that the predictor will continue to be correct (ie, there exists causality), you should reject the money."

Yet in post 383 you say,

"The conversations I laid out would hold true regardless of the values whether they're 70% or 99.9%. The difference is merely emotional."

where in the conversations you assert that the assumed probability of an accurate prediction does not imply causality and you should Accept.


I'm afraid your position is simply not logically coherent or consistent.

PairTheBoard