Mistakes in Mathematics of Poker?

12-02-2013 , 02:14 AM
So I decided to pick this up again and see if I could learn some new ideas to improve my understanding of the game and was surprised to see what looks like a pretty egregious error on page 48 under Example 4.1. Here the authors state that Player A either has the nuts (probability 0.2) or nothing (probability 0.8), and then they calculate an expectation which is supposed to involve the conditional probabilities of Player A bluffing or value betting given that he's made a bet. The problem is they use the same 0.2 figure as the conditional probability that he has the nuts given that he's made a bet when this is really the prior probability (before any betting) that he has the nuts. There's no reason why these numbers would be the same (e.g., if Player A never bluffs then the conditional probability he has the nuts is 1 and Player B's expected loss from calling is -1 and not -0.2 like the authors claim), and they end up with probabilities that don't even add up to one in general. Are there any other known mistakes in this book?
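The dependence on the bluffing frequency is easy to see with Bayes' rule. A small sketch using the 0.2/0.8 priors from the example; the assumption that A always bets the nuts and bluffs a fraction f of his "nothing" hands is mine for illustration, not the book's exact game:

```python
def p_nuts_given_bet(f, p_nuts=0.2):
    # Hypothetical strategy: A always bets the nuts and bluffs a fraction f
    # of his "nothing" hands. Bayes' rule gives P(nuts | A bet).
    p_air = 1 - p_nuts
    return p_nuts / (p_nuts + f * p_air)

print(p_nuts_given_bet(0.0))   # A never bluffs -> 1.0, not the 0.2 prior
print(p_nuts_given_bet(0.25))  # 0.2 / (0.2 + 0.25 * 0.8) = 0.5
print(p_nuts_given_bet(1.0))   # A bets everything -> conditional equals prior, 0.2
```

Only when A bets his entire range does the conditional probability collapse back to the prior.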
12-02-2013 , 06:18 AM
Quote:
Originally Posted by d_saxton
So I decided to pick this up again and see if I could learn some new ideas to improve my understanding of the game and was surprised to see what looks like a pretty egregious error on page 48 under Example 4.1. Here the authors state that Player A either has the nuts (probability 0.2) or nothing (probability 0.8), and then they calculate an expectation which is supposed to involve the conditional probabilities of Player A bluffing or value betting given that he's made a bet. The problem is they use the same 0.2 figure as the conditional probability that he has the nuts given that he's made a bet when this is really the prior probability (before any betting) that he has the nuts. There's no reason why these numbers would be the same (e.g., if Player A never bluffs then the conditional probability he has the nuts is 1 and Player B's expected loss from calling is -1 and not -0.2 like the authors claim), and they end up with probabilities that don't even add up to one in general. Are there any other known mistakes in this book?
tl;dr

But from Errata for The Mathematics of Poker, First Edition there is the following concerning the section you are referring to:
Quote:
p 48:
After the sentence

... when A has a bluff.

(near the middle of the page)

insert:

B's calling strategy only applies when A bets, so the probability values below are
conditional on A betting. (2/16/07)
Hopefully, this clears it up for you. In any case, you should find the link given above useful.
12-03-2013 , 01:02 AM
Quote:
Originally Posted by R Gibert
tl;dr

But from Errata for The Mathematics of Poker, First Edition there is the following concerning the section you are referring to:

Hopefully, this clears it up for you. In any case, you should find the link given above useful.
It's still not right, but thanks for the link.
12-03-2013 , 05:28 AM
You may want to look at this thread on MOP. Then if you perform a thread search for "page 48" you will find this. If that isn't relevant, that thread is probably a better spot to ask your question.
12-05-2013 , 08:28 PM
I've found what seems like another issue. The game is "half street" poker where Player X is dealt a specific hand (it does not change), Player Y is dealt a hand which is better 1/2 the time and worse 1/2 the time, and we start with a pot size of P. Player X checks and Player Y then has the option to check or bet one unit. If he checks, there's a showdown, and if he bets Player X can call or fold.

The idea is to find how often Player Y should bluff to make Player X indifferent to calling. They let b denote the probability that Player Y is bluffing given that he's bet, so Player X's expectation from calling when facing a bet is b * (P + 1) - (1 - b) (he wins the pot plus Player Y's bet when he's bluffing, otherwise he loses his call). Setting this to zero and solving for b gives b = 1 / (P + 2), and not 1 / (P + 1) like they have. Their solution also says that (for example) if P = 0 then b = 1, which means that Player Y should always be bluffing when he bets, which makes no sense. Am I missing something?
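The indifference algebra above can be checked mechanically (here b is the conditional bluff probability given a bet, as in the post):

```python
from fractions import Fraction

def caller_ev(b, P):
    # b = P(Y is bluffing | Y bet); win pot plus Y's bet, or lose the 1-unit call
    return b * (P + 1) - (1 - b)

def indifferent_bluff_freq(P):
    # Solve b*(P+1) - (1-b) = 0  =>  b*(P+2) = 1  =>  b = 1/(P+2)
    return Fraction(1, P + 2)

for P in (0, 1, 2, 5):
    assert caller_ev(indifferent_bluff_freq(P), P) == 0

print(indifferent_bluff_freq(0))  # 1/2, not the b = 1 that 1/(P+1) would give at P = 0
```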
12-05-2013 , 09:41 PM
You are right on both counts. In the second example b = 1/2, which is intuitively correct: if villain always folds we win 0; if villain always calls we win 1 half the time and lose 1 half the time. And so forth.

In the version I have, they wrote this:
1 = (P+2)b
They should have written
1*(1-b) = (P+2)b
12-07-2013 , 05:39 PM
Another problem. In the risk of ruin section, a risk function R is defined where R(x) is the probability of going broke starting with a bankroll of x. (We assume that the gambler's fortune is determined by the partial sums of a sequence of independent and identically distributed random variables X_1, X_2, X_3, ..., so that after n plays his fortune is x + X_1 + X_2 + ... + X_n, and this continues until he goes broke.)

On pg. 282 they state a property of the risk function which is that R(a + b) = R(a)R(b), which comes from the idea that being ruined with a bankroll of a + b is the same as being ruined with a bankroll of a, then ruined with a bankroll of b, and these events are independent. First, I think this is only approximately true, because the gambler's fortune takes discrete jumps, so the event where he is ruined with a bankroll a + b is actually not the same as being ruined with a bankroll of a, then starting over and being ruined with a bankroll of b, because he would be recovering for free whatever amount he was in the hole after being ruined the first time. So, the right-hand side is actually smaller than the left because of this free money. In order for this relation to hold, I think one needs to assume that the gambler's fortune is a *continuous* process rather than a discrete one. But in any case, I think there's another problem when he goes to apply this formula, which is that we've implicitly assumed that a, b > 0. For instance, for any a > 0 consider that 1 = R(0) = R(a - a) = R(a)R(-a) = R(a) * 1, and therefore R(a) = 1 for all a, which is obviously wrong.

Now the authors use this property a couple pages later to say that R(a) = E[R(a | X)] (conditioning on the next event and taking an expectation) = E[R(a + X)] = E[R(a)R(X)] = R(a) E[R(X)], which implies (so long as R(a) > 0) that E[R(X)] = 1. The problem is that X can take on negative values, and we've applied a property that doesn't necessarily hold in this situation. Furthermore, we've used it to deduce something that is apparently not true, which is that R(X) = 1 (R(X) is a random variable between 0 and 1, so if its expectation is 1, then it must be equal to 1 with probability 1), which would mean that for *any* distribution, if we start off with a bankroll equal to our winnings after one play, then we have to go broke. This is false, because suppose we take X_i = -1 with probability 1/3 and X_i = 1 with probability 2/3. Then the gambler's fortune is just a random walk for which it's known that there is positive probability of always being positive, so it is not true that the gambler has to be ruined.
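The random-walk counterexample above can be sanity-checked numerically. A rough Monte Carlo sketch; trial count, step cap, and seed are arbitrary choices of mine:

```python
import random

def ruin_prob(x, trials=5000, max_steps=400, seed=7):
    # Estimate P(ruin) for a walk stepping +1 w.p. 2/3 and -1 w.p. 1/3,
    # starting from bankroll x; ruin means the bankroll hits 0.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        bank = x
        for _ in range(max_steps):
            bank += 1 if rng.random() < 2 / 3 else -1
            if bank == 0:
                ruined += 1
                break
    return ruined / trials

# Classical gambler's-ruin answer: R(x) = (q/p)^x = (1/2)^x here, which is
# strictly less than 1 (ruin is not certain) and multiplicative in x.
print(ruin_prob(1))  # roughly 0.5
print(ruin_prob(2))  # roughly 0.25
```

The step cap slightly undercounts ruin, but with positive drift the walks that survive 400 steps are far from zero, so the bias is negligible.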

Anyways, this post is already too long, and I haven't read through the full section in the book, but does anybody know anything else about this? Have I messed up somewhere?

12-11-2013 , 12:27 PM
Quote:
Originally Posted by d_saxton
Another problem. In the risk of ruin section, a risk function R is defined where R(x) is the probability of going broke starting with a bankroll of x. (We assume that the gambler's fortune is determined by the partial sums of a sequence of independent and identically distributed random variables X_1, X_2, X_3, ..., so that after n plays his fortune is x + X_1 + X_2 + ... + X_n, and this continues until he goes broke.)
Quote:
Originally Posted by d_saxton
On pg. 282 they state a property of the risk function which is that R(a + b) = R(a)R(b), which comes from the idea that being ruined with a bankroll of a + b is the same as being ruined with a bankroll of a, then ruined with a bankroll of b, and these events are independent. First, I think this is only approximately true, because the gambler's fortune takes discrete jumps, so the event where he is ruined with a bankroll a + b is actually not the same as being ruined with a bankroll of a, then starting over and being ruined with a bankroll of b, because he would be recovering for free whatever amount he was in the hole after being ruined the first time. So, the right-hand side is actually smaller than the left because of this free money. In order for this relation to hold, I think one needs to assume that the gambler's fortune is a *continuous* process rather than a discrete one.
You are right that there is some funny definitional stuff surrounding what being "ruined" is when you have a fixed bet size and the distribution of outcomes can lead to you having a non-integer multiple of that fixed bet size. (Like in blackjack with 3:2 payoffs) Also there is some funny stuff (as in poker) where your distribution of outcomes changes because you are shorter-stacked, all-in for less, other people at the table have more money etc. We chose to gloss over some of those definitional things and sort of treat the situation in the way that most closely conforms to the way we treat bankrolls in practice, which is: the R(x) we are considering are for x >> 1. So in that sense, the details of going broke from a starting bankroll of 1 unit in the types of situations we are mostly interested in are unimportant and the R(1) in the formulas could be reasonably viewed as an abstracted R(1). So while it is not precisely true that R(2) = R(1)^2, it is an extremely close approximation to say that R(200) = R(100)^2, and that's what really matters when looking at risk of ruin. But technically you are right.

(there's even a short paragraph about this on p282)

Quote:
Originally Posted by d_saxton
But in any case, I think there's another problem when he goes to apply this formula, which is that we've implicitly assumed that a, b > 0. For instance, for any a > 0 consider that 1 = R(0) = R(a - a) = R(a)R(-a) = R(a) * 1, and therefore R(a) = 1 for all a, which is obviously wrong.
OK, there is some definitional looseness about this, which can be rectified easily. Let d(x) be the distribution of outcomes. If necessary, truncate the left tail of d such that it doesn't exceed the current bankroll. Define z to be the smallest value of the bankroll after one play.

Now if b is positive, the argument in the text gives us the log-linearity we need to define R as an exponential.
If b is negative, write R(a+b) = R(a+b+z-z). Then z >= 0 and a+b-z >= 0, so this is R(z)R(a+b-z). By log-linearity, R(z) = R(1)^z and R(a+b-z) = R(1)^(a+b-z), and the same argument shows that R(a+b) is an exponential with the same constant as when b was positive.

Then there's no problem when we apply the law of total probability to the known exponential R(a+b) by conditioning on one draw from the outcome distribution. R remains undefined for negative bankrolls and happy times reign.
12-11-2013 , 12:36 PM
Quote:
Originally Posted by d_saxton
I've found what seems like another issue. The game is "half street" poker where Player X is dealt a specific hand (it does not change), Player Y is dealt a hand which is better 1/2 the time and worse 1/2 the time, and we start with a pot size of P. Player X checks and Player Y then has the option to check or bet one unit. If he checks, there's a showdown, and if he bets Player X can call or fold.

The idea is to find how often Player Y should bluff to make Player X indifferent to calling. They call the probability that Player Y is bluffing given that he's bet b, and so Player X's expectation from calling when facing a bet is b * (P + 1) - (1 - b) (he wins the pot plus Player Y's bet when he's bluffing, otherwise he loses his call). Setting this to zero and solving for b gives b = 1 / (P + 2), and not 1 / (P + 1) like they have. Their solution also says that (for example) if P = 0 then b = 1, which means that Player Y should always be bluffing when he bets, which makes no sense. Am I missing something?
On p112, we write: "Likewise, Y will bluff with some fraction of his dead hands. We will call this fraction b."

So the thing that you are calling b, "the probability we are bluffing given that we bet," is b/(b+1). Then if we say b = 1/(p+1), your b would be:
(1/(p+1))/(1/(p+1)+1) = 1/(1+(p+1)) = 1/(p+2) and we all agree.
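The conversion is easy to verify symbolically, assuming hands are equally likely so that Y's bets are half value hands plus b/2 bluffs:

```python
from fractions import Fraction

def cond_bluff_given_bet(b):
    # Y bets all of his good hands (half the deals) and bluffs a fraction b
    # of his dead hands (the other half):
    # P(bluff | bet) = (b/2) / (1/2 + b/2) = b / (1 + b)
    return b / (1 + b)

for p in range(6):
    book_b = Fraction(1, p + 1)  # the book's b: fraction of dead hands bluffed
    assert cond_bluff_given_bet(book_b) == Fraction(1, p + 2)

print("bluffing 1/(p+1) of dead hands = bluffing 1/(p+2) of bets")
```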
12-12-2013 , 08:01 AM
Quote:
Originally Posted by Jerrod Ankenman
You are right that there is some funny definitional stuff surrounding what being "ruined" is when you have a fixed bet size and the distribution of outcomes can lead to you having a non-integer multiple of that fixed bet size. (Like in blackjack with 3:2 payoffs) Also there is some funny stuff (as in poker) where your distribution of outcomes changes because you are shorter-stacked, all-in for less, other people at the table have more money etc. We chose to gloss over some of those definitional things and sort of treat the situation in the way that most closely conforms to the way we treat bankrolls in practice, which is: the R(x) we are considering are for x >> 1. So in that sense, the details of going broke from a starting bankroll of 1 unit in the types of situations we are mostly interested in are unimportant and the R(1) in the formulas could be reasonably viewed as an abstracted R(1). So while it is not precisely true that R(2) = R(1)^2, it is an extremely close approximation to say that R(200) = R(100)^2, and that's what really matters when looking at risk of ruin. But technically you are right.

(there's even a short paragraph about this on p282)



OK, there is some definitional looseness about this, which can be rectified easily. Let d(x) be the distribution of outcomes. If necessary, truncate the left tail of d such that it doesn't exceed the current bankroll. Define z to be the smallest value of the bankroll after one play.

Now if b is positive, the argument in the text gives us the log-linearity we need to define R as an exponential.
If b is negative, write R(a+b) = R(a+b+z-z). Then z >= 0 and a+b-z >= 0, so this is R(z)R(a+b-z). By log-linearity, R(z) = R(1)^z and R(a+b-z) = R(1)^(a+b-z), and the same argument shows that R(a+b) is an exponential with the same constant as when b was positive.

Then there's no problem when we apply the law of total probability to the known exponential R(a+b) by conditioning on one draw from the outcome distribution. R remains undefined for negative bankrolls and happy times reign.
Thanks for taking the time to respond. I'm not sure I follow exactly what you mean by the constant "z," though. One can of course take a positive number and express it as a sum of positive numbers, but I think the issue is that we're trying to write R(a + X) = R(a)R(X). However, if X can take on negative values (which it must if the problem is nontrivial), then this doesn't seem to work.

The point is that if we look at equation 22.2, the claim we ultimately get is that E[exp(- alpha * X)] = 1, which given the definition of alpha is another way of saying E[R(X)] = 1. However, this *cannot* be true (it implies R(X) = 1 with probability 1, which would mean one must go broke if starting with a bankroll equal to one's profit after a single play), and is in fact contradicted by property 4 from a previous page. (Property 4 states that R(b) < 1 for every b whenever E(X) > 0. So if X is a random variable such that E(X) > 0, then R(X) < 1 with positive probability and E[R(X)] < 1.) The argument therefore must be breaking down somewhere, and I believe it is this problem of plugging negative numbers into the risk function.
12-12-2013 , 08:02 AM
Quote:
Originally Posted by Jerrod Ankenman
On p112, we write: "Likewise, Y will bluff with some fraction of his dead hands. We will call this fraction b."

So the thing that you are calling b, "the probability we are bluffing given that we bet," is b/(b+1). Then if we say b = 1/(p+1), your b would be:
(1/(p+1))/(1/(p+1)+1) = 1/(1+(p+1)) = 1/(p+2) and we all agree.
Well, if he's bluffing with some fraction b of his dead hands, then this *is* the conditional probability that he's bluffing given that he has a dead hand (assuming all hands are equally likely).
12-12-2013 , 11:17 AM
Quote:
Originally Posted by d_saxton
Well, if he's bluffing with some fraction b of his dead hands, then this *is* the conditional probability that he's bluffing given that he has a dead hand (assuming all hands are equally likely).
Originally, you said that b was "the probability that he is bluffing given that he bet." Now you are saying "the conditional probability that he's bluffing given that he has a dead hand." The second definition is the one we gave in the book for b, and it's 1/(p+1). The first one is a reasonable way of looking at the problem as well, and if you solve the slightly different equations for the different definition of b, then you get 1/(p+2). Once you have one of these, it's easy to find the other.
12-12-2013 , 11:36 AM
Quote:
Originally Posted by d_saxton
Thanks for taking the time to respond. I'm not sure I follow exactly what you mean by the constant "z," though. One can of course take a positive number and express it as a sum of positive numbers, but I think the issue is that we're trying to write R(a + X) = R(a)R(X). However, if X can take on negative values (which it must if the problem is nontrivial), then this doesn't seem to work.

The point is that if we look at equation 22.2, the claim we ultimately get is that E[exp(- alpha * X)] = 1, which given the definition of alpha is another way of saying E[R(X)] = 1. However, this *cannot* be true (it implies R(X) = 1 with probability 1, which would mean one must go broke if starting with a bankroll equal to one's profit after a single play), and is in fact contradicted by property 4 from a previous page. (Property 4 states that R(b) < 1 for every b whenever E(X) > 0. So if X is a random variable such that E(X) > 0, then R(X) < 1 with positive probability and E[R(X)] < 1.) The argument therefore must be breaking down somewhere, and I believe it is this problem of plugging negative numbers into the risk function.
The claim that R(a+b) = R(a)R(b) doesn't hold when b is negative, because R(b) is undefined for negative b. I mean, maybe it seems natural to extend R to be 1 on all negative values, but if you do this, then the multiplicative property fails.

However, the claim that R(a+b) = exp(c(a+b)) for some constant c depending on the distribution of outcomes does hold for negative b, as long as a+b >= 0. The reason this is true (and the point of introducing z, a typical trick) is that if a+b is nonnegative, then (a+b) can be rewritten as the sum of two non-negative numbers (z and a+b-z), for which the multiplicative property holds. Since it is easy to show that R is exponential for nonnegative numbers, it follows that R(a+b) = exp(cz)exp(c(a+b-z)) = exp(c(a+b)) is likewise exponential. The only necessary condition introduced is that a+b is nonnegative (that is, you can't lose more than your entire bankroll on a single play).
12-12-2013 , 04:08 PM
Yes, sorry, I was using b to denote the conditional probability of him having a dead hand given that he's bet.

I'm fine with defining the risk of ruin function as an exponential for positive arguments, it's just the key formula E[R(X)] = 1 that I find problematic. I may need to think a bit more about where the issue lies.
12-17-2013 , 03:13 AM
Quote:
Originally Posted by Jerrod Ankenman
The claim that R(a+b) = R(a)R(b) doesn't hold when b is negative, because R(b) is undefined for negative b. I mean, maybe it seems natural to extend R to be 1 on all negative values, but if you do this, then the multiplicative property fails.

However, the claim that R(a+b) = exp(c(a+b)) for some constant c depending on the distribution of outcomes does hold for negative b, as long as a+b >= 0. The reason this is true (and the point of introducing z, a typical trick) is that if a+b is nonnegative, then (a+b) can be rewritten as the sum of two non-negative numbers (z and a+b-z), for which the multiplicative property holds. Since it is easy to show that R is exponential for nonnegative numbers, it follows that R(a+b) = exp(cz)exp(c(a+b-z)) = exp(c(a+b)) is likewise exponential. The only necessary condition introduced is that a+b is nonnegative (that is, you can't lose more than your entire bankroll on a single play).
I think the confusion for me was with the equation E[exp(- alpha * X)] = 1, where alpha = - log R(1), since although exp(- alpha * t) = R(t) (more or less) for t >= 0, this is not true when t < 0. (It's equal to R(1)^t, which cannot be written as R(t), since it makes no sense to think of the event of independently losing t bankrolls with unit size.) So the equation is not actually claiming that E[R(X)] = 1, which is clearly not correct, but only that E[exp(- alpha * X)] = 1. This is not a problem since exp(- alpha * X) > 1 when X < 0, unlike R(X).
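To make the distinction concrete, here's a numeric check on the +1/-1 walk mentioned earlier in the thread, where R(1) = 1/2 and so alpha = ln 2:

```python
import math

# X = +1 w.p. 2/3, X = -1 w.p. 1/3 (the random walk from the earlier post).
# Its classical ruin probability is R(x) = (1/2)^x, so alpha = -ln R(1) = ln 2.
alpha = -math.log(1 / 2)

# Equation 22.2's identity E[exp(-alpha * X)] = 1 holds:
E_exp = (2 / 3) * math.exp(-alpha * 1) + (1 / 3) * math.exp(-alpha * -1)
print(E_exp)  # essentially 1.0, up to floating-point rounding

# Extending R by R(t) = 1 for t < 0 breaks the identity:
E_R = (2 / 3) * (1 / 2) + (1 / 3) * 1.0
print(E_R)    # 2/3, not 1 -- so E[R(X)] = 1 is NOT what the equation claims
```

The difference is exactly that exp(-alpha * X) exceeds 1 when X < 0, while any extension of R is capped at 1.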

On a completely different subject, I've been studying the von Neumann (0, 1) poker model (described in this paper as well as Mathematics of Poker) and have noticed something strange. If we have both players ante one unit so that P = 2 and have the bet size equal to one unit, then Player A is calling with a range that includes hands which can only beat a bluff, but Player B is only bluffing 1/5 of the time given that he's bet. (The thresholds are Player B bluffs approximately with the bottom 6% of hands and value bets the top 24% of hands. Player A calls with approximately the top 56% of his range.) According to standard poker logic, Player A shouldn't be calling with any hand that can only beat a bluff since he's only getting 2 to 1 odds. Any ideas as to what is going on here?
12-17-2013 , 05:08 AM
Correction: 2 to 1 odds should be 3 to 1 odds in the above post.
12-17-2013 , 03:21 PM
Quote:
Originally Posted by d_saxton
I think the confusion for me was with the equation E[exp(- alpha * X)] = 1, where alpha = - log R(1), since although exp(- alpha * t) = R(t) (more or less) for t >= 0, this is not true when t < 0. (It's equal to R(1)^t, which cannot be written as R(t), since it makes no sense to think of the event of independently losing t bankrolls with unit size.) So the equation is not actually claiming that E[R(X)] = 1, which is clearly not correct, but only that E[exp(- alpha * X)] = 1. This is not a problem since exp(- alpha * X) > 1 when X < 0, unlike R(X).

On a completely different subject, I've been studying the von Neumann (0, 1) poker model (described in this paper as well as Mathematics of Poker) and have noticed something strange. If we have both players ante one unit so that P = 2 and have the bet size equal to one unit, then Player A is calling with a range that includes hands which can only beat a bluff, but Player B is only bluffing 1/5 of the time given that he's bet. (The thresholds are Player B bluffs approximately with the bottom 6% of hands and value bets the top 24% of hands. Player A calls with approximately the top 56% of his range.) According to standard poker logic, Player A shouldn't be calling with any hand that can only beat a bluff since he's only getting 2 to 1 odds. Any ideas as to what is going on here?
I don't think those are the right thresholds for that game. Using the formulas on the first page of Tom's paper, we have B=1, so a = 1/10 and b = 7/10. Then the caller is getting exactly three to one on his calls, and everything is hunky dory.
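A quick check of those thresholds; reading a as the bluffing threshold and b as the value-betting threshold on uniform [0, 1] hands is my interpretation of the post, not necessarily the paper's exact notation:

```python
from fractions import Fraction

# Thresholds quoted above for P = 2, B = 1, hands uniform on [0, 1]:
# the bettor bluffs hands below a and value bets hands above b.
a, b = Fraction(1, 10), Fraction(7, 10)
P, B = 2, 1

bluff_mass = a          # measure of the bluffing region [0, a]
value_mass = 1 - b      # measure of the value region [b, 1]
p_bluff_given_bet = bluff_mass / (bluff_mass + value_mass)

# The caller risks B to win P + B (three to one here), so indifference
# requires a bluff frequency of B / (P + 2B).
print(p_bluff_given_bet)       # 1/4
print(Fraction(B, P + 2 * B))  # 1/4 -- exactly three to one, as stated
```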
12-17-2013 , 05:36 PM
Quote:
Originally Posted by Jerrod Ankenman
I don't think those are the right thresholds for that game. Using the formulas on the first page of Tom's paper, we have B=1, so a = 1/10 and b = 7/10. Then the caller is getting exactly three to one on his calls, and everything is hunky dory.
Yes, that's right. I had been simulating it in R and forgot that I had changed the value of B the last time I'd run it.
12-23-2017 , 04:02 PM
Hi, on page 53 it says the formula for calculating our EV of calling the flop with a drawing hand that hits 8/45 of the time, facing a $30 bet into a $135 pot, is:

p(Bf)($135 + 2($30)) - $30

= $4.67

I think that's a mistake, because I think it should be 8/45($135 - 2($30)) - $30, and the answer is $29.333.

Am I right?
12-23-2017 , 06:08 PM
Quote:
Originally Posted by Kingkong352
Hi, on page 53 it says the formula for calculating our EV of calling the flop with a drawing hand that hits 8/45 of the time, facing a $30 bet into a $135 pot, is:

p(Bf)($135 + 2($30)) - $30

= $4.67

I think that's a mistake, because I think it should be 8/45($135 - 2($30)) - $30, and the answer is $29.333.

Am I right?
Nope. Both your equation and the arithmetic are wrong.
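For anyone following along, both versions are easy to run through; this small check takes p(Bf) = 8/45 (my reading of the quoted formula):

```python
from fractions import Fraction

p = Fraction(8, 45)                  # probability the draw comes in
pot, bet = 135, 30

book_ev = p * (pot + 2 * bet) - bet  # the book's formula as quoted above
proposed = p * (pot - 2 * bet) - bet # the variant proposed in the question

print(float(book_ev))   # about 4.67, matching the book
print(float(proposed))  # about -16.67, not 29.333
```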
12-23-2017 , 08:39 PM
OK, would you be kind enough to explain the equation and the arithmetic, please?
12-23-2017 , 09:06 PM
Ok I found it. Thanks