**Official MOP Study Group Thread** Week I

01-18-2008 , 05:51 AM
Hello Jerrod,
I really like your book, and it's nice that you've joined the discussion.
01-18-2008 , 09:51 AM
Quote:
Originally Posted by Jerrod Ankenman

EV_16900 = 16900 * 0.0115 = 194.35 bets
VAR_16900 = 16900 * (2.1)^2 = 74529 BB^2
SD_16900 = sqrt(VAR_16900) = 273

Now we can leave this here, so a 95% confidence interval is:
194.35 bb +/- 546 bb in 16900 hands.

Maybe you can look at that and intuitively grasp what it means, but I'd prefer to divide it by 169 to get a better handle on it (relative to what I know about win rates):

1.15 bb/100 +/- 3.23 bb/100 (PER 100 hands IN 16900 hands)
Just my two cents on what it means:

I think it means that the larger the sample, the smaller its relative dispersion around the empirical mean x. The standard deviation of the empirical mean is inversely proportional to the square root of n. If you want to cut the standard deviation of the empirical mean in half, that is, to double the accuracy of the estimate, you need a four-fold sample size; if you want tenfold accuracy, you need a hundredfold sample size. It is also true that sample values disperse less around their arithmetic mean x than around the true mean. This is the reason why variance estimators, such as the ones in Excel, use the factor n/(n-1).
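To make the square-root relationship concrete, here is a minimal sketch (my own, not from the book) that reuses the 1.15 bb/100 win rate and 2.1 BB/hand standard deviation from the quoted example, and uses z = 2 for the 95% interval as Jerrod did:

```python
import math

def ci_per_100(winrate_bb100, sd_bb_per_hand, hands, z=2.0):
    """Approximate confidence interval for the win rate, expressed per 100 hands."""
    # The SD of the total result over `hands` hands grows like sqrt(hands)...
    sd_total = sd_bb_per_hand * math.sqrt(hands)
    # ...so the interval's half-width per 100 hands shrinks like 1/sqrt(hands).
    half_width = z * sd_total / (hands / 100)
    return winrate_bb100 - half_width, winrate_bb100 + half_width

for n in (16_900, 4 * 16_900, 100 * 16_900):
    lo, hi = ci_per_100(1.15, 2.1, n)
    print(f"{n:>9} hands: {lo:+.2f} to {hi:+.2f} bb/100")
```

Quadrupling the sample halves the interval; a hundredfold sample cuts it to a tenth.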
01-18-2008 , 12:10 PM
In connection with the example on page 29f (AK vs. AQ), I would be interested to learn a little more about using statistical significance tests to quantify such problems. Since this is also an important matter that has kept us busy lately, I think we should not ignore it in this study group.

Is there a thread where such problems have already been discussed in a short and simple form?
01-19-2008 , 09:46 AM
What is happening? Is everybody ready to skip the important questions and jump to Part II?

I have now read the first 44 pages and think I have understood everything, except that I do not know how the columns p(A|B) and p(A|~B) on page 41 were calculated. Can anybody explain this and demonstrate an example for one row? It looks like a very complicated calculation.
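Edit: for the record, here is a minimal sketch of how I imagine one such column could be computed, assuming a discrete prior over win-rate classes and a normal likelihood for the observed sample. The class values and prior weights below are placeholders, not the book's numbers, and I am not sure this matches the book's exact method.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Hypothetical win-rate classes (bb/100) and prior weights -- placeholders only.
classes = [-3.0, -1.0, 1.0, 3.0]
prior   = [0.25, 0.35, 0.30, 0.10]

# Observed: 1.15 bb/100 over 16,900 hands with a 2.1 bb/hand standard deviation,
# so the standard deviation of the observed rate is 273/169 = about 1.62 bb/100.
observed = 1.15
se = 2.1 * sqrt(16_900) / 169

# p(B|A): how likely this sample result would be if the true rate were each class.
likelihood = [normal_pdf(observed, mu, se) for mu in classes]

# Bayes' theorem (3.1): p(A|B) = p(B|A) * p(A) / p(B), with p(B) normalizing the column.
joint = [like * p for like, p in zip(likelihood, prior)]
p_b = sum(joint)
posterior = [j / p_b for j in joint]

for mu, post in zip(classes, posterior):
    print(f"true rate {mu:+.1f} bb/100: p(A|B) = {post:.3f}")
```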

Imho the first 44 pages on the math basics could easily be 144 pages or more. There is a lot of room for another book, for additional considerations, and certainly for discussion too. To be fair, I have to say that the book follows another goal, which is to find concepts that improve the poker player's win rate, and I hope it keeps this promise over the next 338 pages.

Another aspect: Bayes' theorem. There are two examples explained up to page 44. Imho, for a solid understanding of Bayes' theorem it is necessary to read additional material, at least the articles on Wikipedia.

The second example, which shows how to do some Bayesian statistics, is useful because it demonstrates techniques for applying the concept. But the practical gain for one's poker win rate is imho more or less negligible.

I doubt that the a priori population distribution has a mean of -1 BB/100, and I doubt that the underlying a priori distribution is normal. This question also leaves room for further empirical investigation.

I will now start reading bozzer's "first class" article on Bayes' theorem, try to get a better understanding of how to use this concept, and then I am ready for Part II.
01-19-2008 , 10:51 AM
Quote:
Originally Posted by PureWinner
Just my two cents on what it means:

I think it means that the larger the sample, the smaller its relative dispersion around the empirical mean x. The standard deviation of the empirical mean is inversely proportional to the square root of n. If you want to cut the standard deviation of the empirical mean in half, that is, to double the accuracy of the estimate, you need a four-fold sample size; if you want tenfold accuracy, you need a hundredfold sample size. It is also true that sample values disperse less around their arithmetic mean x than around the true mean. This is the reason why variance estimators, such as the ones in Excel, use the factor n/(n-1).
Variance is linearly proportional to sample size.
Standard deviation is the square root of the variance.
Therefore the s.d. is proportional to the square root of the sample size.

A sample is used to estimate the variance. This sample estimate is biased.
n/(n-1) corrects the bias.
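A small simulation makes that bias visible (a sketch of my own, not from the book): draw many samples of size n = 5 from a distribution with variance 1 and average the two estimators.

```python
import random

random.seed(1)
TRUE_VAR, N, TRIALS = 1.0, 5, 200_000

biased_sum = unbiased_sum = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    xbar = sum(sample) / N
    ss = sum((x - xbar) ** 2 for x in sample)
    biased_sum += ss / N          # divide by n: systematically too small
    unbiased_sum += ss / (N - 1)  # divide by n-1: corrects the bias

print("true variance:      ", TRUE_VAR)
print("average of ss/n:    ", round(biased_sum / TRIALS, 3))    # about 0.8 = (n-1)/n
print("average of ss/(n-1):", round(unbiased_sum / TRIALS, 3))  # about 1.0
```

Dividing by n understates the true variance by a factor of (n-1)/n on average; dividing by n-1 removes that bias.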
01-19-2008 , 11:43 AM
Yes, this is important knowledge. Let's try to make things a little easier:

Summary of Probability Calculations:

Are A and B mutually exclusive?

If yes:
1. p(A and B) = 0 because (A and B) is impossible!
2. p(A or B) = p(A) + p(B)


If no:

1. p(A and B) > 0
2. p(A or B) = p(A) + p(B) - p(A and B)

3. Are A and B independent?

3a) If yes, then independent:
(1) p(A|B) = p(A)
(2) p(A and B) = p(A)*p(B)

3b) If no, then dependent (= conditional):
(1) p(A|B) = p(A and B) / p(B)
(2) p(A and B) = p(A)*p(B|A) = p(B)*p(A|B)


Bayes' Theorem
p(A|B) = p(B|A) * p(A) / p(B) (3.1) (can be restated as equation 3.2)

where:
p(A) is the a priori probability of event A (it does not take into account any info about B).
p(B) is the a priori probability of event B (it acts as a normalizing constant).
p(B|A) is the conditional probability of event B given A.
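The same rules can be checked on a small example in code (a card-drawing example of my own, not from the book):

```python
from fractions import Fraction as F

# One card drawn from a standard 52-card deck.
p_ace   = F(4, 52)
p_king  = F(4, 52)
p_spade = F(13, 52)
p_ace_and_spade = F(1, 52)   # the ace of spades

# Mutually exclusive: one card cannot be both an ace and a king.
p_ace_or_king = p_ace + p_king                      # 2/13

# Not mutually exclusive: subtract the overlap.
p_ace_or_spade = p_ace + p_spade - p_ace_and_spade  # 4/13

# Independent: p(A and B) = p(A) * p(B).
assert p_ace_and_spade == p_ace * p_spade

# Conditional probability and Bayes' theorem (3.1).
p_ace_given_spade = p_ace_and_spade / p_spade       # 1/13
p_spade_given_ace = p_ace_and_spade / p_ace         # 1/4
assert p_ace_given_spade == p_spade_given_ace * p_ace / p_spade

print(p_ace_or_king, p_ace_or_spade, p_ace_given_spade)
```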

Last edited by PureWinner; 01-19-2008 at 11:49 AM.
01-19-2008 , 12:08 PM
Bayes' theorem: posterior = likelihood * prior / normalizing constant

In words: the posterior probability is proportional to the product of the prior probability and the likelihood

In addition, the ratio P(B|A)/P(B) is sometimes called the standardized likelihood or normalized likelihood, so the theorem may also be paraphrased as

posterior = normalized likelihood * prior
01-20-2008 , 01:17 AM
Well, I think this first week of the study group has gone fairly well. I just finished the section myself; it was a busy week for me. I found most of it pretty straightforward. I tried not to get too hung up on the math where it caused me difficulty, although I think I understood pretty much everything anyway. I can't help but wonder if there isn't an easier way to express many of the formulas and terms; the math nomenclature seems to make it more confusing. What I would find more helpful would be more examples. I take in concepts a lot better if I see a few examples first-hand and can work through the math myself, so although there were just enough examples, you can never have too many!

There were a couple of things I found confusing right at the end. On page 42, applying Bayes' theorem to the 5 BB/100 win rate produces results I found a little counter-intuitive. The second table, recalculated with a larger sample of 100,000 hands, has of course shifted the expected win rate much more towards the winning side - a 65% chance of +1 BB/100 or better. However, if the math is right, I would think a fundamental flaw of the method is showing when the chance of +3 BB/100 has dropped to 0.04%. If someone has a 5 BB/100 win rate over 100,000 hands, how likely is it that their chance of having an actual win rate of +3 BB/100 or better is only 0.04%? With a smaller sample, it was 11.03%?? Is there something I'm missing, or is this a flaw in the system?

The last thing is the difference between "classical" and "Bayesian" statisticians. Are they basically saying that the classical approach is to just use the distributions as they are, whereas the Bayesian approach is to take those same distributions, and then apply Bayes' theorem to those numbers?

I'll discuss this with BryanC, but I'm thinking we will start the next section thread tomorrow. Part II is 6 chapters and 51 pages, so my first thought is to separate into two weeks. This thread will remain open, and hopefully discussion will continue. I'm sure not everyone is at the same place, and I would encourage people who haven't started yet to go ahead and join in, it doesn't matter how "far behind" you are.
01-20-2008 , 03:17 AM
Quote:
Originally Posted by Bobo Fett
There were a couple of things I found confusing right at the end. On page 42, applying Bayes' theorem to the 5 BB/100 win rate produces results I found a little counter-intuitive. The second table, recalculated with a larger sample of 100,000 hands, has of course shifted the expected win rate much more towards the winning side - a 65% chance of +1 BB/100 or better. However, if the math is right, I would think a fundamental flaw of the method is showing when the chance of +3 BB/100 has dropped to 0.04%. If someone has a 5 BB/100 win rate over 100,000 hands, how likely is it that their chance of having an actual win rate of +3 BB/100 or better is only 0.04%? With a smaller sample, it was 11.03%?? Is there something I'm missing, or is this a flaw in the system?
Oh, this is confusing. The second table on page 42 is a recalculation of the case from page 41, where the guy has a 1.15 win rate, but with a bigger sample size. It's not a recalculation of the top table on page 42 where he has a 5 BB win rate. This isn't clear in the text. I'll put it on my list of things to update in future printings.

Quote:
Originally Posted by Bobo Fett
The last thing is the difference between "classical" and "Bayesian" statisticians. Are they basically saying that the classical approach is to just use the distributions as they are, whereas the Bayesian approach is to take those same distributions, and then apply Bayes' theorem to those numbers?
The frequentist vs Bayesian debate has a long history, which you can read about online if you are interested.

Basically, the disagreement has to do with what kinds of things have probabilities associated with them. If I say "the probability that my win rate is between 1.0 and 1.5 BB/100 when I play headsup against xyz bot," that is a meaningless statement to a frequentist, because things like that don't have probability - either my rate is between those two numbers or it isn't. Bayesians, on the other hand, are willing to assign probabilities to almost any statement, with the idea that probabilities reflect degrees of belief.

http://en.wikipedia.org/wiki/Probabi...nterpretations has a summary.

If you are a poker player, you should be a Bayesian and just ignore that frequentist stuff.
01-20-2008 , 04:52 AM
Quote:
Originally Posted by Jerrod Ankenman
Oh, this is confusing. The second table on page 42 is a recalculation of the case from page 41, where the guy has a 1.15 win rate, but with a bigger sample size. It's not a recalculation of the top table on page 42 where he has a 5 BB win rate. This isn't clear in the text. I'll put it on my list of things to update in future printings.
Definitely confusing...all I see is "recalculate the above", which I of course took to mean the example immediately above. That being said, given your explanation, I feel a little silly I didn't think of that.

Quote:
Originally Posted by Jerrod Ankenman
The frequentist vs Bayesian debate has a long history, which you can read about online if you are interested.

Basically, the disagreement has to do with what kinds of things have probabilities associated with them. If I say "the probability that my win rate is between 1.0 and 1.5 BB/100 when I play headsup against xyz bot," that is a meaningless statement to a frequentist, because things like that don't have probability - either my rate is between those two numbers or it isn't. Bayesians, on the other hand, are willing to assign probabilities to almost any statement, with the idea that probabilities reflect degrees of belief.

http://en.wikipedia.org/wiki/Probabi...nterpretations has a summary.
Good, that's more or less what I took out of it.

Quote:
Originally Posted by Jerrod Ankenman
If you are a poker player, you should be a Bayesian and just ignore that frequentist stuff.
Agreed.
01-20-2008 , 01:15 PM
Hello Jerrod,

please understand that I have difficulty describing certain things in English, because English is not my mother tongue. I will try anyway in this somewhat difficult case, even though this might mean my words do not sound very convincing. I accept that risk and try it anyhow.

Let me put it this way. Math can be brutally intimidating. And as Jesus said in the introductory words of your book: "If you think the math isn't important, you don't know the right math."

From my point of view, pages 13 to 44 of your book can only be understood properly if one reads the introduction carefully. It states why your book is different and what your goals are. I accept this, and I am curious what I can learn from page 47 to the end of your book. I have not read that far yet and at this stage cannot allow myself any comment on it. However, I have already realized that the rest of your book will give us plenty to chew on.

I have only read the first 44 pages, and I will give you a picture of my impression of the basics as explained in your book. Your explanations are not bad in a certain sense, because they give an overview of basic concepts in the mathematics of poker (albeit, in my view, sometimes too cursory). If somebody wants to skip them, that's no problem, because he can become a good poker player without this knowledge and perhaps still come to understand certain things later in your book. If somebody wants a solid understanding of the basics of poker math, however, then I must recommend another book.

What I miss in your book, especially in the first 44 pages, is the following:

1) A solid introduction to combinatorics and the calculation of poker probabilities
2) A solid discussion of distributions other than the Gaussian, as far as they are relevant for our poker play (especially the binomial and Poisson distributions)
3) A solid introduction to descriptive statistics as far as relevant for statistical conclusions, including more practical considerations like "How to determine your hourly standard deviation".
4) A more thorough introduction to classical statistics as far as relevant for our poker play.
5) In connection with your example on page 29f (AK vs. AQ), a solid introduction to statistical significance tests like the t-test and other possible tests.

I recommend that you write a new book about these basics of poker math. I think it would be easy to convince your publisher of the necessity of doing this. If you will not undertake this job, perhaps somebody else should. I give here an overview of the flaws in the first 44 pages of your book.

Page 34/35: The correct 95% confidence interval for this player's win rate in your example (based on the 16,900 hand sample) is

0.0115 BB/h - 1.96 * (273 / sqrt(16,900)); 0.0115 BB/h + 1.96 * (273 / sqrt(16,900))
0.0115 BB/h - 1.96 * 2.1; 0.0115 + 1.96 * 2.1
0.0115 BB/h - 4.116; 0.0115 + 4.116
-4.1045 BB/h; +4.1275 BB/h

And now you can take a magnifying glass if you like.

The proper recommendation in this example is not to redefine the math or to use Bayes' theorem instead. The proper recommendation is the following:

1) The player should try to find a game or a style of play where he can increase his win rate, and/or
2) The player should try to find a game or a style of play where he can reduce his variance or standard deviation,
3) or read your book completely and try to find further solutions.

The problem is in fact to find an optimal solution here.

If you want to make the same consideration for 100 hands, then in your example you have nothing other than 169 samples of 100 hands each. This leads to a completely different distribution, and you have to do the whole math again.

If you want to apply another significance level, you can use the following multipliers for the different confidence probabilities:

68.27% = 1
90.00% = 1.645
95.00% = 1.96
95.45% = 2
99.00% = 2.576
99.73% = 3
99.99% = 3.9
99.995% = 4.05
approx. 100% = 4.5

If you want to use other confidence probabilities, you can calculate the corresponding normalized z-score in the following way (I show here the example for 95%):

In a statistical table for the normalized (standard normal) distribution, look up the z value whose cumulative probability is (1 + 0.95) / 2 = 0.9750; this gives z = 1.960, also written as (the Greek letter) lambda_95% = 1.960.
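In code, the same lookup can be done with the inverse CDF of the standard normal distribution (a minimal sketch; the function name is mine):

```python
from statistics import NormalDist  # Python 3.8+

def z_multiplier(confidence):
    """Two-sided z-score for a confidence probability, e.g. 0.95 -> 1.960."""
    return NormalDist().inv_cdf((1 + confidence) / 2)

for conf in (0.6827, 0.90, 0.95, 0.9545, 0.99, 0.9973):
    print(f"{conf:.2%} -> {z_multiplier(conf):.3f}")
```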

Pages 35-43: Bayes' theorem is nothing other than a specific theorem in probability that relates the conditional and marginal probability distributions of random variables. In some interpretations it tells us how to update or revise beliefs a posteriori, in light of new evidence. I do not doubt that this theorem is important in poker math, but I think before somebody applies it he should understand other things first. And I deny, without being a frequentist, that it is the most important theorem in probability math in general or in poker math in particular.

Having said this, I think you will give us important examples later in your book where this theorem is useful. But I think each poker player should have a solid understanding of combinatorics too; otherwise he is like a one-eyed man in the land of the blind. Your example on pages 40 to 44 is nice because it explains what one can do with Bayesian statistics.

However, in this particular case your assertions are imho of little value and somewhat ambiguous. The whole example is built on wrong parameters, and therefore your conclusion must also be wrong. Anyhow, even in this case I think a player should first get a solid understanding of the basics of classical statistics before he tries to understand or apply your example in practice. It is also important to say that, with the aid of descriptive statistics, a player can find a much easier and more intuitive way to split his poker results into win rates, as you did on pages 41f. of your book.

I will not discourage other players from reading your book. I think it is a valuable book for advanced and experienced poker players, but not for beginners trying to understand the basics of poker math. I think, however, you already said this in similar words on page 6 of your book.

McSeafield
01-21-2008 , 02:06 AM
Quote:
Originally Posted by McSeafield
Page 34/35: The correct 95% confidence interval for this player's win rate in your example (based on the 16,900 hand sample) is

0.0115 BB/h - 1.96 * (273 / sqrt(16,900)); 0.0115 BB/h + 1.96 * (273 / sqrt(16,900))
0.0115 BB/h - 1.96 * 2.1; 0.0115 + 1.96 * 2.1
0.0115 BB/h - 4.116; 0.0115 + 4.116
-4.1045 BB/h; +4.1275 BB/h
Let me get this straight. Your contention is that if a player plays 16,900 hands and wins 194 bets with a standard deviation of 2.1 bb/h, a 95% confidence interval for his win rate is EIGHT BIG BETS PER HAND wide, such that his true rate could be losing 400 big bets per hundred hands and still not fail a 95% hypothesis test?

I don't really know how to respond to that.

Anyway, some of the criticisms you have about the first part of the book are somewhat valid; our goal here was not at all to write a statistics book, and so our book is certainly deficient as a primary source about statistics. I am certainly not going to write a statistics book for poker players, so anyone who feels like this is an important task should feel free to take it up right away.
01-21-2008 , 07:32 AM
For 100 hands the 95% confidence interval should be:

1.15 BB/100h - 1.96 * (2.1 * sqrt(100)); 1.15 BB/100h + 1.96 * (2.1 * sqrt(100))
-40.01 BB/100h; +42.31 BB/100h

The problem in your example is the standard deviation of 21 BB/100h (= 2.1 BB/h), which is much too high in relation to the win rate of 1.15 BB/100h.

While a -1 BB/100h win rate is probably a rough estimate of the population mean for all players, it would be interesting to have a corresponding estimate of the SD in BB/100h.
01-31-2008 , 08:15 PM
I have a question about the dependent events section.

The pitcher has a 3% chance of pitching nine innings, and the team has a 60% chance of winning the match.

So is the probability of the team winning the game, given that the pitcher accomplished that, nearly 100% or nearly 3%?
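Working through the definition p(A|B) = p(A and B) / p(B) with made-up joint numbers (not the book's), the answer depends on how often a nine-inning outing goes together with a win:

```python
# Made-up joint numbers, not the book's: suppose the pitcher goes nine innings
# in 3% of all games, and in 2.9% of all games he goes nine innings AND the team wins.
p_nine = 0.03
p_win_and_nine = 0.029

# p(A|B) = p(A and B) / p(B): condition on the 3% of games where he finishes.
p_win_given_nine = p_win_and_nine / p_nine
print(f"p(win | nine innings) = {p_win_given_nine:.1%}")   # about 96.7%
```

Conditioning restricts attention to the 3% of games where the pitcher finishes; if the team wins nearly all of those games, the conditional probability is close to 100%, not 3%.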
03-19-2009 , 07:11 AM
Okay, I am digging up an old thread here, but I am kind of interested as to how this just stopped. Also, I would really like to understand how the calculations were reached in the chart on page 41. I don't know if it is just late and I'm missing something, but I cannot seem to find a way to get the calculations needed for the chart... Can anyone help me, please?