**Official MOP Study Group Thread** Week I

01-16-2008 , 10:02 AM
Quote:
Originally Posted by ygunstah
So it is not hard to see that the intrinsic statistics of the test are not the sole consideration. The main point is that the statistical distribution for whatever problem you're considering is overlayed on top of the population distribution.
THANK YOU. That simplifies it right there.

THE HUN.
01-16-2008 , 10:13 AM
Quote:
Originally Posted by bozzer
I wrote a post about Bayes' theorem and hand reading a while back.

It explains it the way I intuitively understand it.

Awesome post and thanks for the link. Just amazing stuff.

THE HUN.
01-16-2008 , 03:07 PM
http://archives1.twoplustwo.com/show...0#Post10933141

My next question relates to your thread. Assuming that through Bayes' Theorem we have deduced that the villain will have a set 36% of the time, what's next? What do we do with that information? Do we simply say, well, it's less than fifty percent, so I'll call? How do we use the information garnered through Bayes' Theorem effectively?

THE HUN.
01-16-2008 , 03:47 PM
Quote:
To calculate the overall chance of hitting at least one 3, you could calculate the probability of each outcome and then add them up. This would be cumbersome, but not wrong.
lol that's exactly what I thought I would have to do; I was struggling to see an easier way. So thanks a lot for pointing me in the right direction, appreciate it
01-16-2008 , 04:09 PM
Quote:
Originally Posted by thehun69
http://archives1.twoplustwo.com/show...0#Post10933141

My next question relates to your thread. Assuming that through Bayes' Theorem we have deduced that the villain will have a set 36% of the time, what's next? What do we do with that information? Do we simply say, well, it's less than fifty percent, so I'll call? How do we use the information garnered through Bayes' Theorem effectively?

THE HUN.
You're getting ahead of the material.

Bayesian inference. pp38-9.

This explains how to use Bayes' Theorem.
01-16-2008 , 04:14 PM
Quote:
Originally Posted by thehun69
I was skipping ahead, so I'll save my Bayes' Theorem question about distributions for the next section.

THE HUN.
Quote:
Originally Posted by jogsxyz
You're getting ahead of the material.

Bayesian inference. pp38-9.

This explains how to use Bayes' Theorem.
Just to clarify, as there might be some confusion...the plan for this week was to cover all of Part 1 (Chapters 1-3), which covers up to page 44.
01-16-2008 , 07:00 PM
Do you know the difference between conditional and unconditional probabilities? Conditional probability is barely mentioned on p 15.
01-16-2008 , 08:46 PM
thanks for the responses, they're definitely appreciated, looking forward to following this thread
01-16-2008 , 09:15 PM
Quote:
Originally Posted by jogsxyz
Do you know the difference between conditional and unconditional probabilities? Conditional probability is barely mentioned on p 15.
Conditional probability says that event A will happen if event B happens.

Flipping a coin has two possible outcomes (B), and one of them is landing on heads (A).

Thus A|B.

Or: given the children (B), a child (A) is a girl.

Thus A|B.

There is no such thing as an unconditional probability.
01-16-2008 , 09:42 PM
Quote:
Originally Posted by daveT

There is no such thing as an unconditional probability.
Most poker books neglect to discuss it. That doesn't mean it doesn't exist.
It stands to reason that if there's a conditional probability, there must be an unconditional one.

http://www.investopedia.com/terms/u/...robability.asp

Unconditional: What's the probability of you being dealt aces?
Conditional: What's the probability of the opener having aces, given that he has raised?
01-16-2008 , 10:29 PM
Quote:
Originally Posted by thehun69
http://archives1.twoplustwo.com/show...0#Post10933141

My next question relates to your thread. Assuming that through Bayes' Theorem we have deduced that the villain will have a set 36% of the time, what's next? What do we do with that information? Do we simply say, well, it's less than fifty percent, so I'll call? How do we use the information garnered through Bayes' Theorem effectively?

THE HUN.
First of all, the 35% number is completely wrong. The base chance of flopping a set is 12%, not 11%. More importantly, he ignored card removal: you're holding a queen. His chance of flopping a set here is close to 8%. Also, if he doesn't have a set, you're still a serious underdog to AA or KK. I can't be bothered to do the math, but would estimate your chances of having the worst of it are a bit south of 30%.

How do we play it? It's an easy call, against this goofy opponent or an "optimal" opponent. You're a favorite. As written, this exercise doesn't exactly highlight the usefulness of Bayes.

But what if the stacks are large? Well, that's multistreet poker. With deep stacks, a raise is indicated, probably with some (low) mix of calls for balance. The essential answer to "how do we use the information garnered through Bayes" is: You adjust your optimal strategy to exploit the information. Against this opponent, you're a bit better than a 2:1 favorite, and if you're behind you have basically 2 outs. Against an "optimal" opponent, you're a bigger favorite and have perhaps 5 outs. With deep stacks, Bayes is telling you to pursue this situation aggressively, but not as aggressively as you would otherwise.

ygunstah
01-16-2008 , 10:34 PM
Not sure how that happened, ignore (or someone please kill) the first draft version.
01-16-2008 , 10:41 PM
Quote:
Originally Posted by daveT
Conditional probability says that event A will happen if event B happens.
This is a little misleading. Let's take an example: Random variable X has the values (A=opponent has AA, ~A=opponent doesn't have AA), random variable Y has the values (R=opponent raises UTG, ~R=opponent doesn't raise UTG). Then the unconditional probability p(A) is 1/221, and the unconditional probability p(R) is given by opponent's UTG PFR.

But of course, if someone raises UTG, the chances of him having AA are way bigger than 0.5%, we all know that. The conditional probability of an UTG opponent having AA, given that we have just seen him raise, is given by the number of possible AA combos divided by the number of combos in opponent's UTG raising range.

p(A|R) != p(A)

What we, holding KK in the big blind and seeing UTG raise, are interested in is p(A|R). Normally we don't have this number directly. But we often have a decent handle on p(R|A) and p(R|~A), plus the unconditional probabilities p(A) and p(R). Bayes' Theorem allows us to calculate p(A|R) from them.
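
To make that concrete, here is a small Python sketch of the calculation (the UTG raising frequencies are made-up numbers for illustration, not anything from the book):

```python
# Bayes' Theorem: p(A|R) = p(R|A) * p(A) / p(R), where
# p(R) = p(R|A) * p(A) + p(R|~A) * (1 - p(A)).
# The raising frequencies below are hypothetical assumptions.

def p_a_given_r(p_r_given_a, p_r_given_not_a, p_a):
    p_r = p_r_given_a * p_a + p_r_given_not_a * (1 - p_a)
    return p_r_given_a * p_a / p_r

p_a = 1 / 221              # unconditional chance this opponent was dealt AA
p_r_given_a = 1.0          # assume he always open-raises AA from UTG
p_r_given_not_a = 0.10     # assume he raises about 10% of his non-AA hands UTG

print(p_a_given_r(p_r_given_a, p_r_given_not_a, p_a))  # ~0.043, vs. ~0.0045 unconditionally
```

With these invented numbers, seeing the raise moves AA from roughly 0.45% to roughly 4.3% of his range.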

Quote:
There is no such thing as an unconditional probability.
Although I agree with you, let me state that this is an ideologically loaded question. Frequentists would probably vehemently disagree, unconditionally.
01-17-2008 , 12:39 AM
Yep, the expert was faster than me. But here is still the simplest example:

Think of an urn with 3 white and 2 black balls. From this urn, two balls are picked out at random in two different ways:
a) two balls are taken out at the same time (drawing without replacement)
b) one ball is taken out, put back, and then a second ball is drawn (drawing with replacement)

In case a) you have conditional probabilities and dependent events, because the probability for the second ball depends on the first ball and vice versa (like with the hole cards in Texas Hold'em). In case b) you have independent events and, if you like, the opposite of "conditional". Understanding this difference is one of the most important basics in probability.
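
For anyone who wants to see the difference in numbers, here is a tiny Python sketch (just an illustration, not something from the book) that enumerates both cases:

```python
# Urn with 3 white and 2 black balls: drawing without vs. with replacement.
from itertools import product
from fractions import Fraction

balls = ['W', 'W', 'W', 'B', 'B']

# a) without replacement: ordered pairs of two *different* balls
no_replacement = [(balls[i], balls[j]) for i in range(5) for j in range(5) if i != j]
# b) with replacement: the second draw comes from the full urn again
with_replacement = list(product(balls, repeat=2))

def p_second_white_given_first_white(draws):
    first_white = [d for d in draws if d[0] == 'W']
    return Fraction(sum(d[1] == 'W' for d in first_white), len(first_white))

print(p_second_white_given_first_white(no_replacement))    # 1/2: depends on the first draw
print(p_second_white_given_first_white(with_replacement))  # 3/5: same as the unconditional 3/5
```

Without replacement the conditional probability (1/2) differs from the unconditional 3/5; with replacement they coincide, which is exactly what independence means.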

I am a little bit behind with the reading. I just finished Chapter 1, and I wonder why the important technique of probability calculations with binomial coefficients (as shown a little on page 93ff of this Introduction to Probability script) is not demonstrated in MOP.

I think this is a very important concept for calculating probabilities of card distributions. If you look at the website of Brian Alsbach, you will find a number of examples and you will see very quickly why this is important. I think all beginners should practice binomial coefficient calculations. This really belongs, imho, to the most important basics too.

I'll give just one example, without much explanation:
You have pocket 3s and want to know the probability of flopping at least a set.

I calculate this probability as follows.

I will use the notation BiC(x; y) for the binomial coefficient "x choose y".

Probability of at least a set on the flop = 1 - (BiC(48; 3) / BiC(50; 3)) = 1 - (48 * 47 * 46) / (50 * 49 * 48) = 0.11755, i.e. about 11.8%

If somebody does not understand this calculation, please first read page 93ff of the above-mentioned script. You can make this calculation even easier, that is true. But there are a lot of situations where you have no chance of calculating anything if you do not understand binomial coefficients.
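
The same calculation with Python's built-in binomial coefficient, just to confirm the number:

```python
# At least a set on the flop when holding a pocket pair, via binomial coefficients.
from math import comb

p_no_set = comb(48, 3) / comb(50, 3)  # all three flop cards miss our two set cards
p_set_or_better = 1 - p_no_set

print(round(p_set_or_better, 5))      # 0.11755, i.e. about 11.8%
```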

Last edited by McSeafield; 01-17-2008 at 12:53 AM.
01-17-2008 , 01:58 AM
P(A): the probability of an event A occurring.

And a second definition:

P(A): the unconditional probability of an event A occurring, not contingent on any prior event.
01-17-2008 , 05:43 AM
Quote:
Originally Posted by ygunstah
First of all, the 35% number is completely wrong. The base chance of flopping a set is 12%, not 11%. More importantly, he ignored card removal: you're holding a queen. His chance of flopping a set here is close to 8%. Also, if he doesn't have a set, you're still a serious underdog to AA or KK. I can't be bothered to do the math, but would estimate your chances of having the worst of it are a bit south of 30%.

the example was a simplified example where villain only has a set or an underpair. he never has AA or KK here, and i don't understand what card-removal effects have to do with anything.

to answer the original question, it tells us that we are winning 35% of the time, so we assess that with respect to our pot odds and decide if we have odds to call.

An easier way to do this stuff, assuming you know how often he takes various lines, is to stove it:

Results from http://www.HoldEmRanger.com
641,520 evaluations, 648 hole card combos

Board: Qh 9s 4d

Wins Ties Equity
64.67% 0.00% 64.67% ( AQ )
35.33% 0.00% 35.33% ( 22-33(15),55-88(15),TT-JJ(15),44(70),99(70),JJ(70) )

sorry to derail the thread slightly, i thought that was worth clearing up.

btw, an 'unconditional' probability is a probability that does not depend on anything else, but the term isn't used much as far as i know.
01-17-2008 , 09:10 AM
Quote:
Originally Posted by bozzer
the example was a simplifed example where villain only has a set or an underpair. he never has AA, KK here and i don't understand what card-removal effects have to do with anything.
I didn't notice you'd restricted him to JJ or lower. I gave you the full solution where he plays any pair.

Anyway, card removal is slightly more important for this case; you've eliminated the Q entirely now. He only has two chances with that flop, not three. Think about the extreme case. What if the flop were AKQ? Then you *know* he's bluffing. You're a third of the way there with this flop.

Quote:
Originally Posted by bozzer
to answer the original question, it tells us that we are winning 35% of the time.
You're *losing* 35% (less, in fact). You're way out in front.

Quote:
Originally Posted by bozzer
Board: Qh 9s 4d

Wins Ties Equity
64.67% 0.00% 64.67% ( AQ )
35.33% 0.00% 35.33% ( 22-33(15),55-88(15),TT-JJ(15),44(70),99(70),JJ(70) )
Not sure what you mean here. AQ is an 88% favorite against JJ, better against small pairs.

Anyway, none of this is worth litigating really, because the exercise needs tweaking to make Bayes swing the decision, or what's the point?

ygunstah
01-17-2008 , 09:37 AM
One last way to think about that exercise. With AQ and that flop, you have 81% showdown equity against [JJ-22]. Against even an optimal player, it's a mandatory call at 2:1.

For extra credit: how much of the time should an optimal player bluff at that flop? What's your EV calling against an optimal player? How about against the goofus trips bluffer guy?
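
If anyone wants the EV arithmetic behind "mandatory call at 2:1" spelled out, here is a minimal sketch using the equities quoted in this thread (measuring everything in units of the call size is just a convention):

```python
# EV of calling when getting pot_odds-to-1, measured in units of the call size.

def ev_call(equity, pot_odds):
    # Win pot_odds units with probability `equity`, lose 1 unit otherwise.
    return equity * pot_odds - (1 - equity)

print(ev_call(0.81, 2))    # ~ +1.43 units vs. the [JJ-22] range (81% equity)
print(ev_call(0.65, 2))    # ~ +0.95 units vs. the set-or-bluff range (65% equity)
print(ev_call(1 / 3, 2))   # ~ 0: break-even equity at 2:1 is exactly one third
```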

ygunstah
01-17-2008 , 09:48 AM
So, after using Bayes, we deduce that the likelihood of him having a set is now about 36%, and then we reconcile that figure with the pot odds we are currently being offered to see if it warrants a call. I get it now.

This is really an awesome tool, as it gives a great mathematical framework for logic. Again, that link was fantastic. So, essentially, as we continue to play and gain more knowledge of our villain's tendencies, we simply plug that new data in, updating the Bayesian estimate each time, and then act according to that adjusted number.

THE HUN.
01-17-2008 , 03:55 PM
Quote:
Originally Posted by ygunstah
You're *losing* 35% (less, in fact). You're way out in front.
sorry, yes, he has a set 35% of the time so we are winning 65%.

Quote:
Not sure what you mean here. AQ is an 88% favorite against JJ, better against small pairs.
This is equity against a RANGE; the numbers in brackets are the frequencies with which he takes this line with these hands (as specified in the OP).

Quote:
Anyway, none of this is worth litigating really, because the exercise needs tweaking to make Bayes swing the decision, or what's the point?
the point is that it is a simple example i thought of off the top of my head to introduce bayes' theorem by illustrating how we need to take account of several numbers to work out how often he has a set when he shoves. savvy?
01-17-2008 , 10:27 PM
Quote:
Originally Posted by ygunstah
You're *losing* 35% (less, in fact). You're way out in front.
Quote:
Originally Posted by bozzer
sorry, yes, he has a set 35% of the time so we are winning 65%.
These are contrived classroom examples. Playing in the real world will be tougher.
01-18-2008 , 01:29 AM
Sorry, I don't have much time these days. By the looks of it, I won't be able to participate in this interesting thread for the next 3 weeks, but I will try. Before I fly away, one last word from me.

The section "Estimating Parameters: Classical Statistics" (p. 32 ff.) is written very thinly. No word about the consistency of estimators. No word about required sample sizes. No word about one piece of a priori information which is true with 100% certainty: the rake problem and its impact on win rates.

And the example on page 34 is either ambiguous or wrong. If you have a standard deviation of 2.1 BB/h, then your standard deviation for 100 hands will be, according to (2.4):

sigma_100h = 2.1 * sqrt(100) = 21 BB.

Therefore, in my view, the 95% confidence interval for the player's win rate in this example is:
1.15 BB/100 - (2 * 21 BB) = -40.85 BB/100 <---> 1.15 BB/100 + (2 * 21 BB) = +43.15 BB/100

The problem is that only the expected value and the variance are additive. There is an estimator for the variance of the empirical mean (the win rate), which is variance / sample size, but no unbiased estimator for the standard deviation.

Please discuss !!!
01-18-2008 , 05:00 AM
Quote:
Originally Posted by McSeafield
Sorry, I don't have much time these days. By the looks of it, I won't be able to participate in this interesting thread for the next 3 weeks, but I will try. Before I fly away, one last word from me.

The section "Estimating Parameters: Classical Statistics" (p. 32 ff.) is written very thinly. No word about the consistency of estimators. No word about required sample sizes. No word about one piece of a priori information which is true with 100% certainty: the rake problem and its impact on win rates.
Well, information that is true with 100% certainty isn't a parameter, so *shrug*. What does the rake have to do with win rates? I mean, if you are measuring your win rate as an empirical piece of data, the rake is built in, right?

This discussion of estimating parameters isn't really worthy of a textbook; that's partially because it just isn't that important in poker discussion and we didn't want to spend much time beyond "blah blah this is a confidence interval, etc." In real life, you want to filter all your data through priors, so classical statistics is kind of a weak methodology anyway.

Quote:
Originally Posted by McSeafield
And the example on page 34 is either ambiguous or wrong. If you have a standard deviation of 2.1 BB/h, then your standard deviation for 100 hands will be, according to (2.4):

sigma_100h = 2.1 * sqrt(100) = 21 BB.

Therefore, in my view, the 95% confidence interval for the player's win rate in this example is:
1.15 BB/100 - (2 * 21 BB) = -40.85 BB/100 <---> 1.15 BB/100 + (2 * 21 BB) = +43.15 BB/100

The problem is that only the expected value and the variance are additive. There is an estimator for the variance of the empirical mean (the win rate), which is variance / sample size, but no unbiased estimator for the standard deviation.

Please discuss !!!
It is a little ambiguous if you don't think carefully about what's being stated. The SD/100h isn't the standard deviation for a 100-hand SAMPLE; it is the standard deviation PER 100 hands within the sample. The sample size here is 16,900 hands, as is repeated throughout the example.

Suppose we normalized everything to a per hand basis. Then the win rate would be 0.0115 bb/h and the sd would be 2.1 bb/h. Now we want to talk about a sample of 16,900 hands. As you say, only expectation and variance are additive.

EV_16900 = 16900 * 0.0115 = 194.35 BB
VAR_16900 = 16900 * (2.1)^2 = 74,529 BB^2
SD_16900 = sqrt(VAR_16900) = 273 BB

Now we can leave this here, so a 95% confidence interval is:
194.35 bb +/- 546 bb in 16900 hands.

Maybe you can look at that and intuitively grasp what it means, but I'd prefer to divide it by 169 to get a better handle on it (relative to what I know about win rates):

1.15 bb/100 +/- 3.23 bb/100 (PER 100 hands IN 16900 hands)

Since the renormalization of the SD number occurs AFTER the sample size has already been considered, it's perfectly accurate. The fact that we want to express the win rate and stddev in terms of "per 100 hands" doesn't mean we care what the standard deviation of an actual 100-hand sample would be. So (without checking carefully) your +/- 40 BB numbers are the answer to a different question.
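
Here is the same arithmetic as a short Python sketch, for anyone who wants to replay it (the numbers are the ones from the example above):

```python
# Confidence-interval arithmetic for the p. 34 example: 16,900 hands,
# win rate 1.15 BB/100 (0.0115 BB/hand), standard deviation 2.1 BB/hand.
from math import sqrt

hands = 16900
winrate_per_hand = 0.0115
sd_per_hand = 2.1

ev_sample = hands * winrate_per_hand     # 194.35 BB
var_sample = hands * sd_per_hand ** 2    # 74,529 BB^2 (variance is additive)
sd_sample = sqrt(var_sample)             # 273 BB

low = ev_sample - 2 * sd_sample          # ~95% interval over the whole sample
high = ev_sample + 2 * sd_sample

print(low / 169, high / 169)             # about -2.08 to +4.38 BB per 100 hands
```

That is just 1.15 +/- 3.23 BB/100, matching the numbers above.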

By the way, hi everyone. I can't promise that I will stay right on top of this discussion, but I'll pop in on a regular basis to comment.
01-18-2008 , 05:07 AM
Quote:
Originally Posted by bozzer
btw, an 'unconditional' probability is a probability that does not depend on anything else, but the term isn't used much as far as i know.
I think it's not used that much because there really isn't much call for treating a single probability in this way. Normally we are only concerned about whether two probabilities are independent or not, so that we know that we can shortcut their conditional probabilities because P(A|B) = P(A). I mean, every probability equation designed for independent probabilities can be rewritten for dependent probabilities by using the appropriate conditional probabilities. It's just that if the events are independent, you get to plug in P(A) for P(A|B), which often makes the equations much simpler.
01-18-2008 , 05:18 AM
Re: Using Bayes' Theorem

I don't do formulaic Bayes' Theorem calculations at the table very often. By formulaic I mean thinking about the big equation with the bars. I mean, I guess they are related to BT and all, but most of the time, I just think about how many of each hand type the guy can have.

Suppose the board is AKx and you have a set of kings, and your opponent has shown strength preflop and postflop, such that you figure his value hands basically contain AK and AA, and you need to know how frequent these are relative to each other. There are 3 combos of AK and 3 combos of AA left in the deck, so it's even money between them. You don't need conditional probabilities to do this, even though you could do the calculations that way.
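
The combo count is easy to verify with a few lines of code (the specific suits below are arbitrary assumptions, picked just so the dead cards match the example):

```python
# Combo counting for the AKx example: we hold KK, the board is A-K-x.
from itertools import combinations

our_hand = ['Ks', 'Kh']          # our set of kings
board = ['Ad', 'Kc', '7s']       # AKx flop; the 7s is just a placeholder rag
dead = set(our_hand + board)

aces = [c for c in ['As', 'Ah', 'Ad', 'Ac'] if c not in dead]    # 3 aces left
kings = [c for c in ['Ks', 'Kh', 'Kd', 'Kc'] if c not in dead]   # 1 king left

ak_combos = len(aces) * len(kings)              # 3 * 1 = 3
aa_combos = len(list(combinations(aces, 2)))    # C(3, 2) = 3

print(ak_combos, aa_combos)   # 3 and 3, so it's even money between AK and AA
```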

I think two key ideas about Bayes' Theorem in practice are:

a) Know what your priors are before the dude acts. This is a common error I see people make, in that they don't estimate the prior distribution until their estimate has been tainted by what the guy actually does.
b) Practice and develop an intuitive sense of how much your priors affect the outcome (using BT calculations in the lab to confirm). In the book we talk about this: "a guy sits down, he's x% to be a maniac, y% to be a normal player. Then he raises; what's the impact on whether he's a maniac?" This kind of question trips people up a lot when I ask it offhandedly.