Secondly, I cannot do better than quote my post from the other thread.
Quote:
Originally Posted by whosnext
This is a situation where Bayesian reasoning is directly applicable. As you probably know, Bayesian updating requires some measure of your "prior" beliefs (in this case the 60%, but also how strong your beliefs are).
Using a beta prior distribution, the posterior distribution is also beta. The posterior mean is simply the updated fraction of successes, where the strength of your prior belief is reflected in how many "prior samples" you have.
For example, if you are very sure of your prior belief that your opponent's percentage is 60% (or whatever), then observing one more data point will not change your view much. Of course, on the other hand if your prior belief is pretty weak, then one (or a few) observations can nudge your updated views quite a bit.
I recommend fiddling around with the hypothetical prior sample size N that best reflects the strength of your belief. For example, how many hands you have previously played versus this opponent, etc.
Then your best updated percentage is simply ((P)(N)+S)/(N+T), where P is the prior expected percentage (in your case 60%), N is the prior sample size, S is the number of new "successful" observations (e.g. 2), and T is the total number of new observations (e.g. 5).
In words, you simply update the percentage where you treat your prior as having N samples.
Hope this makes sense.
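The quoted update rule is easy to sanity-check in code. A minimal Python sketch (the function name is mine, not from the thread) using the poker numbers above, prior P = 60%, then 2 successes in 5 new hands, with a strong versus a weak prior:

```python
def bayes_update(P, N, S, T):
    """Posterior mean of a binomial probability under a beta prior.

    P: prior expected probability
    N: prior (pseudo-)sample size reflecting strength of belief
    S: new successes observed
    T: total new observations
    Equivalent to a Beta(P*N, (1-P)*N) prior updated with S successes in T trials.
    """
    return (P * N + S) / (N + T)

# Strong prior: one session of new data barely moves the estimate.
strong = bayes_update(0.60, 1000, 2, 5)
# Weak prior: the same data nudges the estimate a lot.
weak = bayes_update(0.60, 10, 2, 5)
print(f"strong prior (N=1000): {strong:.4f}")  # 0.5990
print(f"weak prior   (N=10):   {weak:.4f}")    # 0.5333
```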
This "Bayesian" approach differs from the "Classical" approach in that it is not focused upon testing the hypothesis that your observed frequency (W) came from some assumed-to-be-true binomial process with underlying probability (P).
Instead, the Bayesian approach asks: given that, prior to observing new data {frequency W}, I believed the underlying process was binomial with probability P, what is my new belief about the underlying binomial process {what is the new probability P'}?
So if you observe a little data with W near P, you would not move your new estimate of P (call it P') much at all (P' would be very close to P).
But if you observe a lot of data with W far away from P, you would rightly move your new estimate of P (P') significantly (P' would be far away from P).
Bayesians talk about their "prior" belief about P and then, after observing new data, updating their belief about P with a "posterior" distribution.
The formula above, ((P)(N)+S)/(N+T), is a way to update your belief about the underlying process.
For example, if prior P=.22 in a prior sample of 1,000 (N=1,000), and you observed a new sample of 28 "successes" out of 100 new sample observations, then under the assumptions described in the previous thread's post, your "posterior" updated expectation for the binomial probability is given by:
P' = [(.22)*(1000) + 28] / [1000 + 100] = (220+28)/1100 = 248/1100 = .2254545
You will readily see that the more samples the new frequency W is based upon, the more you will adjust your updated estimate. In particular, if N=T, then your updated estimate is simply the average of your prior belief and your new sample frequency. And if your new sample is quite small relative to the number of your prior observations, you will not adjust your estimate very much based upon the new sample data.
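The worked example and the N=T averaging property can both be checked with a couple of lines of Python (function name is mine for illustration):

```python
def bayes_update(P, N, S, T):
    """Posterior mean: (prior successes + new successes) / (prior N + new N)."""
    return (P * N + S) / (N + T)

# Worked example above: P = .22, N = 1,000, then 28 successes in 100 new observations.
print(bayes_update(0.22, 1000, 28, 100))  # 0.22545...

# When N equals T, the posterior mean is exactly the average of the
# prior P and the new sample frequency S/T.
P, N, S, T = 0.22, 100, 28, 100
assert abs(bayes_update(P, N, S, T) - (P + S / T) / 2) < 1e-12
```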
One "challenge" with this specific approach is that you have to have a good sense of what N is, the number of previous observations from which the prior P sprang. Of course, this reflects how confident you are in the prior P.
This is where the previous thread recast the Bayesian approach listed above to one dealing with an estimate of the standard deviation (either from prior observations or as a measure of your "confidence" in your prior estimate) around your prior estimate of the underlying binomial frequency.
Under this specific approach of using a standard deviation, new formulas were posted to derive the updated "posterior" estimate of the binomial frequency.
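I can't reproduce the previous thread's exact formulas here, but one standard way to translate a standard deviation into an effective prior sample size, assuming a beta prior as above, uses the beta distribution's variance formula: a beta prior with mean P and pseudo-sample size N has variance P(1-P)/(N+1), so N = P(1-P)/sd² - 1. A hedged sketch:

```python
def effective_prior_n(P, sd):
    """Effective prior sample size N for a beta prior with mean P and std dev sd.

    A beta prior with mean P and pseudo-sample size N = a + b has
    variance P*(1-P)/(N+1), so N = P*(1-P)/sd**2 - 1.
    """
    return P * (1 - P) / sd**2 - 1

# E.g., a prior belief of 60% held with a standard deviation of 5 percentage points
# corresponds to roughly 95 prior observations:
N = effective_prior_n(0.60, 0.05)
print(f"effective N = {N:.0f}")  # 95
```

Once you have N, you plug it into the same ((P)(N)+S)/(N+T) update as before.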
Last edited by whosnext; 01-27-2016 at 02:09 PM.