my luck analysis project

09-29-2009 , 08:58 PM
reading armor's thread inspired me to do my own luck analysis project.

i think it's possible to account (partially, of course) for running lucky or unlucky on the flop in a practically unbiased way.

i already used it on my own data, but to create an even better tool i would need data from other players.

first i will describe what i did with my own data.

i know what range of hands i would never fold preflop. so, using HEM filters i found the following:

N=95392 - hands i would never fold pf AND saw flop
n=14738 - hands i would never fold pf AND saw flop AND flopped TP or better

dW=195.5BB/100 - the difference in winrates between the n hands described above and the (N-n) hands, i.e. hands that saw flop but didn't flop TP or better.

note: i calculated the uncertainty for dW and found that it has a negligible effect on my winrate correction, compared to a more important source of uncertainty which i will describe below.
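
for concreteness, here's a rough python sketch of how the counting works - the hand-record fields ("never_fold_pf", "saw_flop", "flopped_tp_plus", "result_bb") are made up for illustration, since in reality the counts came straight out of HEM filters:

Code:
# rough sketch of the filtering step, assuming a hypothetical list of hand
# records exported from a tracker; field names are made up for illustration
def flop_luck_inputs(hands):
    """Return N, n and dW (in BB/100) as defined above."""
    kept = [h for h in hands if h["never_fold_pf"] and h["saw_flop"]]
    hit = [h for h in kept if h["flopped_tp_plus"]]
    miss = [h for h in kept if not h["flopped_tp_plus"]]

    N, n = len(kept), len(hit)

    # winrates in BB/100 for the two groups
    wr_hit = 100.0 * sum(h["result_bb"] for h in hit) / len(hit)
    wr_miss = 100.0 * sum(h["result_bb"] for h in miss) / len(miss)

    dW = wr_hit - wr_miss  # ~195.5 BB/100 in my sample
    return N, n, dW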

i was then interested in the following question: since i got back from my 1.5 month vacation i may have inadvertently changed style. so far my winrate has been below my overall winrate, so it would be great to know if that's due to bad luck or a style change. i'm obviously not hoping to estimate all the luck, but even estimating a small portion of it would already be cool. so, since i got back my data is as follows:

N2=6232 - hands _this month_ i would never fold pf AND saw flop
n2=949 - hands _this month_ i would never fold pf AND saw flop AND flopped TP or better

so now the question is - did i flop more or fewer good hands this month than i should have? if i had an infinite sample of previous hands that would be easy to answer precisely. given my limited sample, i can answer it with some uncertainty:

from N and n, i estimate that i should flop TP or better 15.45% of the time (=n/N) with a sigma of 0.10%. this, btw, is the crucial step where other people's data can come in - this sigma is the biggest source of uncertainty, and if we all agreed on the exact range and the exact filtering procedure we could greatly reduce this sigma, and the result would be applicable to everyone.

from N2 and n2, i see that this month i flopped TP or better 15.23% of the time. basically, i should have flopped a good hand

n2' = 15.45% * N2 = 962.84 = n2 + 13.84 times
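
here's that arithmetic in python; the binomial standard error formula is my guess at where the ~0.1% sigma comes from (it gives ~0.12% on my sample, same ballpark):

Code:
# quick check of the numbers above; the binomial standard error is an
# assumption about how the ~0.1% sigma was estimated
from math import sqrt

N, n = 95392, 14738        # overall: never-fold hands that saw a flop / flopped TP+
N2, n2 = 6232, 949         # same filters, this month only

p = n / N                              # ~0.1545, i.e. 15.45%
sigma_p = sqrt(p * (1 - p) / N)        # ~0.0012, same ballpark as the 0.10% quoted

expected = p * N2                      # ~962.8 flops with TP or better
deficit = expected - n2                # ~13.8 "missing" good flops this month
print(p, sigma_p, expected, deficit)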

so now, the correction to my winrate this month is:

WR=13.84*dW=13.84*195.5BB/100=13.84*1.955BB=27.1BB - how much more i should have won

due to the above-mentioned 0.10% uncertainty in n/N, that result has a large uncertainty as well, roughly 13BB.

translating these extra winnings that i'm "entitled" to into the winrate for the 8k hands this month leads to the following correction:

if i flopped TP or better fairly this month my winrate would have been 0.32BB/100 higher with an uncertainty of roughly 0.16BB/100
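
and the last step in python, using the quoted numbers (the exact BB/100 figure depends on the exact hand count this month, which was roughly 8k):

Code:
# turning the ~13.8 missing good flops into a winrate correction
deficit = 13.84            # good flops i was "owed" this month
dW = 195.5 / 100           # BB per hand difference between TP+ and non-TP+ flops
sigma_p = 0.0010           # quoted uncertainty in n/N
N2 = 6232

missing_bb = deficit * dW              # ~27.1 BB i "should" have won
sigma_bb = sigma_p * N2 * dW           # ~12 BB, i.e. the "roughly 13BB" above

hands_this_month = 8000                # roughly; exact count shifts the BB/100 a bit
correction_per_100 = 100 * missing_bb / hands_this_month   # ~0.3 BB/100
print(missing_bb, sigma_bb, correction_per_100)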

not too bad of a correction. this can clearly be done for other situations that occur on the flop. basically this result alone can potentially get rid of about 2% of the overall variance (provided that the 0.10% is greatly decreased from other people's data or - better yet - a simulation).

before asking anybody to provide their own data i would like to give an opportunity for the vultures to pick at the tender and delicate flesh of my baby project.
09-29-2009 , 11:36 PM
People have developed a good system for that. It can be found on the paradime poker website. It is used to determine win rates when luck is removed for sessions with the featured sonia bot. If you go to the site this post will make more sense.
09-30-2009 , 01:00 AM
Quote:
Originally Posted by cardinosaur
People have developed a good system for that. It can be found on the paradime poker website. It is used to determine win rates when luck is removed for sessions with the featured sonia bot. If you go to the site this post will make more sense.
ok, i thought of like a gazillion sarcastic remarks after reading your post that my head almost exploded but i'll try to contain myself. so here goes: it really doesn't matter that you can remove some portion of luck for sessions with sonia, because that method is not applicable for your actual games. i would think this is obvious.
09-30-2009 , 02:49 AM
Quote:
Originally Posted by GrizzlyMare
ok, i thought of like a gazillion sarcastic remarks after reading your post that my head almost exploded but i'll try to contain myself. so here goes: it really doesn't matter that you can remove some portion of luck for sessions with sonia, because that method is not applicable for your actual games. i would think this is obvious.
wow you must have an annoying personality. i would think this is obvious though.

sorry for the off topic post leader.
09-30-2009 , 03:34 AM
i don't want to turn this thread into a debate about my posting style.

right now the only available tool for removing small portions of luck is the all-in tool, but it only works for nl. what i've done has the potential to remove quite a bit of luck from the hulhe games. just the TP+ grouping alone can actually remove about 8-9% of the total variance (the 2% figure i gave in the op is incorrect - it was calculated for a narrower grouping) if we can get close to the exact probability of flopping TP+.

there's one potential pitfall in what i've done (most likely nothing too bad), but i'd rather have somebody else point it out, since that would indicate they clearly understood the problem.
09-30-2009 , 04:27 AM
yeah, my bad. i have nothing besides that to contribute and ill stay out.
09-30-2009 , 07:09 AM
err, i don't see why you can't apply the same techniques.
09-30-2009 , 09:01 AM
actually you can't use a lot of the techniques, wasn't thinking. anyways gl with this ******ed project.
10-01-2009 , 08:54 PM
Quote:
Originally Posted by GrizzlyMare
there's one potential pitfall in what i've done (most likely nothing too bad), but i'd rather have somebody else point it out, since that would indicate they clearly understood the problem.
Whether or not dW is an accurate winrate correction term after your holidays?
10-01-2009 , 10:31 PM
Quote:
Originally Posted by TSchauenberg
Whether or not dW is an accurate winrate correction term after your holidays?
that's not the pitfall i was thinking about. for dW i could use either the winrate with good hands for my overall sample or just for the 8k hand sample since my vacation; both will result in variance reduction - as long as the uncertainty in dW (whether due to variance or to a possible style change) is substantially smaller than dW itself, which is clearly the case when dW is as large as in my example.

for samples substantially smaller than 8k you would definitely want to use the overall winrate, because of the high variance in the small sample. even for 8k i feel it's slightly better to use the overall one.

EDIT: for those familiar with DIVAT, the fact that i can use either of the two winrates for my correction is the same idea as in DIVAT, where you can obtain correction terms in lots of different ways, each of which will result in variance reduction as long as it even remotely approximates the objective corrections.

Last edited by GrizzlyMare; 10-01-2009 at 10:49 PM.
10-06-2009 , 12:18 AM
Not sure if one of these is your 'pitfall', but I have two sources of bias so far:

There is a card removal effect (when you see a flop you may be more likely to be sharing cards with your opponent, reducing the chance you flop TP or better).

Some subset of your hands is far more likely to flop top pair than the total set of no-fold hands (large pairs). If (you have an SB openlimping strategy that is unbalanced across this metric AND your opponent folds his BB > 0) OR (you have a BB 3bet strategy that is unbalanced across this metric AND your opponent raise/folds > 0), you may introduce a bias.

It is impossible to accurately estimate the magnitude of error one of these biases might introduce, as it depends entirely on the interaction between your and your opponent's strategies. You could guess at some cases with close to maximum interference and see if the biases are significant, but this would require a substantial simulation effort.

edit: the above only come into play when your opponents' strategies are inconsistent, but there is also a possible correlation between Ni and ni that is sample size dependent (driven by the same large-pair bias described above), which is probably what you were thinking of?
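
To make the card removal point concrete, here is a deliberately extreme toy case in python (exact combinatorics, no hand evaluator - "pairing a hole card" stands in as a rough proxy for TP or better); the aggregate bias over a realistic range would of course be much smaller:

Code:
# toy illustration of card removal: hero holds AsKd and we ask how often the
# flop pairs at least one hole card, with and without the opponent known to
# hold an ace and a king himself
from math import comb

def p_pair_hole_card(cards_left, outs_left):
    """P(the 3-card flop contains at least one of hero's remaining outs)."""
    return 1 - comb(cards_left - outs_left, 3) / comb(cards_left, 3)

# villain's cards unknown: 50 unseen cards, 3 aces + 3 kings pair hero
p_random = p_pair_hole_card(50, 6)     # ~0.324

# villain known to hold AhKc: 48 unseen cards, only 2 aces + 2 kings left
p_blocked = p_pair_hole_card(48, 4)    # ~0.234

print(p_random, p_blocked)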

Last edited by admiralfluff; 10-06-2009 at 12:33 AM.
10-06-2009 , 02:53 AM
your second source of bias - the fact that your ranges for openlimping and openraising are different in their ability to flop top pair - is something i haven't thought of. i guess it didn't enter my mind because i don't openlimp, but of course there are some good players who do. for them the problem can be partially fixed by multiplying the openlimping hands by a factor equal to the probability that our average opponent folds when we openraise - that would weight the openlimping hands fairly. that's not a complete fix, because when we openlimp our opponent sees the flop with a 100% range, as opposed to cases when we openraise, where he typically sees the flop with something like the top 80-90%.

which brings me to the pitfall that i had in mind:

what if the range that your opponent saw the flop with in the sample you're trying to estimate your luck for (8k hands in my example) is tighter/looser than for the overall sample? you are somewhat less likely to flop top pair or better if your opponent's pf range becomes tighter.

i investigated the magnitude of this effect. basically if we compare the chance of flopping TP+ vs. 100% range with the chance of flopping TP+ vs. THE BOTTOM 13%, the difference will be only 0.3%. simple math then shows that for 100% vs. the top 87% the difference will be only 0.04%. in reality, it's extremely unlikely that your opponents in the smaller sample will be so drastically different from your opponents in the overall sample. so basically, for all intents and purposes the magnitude of this effect is negligible, especially if the smaller sample is as small as 10k hands (the smaller the sample, the higher the ratio of [deviation from fair]/[error in our estimate of what's "fair" due to the pitfall]).
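
(one way to see the 0.04%: the 100% range is just an 87/13 mixture of the top 87% and the bottom 13%, so

P(TP+ vs 100%) = 0.87*P(TP+ vs top 87%) + 0.13*P(TP+ vs bottom 13%)

which means the 100%-vs-top-87% difference is (13/87)*0.3% ≈ 0.04%.)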

i believe that takes care of your first source of bias as well.
