Quote:
Originally Posted by turn & fall
NO, NO, NO, NO.
Given the sample you have it is 95%. However your sample is just a sample it is not population.
So what you're saying is: the math is correct, but for the wrong thing? I have a 95% chance of being a winning player when I get dealt the same "luck" (good and bad included) that I had in these 12 sessions, but if my luck changes over my next 12 sessions, then that stretch may have a totally different range it has a 95% chance to fall within? (Example: when I'm running bad, there's a 95% chance I'm between W and X BB/100, but the 12-session sample above has a 95% chance of being between Y and Z BB/100.) Is that what you're saying?
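The "95% range for these 12 sessions" idea can be made concrete with a quick confidence-interval calculation. This is only a sketch: the session results below are made-up numbers standing in for the poster's actual sessions, and the t critical value is hardcoded for 12 samples.

```python
import math
import statistics

# Hypothetical per-session results in BB/100 (NOT the poster's real data).
sessions = [12.0, -35.0, 8.0, 22.0, -10.0, 40.0, 5.0, -18.0, 30.0, 15.0, -5.0, 25.0]

n = len(sessions)
mean = statistics.mean(sessions)
sd = statistics.stdev(sessions)        # sample standard deviation
se = sd / math.sqrt(n)                 # standard error of the mean

t_crit = 2.201                         # two-sided 95% t critical value for df = 11
lo, hi = mean - t_crit * se, mean + t_crit * se
print(f"mean = {mean:.1f} BB/100, 95% CI = ({lo:.1f}, {hi:.1f})")
```

The width of that interval is the whole argument about sample size: with only 12 sessions, the standard error is large, so the interval usually spans both winning and losing rates.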
I'm not sure I learned Bayes' theorem in Stats I and II, but I'll try to refresh myself on that and on prior distributions. Did you mention these because you're saying CIs don't work for this type of problem? Maybe you're saying that because poker has both "skill" and "luck" aspects, a CI is impossible to use?

If you were averaging students' test scores, you wouldn't include the scores of students who took a test on the same chapter but were given different questions. But if that's the analogy, then every hand from the deal to the river should be its own sample (when I have KK and an Ace flops, a 2 comes on the turn, and a 4 comes on the river, there's a 95% chance I make/lose between A and B in that exact spot)... unless you treat all forms of luck in poker as part of "Poker" as a whole, and accept that the best hand doesn't always win (so players can create their own luck too, not just run good). If you don't make each hand its own separate "test," then you actually have some data to work with.

I've lost hands I was supposed to win, won hands I was supposed to lose, folded the best hand, and called with the worst hand, so it's not like I've just never lost a big pot. I think in my 2nd or 3rd session I lost a buy-in right off the bat.
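For what the Bayesian suggestion looks like in practice, here is a minimal grid-approximation sketch. Every number in it is illustrative: the prior (centered slightly below zero, reflecting the common claim that most live $1/$2 players are small losers), the observed sample mean, and the standard error are all assumptions, not the poster's data.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution; used for both prior and likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Grid of candidate "true" win rates in BB/100, from -30.0 to +30.0.
grid = [w / 10 for w in range(-300, 301)]

# Prior belief: most players are modest losers (assumed, not established).
prior = [normal_pdf(w, -2.0, 5.0) for w in grid]

# Likelihood of seeing a sample mean of 7.4 BB/100 with standard error 6.5
# (hypothetical stand-ins for the 12-session sample).
like = [normal_pdf(7.4, w, 6.5) for w in grid]

# Posterior = prior * likelihood, renormalized over the grid.
post = [p * l for p, l in zip(prior, like)]
total = sum(post)
post = [p / total for p in post]

p_winner = sum(p for w, p in zip(grid, post) if w > 0)
print(f"posterior P(true win rate > 0) = {p_winner:.2f}")
```

The point of the prior is exactly the objection raised in the thread: a small hot sample gets pulled back toward what we already believe about the population of $1/$2 players, rather than being taken at face value.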
Does it not make sense, though, that after 12 sessions of playing $1/$2 blinds, with me up $844 after 50 hours, there's probably a 90% chance that I'm a break-even-or-better player, and a 95% chance that at worst I'm a 3% loser? In these 12 sessions I've been buying in for $100 instead of the full $200. That makes it easier for opponents to call the $30 on the end or whatever, because "it's just $30 more." But if I'm buying in for $100 every time, I'd have to lose 8.44 buy-ins just to fall back to break-even. So doesn't it seem like there's maybe a 10% chance I lose 8.44 buy-ins over however many sessions I play next, and a 5% chance I end up negative overall? Surely that seems pretty reasonable? A big part of statistics is checking whether your final result passes a sanity test, too.
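That sanity check can be run the other way around: how often would a true break-even player show +$844 over 50 hours purely by variance? The sketch below assumes roughly 30 hands per hour of live play and a standard deviation of 80 BB/100, a figure often quoted for live no-limit; both numbers are assumptions, not measurements from the poster's game.

```python
import random

random.seed(1)

HANDS = 1500           # ~50 hours of live play at ~30 hands/hour (assumption)
SD_PER_100 = 80.0      # assumed live-NL standard deviation in BB per 100 hands
BB = 2.0               # big blind is $2 at $1/$2
OBSERVED = 844.0       # the reported profit in dollars

# Variance scales linearly with hands, so SD scales with sqrt(hands).
sd_per_hand = SD_PER_100 / math_sqrt100 if False else SD_PER_100 / (100 ** 0.5)

trials = 20_000
hits = 0
for _ in range(trials):
    # One simulated 1500-hand stretch for a player whose true win rate is 0.
    total_bb = random.gauss(0.0, sd_per_hand * HANDS ** 0.5)
    if total_bb * BB >= OBSERVED:
        hits += 1

print(f"P(break-even player wins >= $844 over {HANDS} hands) ~ {hits / trials:.2f}")
```

Under these assumptions the answer comes out in the high single digits of percent, i.e. not rare at all, which is the variance argument the other posters are making: a +$844 start is consistent with being a winner, but also quite consistent with being exactly break-even.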
Thanks for the input so far, everybody! Glad people can actually think outside the box / show me the light / whatever, and not just think "too small of a sample size; I know nothing about statistics, but that's what I read somewhere sometime, and I don't really think for myself."