Quote:
Originally Posted by Angrist
Please. Eli Manning is the GOAT. Only man to beat Tom Brady in a Super Bowl. Twice. So clearly DGAF is just a high variance fish that sucks at poker. /s
Although that is one of the more perfect descriptions of The Abyss I've read.
If I calculate the standard deviation over time, I get this: it looks like it "converges" quickly at first, then it doesn't. There's still a decent amount of difference between 500 hour samples (although not that much "noise").
bip! would know the exact math off the top of his head; I'm not sure how to calculate it exactly mathematically, so I ran some simulations to get a very close estimate. Each bracket below shows the percent chance of a random sample landing within ±2.5BB, ±5BB, ±7.5BB, ±10BB, ±12.5BB, and ±15BB of the true value (the WR line for winrate, the SD line for SD per hour).
SD per hour = 60BB, 100 hour random sample:
WR - [32, 59, 79, 90, 96, 99]
SD - [45, 76, 92, 98, 100, 100]

SD per hour = 60BB, 500 hour random sample:
WR - [64, 93, 99, 100, 100, 100]
SD - [81, 99, 100, 100, 100, 100]

SD per hour = 80BB, 100 hour random sample:
WR - [25, 47, 66, 79, 89, 94]
SD - [34, 62, 81, 92, 97, 99]

SD per hour = 80BB, 500 hour random sample:
WR - [49, 83, 96, 99, 100, 100]
SD - [68, 95, 99, 100, 100, 100]
How to read those results: for example, if we have a 5BB winrate expectation and a true SD per hour of 60BB, and we take a random 100 hour sample of our results, there is about a 59% chance that the sample shows a winrate between 0BB and 10BB (within ±5BB) and about a 76% chance that it shows an SD per hour between 55BB and 65BB.
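For reference, a minimal sketch of the kind of simulation behind those brackets (not the exact spoilered code; it assumes each hour's result is an independent normal draw, and the function name and defaults are mine):

Code:
import numpy as np

def bracket_pcts(true_wr=5.0, true_sd=60.0, hours=100, trials=100_000,
                 margins=(2.5, 5, 7.5, 10, 12.5, 15)):
    """Percent chance a random sample's observed WR / SD per hour
    lands within each margin of the true values."""
    rng = np.random.default_rng(0)
    # Simplifying assumption: each hour is an independent normal draw
    samples = rng.normal(true_wr, true_sd, size=(trials, hours))
    obs_wr = samples.mean(axis=1)
    obs_sd = samples.std(axis=1, ddof=1)
    wr = [round(100 * np.mean(np.abs(obs_wr - true_wr) <= m)) for m in margins]
    sd = [round(100 * np.mean(np.abs(obs_sd - true_sd) <= m)) for m in margins]
    return wr, sd

With the defaults it should land on the 60BB/100 hour lines above to within simulation noise.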
I've put my full Python3 code into a spoiler, as pro programmers may wish to avert their eyes. I welcome any input on methodology, since I'm not up on any current theories of modelling live play. (For example: if 5% of the time our table conditions are "awesome" but with much higher variance, how should/could we model that? One possible approach is sketched below.)
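On that last question, one plausible approach is a two-state mixture: draw each hour's table conditions first, then draw the hour's result from that condition's distribution. A sketch (the 5% figure is from above; the "awesome" winrate and SD values are pure placeholders):

Code:
import numpy as np

def mixed_hour_results(hours, rng, p_awesome=0.05,
                       wr_normal=5.0, sd_normal=60.0,       # baseline conditions
                       wr_awesome=15.0, sd_awesome=120.0):  # placeholder values
    """Each hour: pick table conditions first, then draw that hour's result in BB."""
    awesome = rng.random(hours) < p_awesome
    return np.where(awesome,
                    rng.normal(wr_awesome, sd_awesome, hours),
                    rng.normal(wr_normal, sd_normal, hours))

Feeding hours like these into the same bracket counts would show how much the rare high-variance hours fatten the tails.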