Quote:
Originally Posted by PokerStars Baard
Hello,
I am not planning to take part in the discussions in this thread, mainly because it is a place for the players to discuss the meetings among yourselves. So if I don't reply to questions, please don't take it as an insult.
However, what is quoted here is a clear misunderstanding. In the numbers that DD is referring to, we looked at the results of ALL players who had played a specific game in 2015. So not only is the sample representative, it also covers a long time-period (12 months, in this case).
From this data, we look at how the money lost by the losers in the game is distributed among rake, rewards and winning players. The result, (Rake - Rewards) / Player Winnings, is a useful metric for seeing how rake-sensitive a game is, and also for getting an idea of how dependent the players are on rewards to earn a living in that game.
It can obviously be discussed whether or not this method is suitable to evaluate the state of a game, but there was definitely no cherry-picking of a sample to make the numbers support our point of view.
Thanks,
Baard
Hey Baard
As others have pointed out, the metric you describe suffers from exactly the biases it has been accused of when applied to high stakes games. To put it as simply as possible: suppose 20 regs of equal skill and 4 recreational players play 50/100 during the year. The correct method for determining how much money the regs are winning from the games is to sum the total results of all 20 regs, and to take the total losses of the 4 recreational players to determine how much money is lost.
However, because of sample size and variance, the method you described does not measure this. The samples at 50/100 will be small enough that, say, 5 of the equally skilled pros run badly enough to lose money for the year. Your method therefore overestimates the winnings of the pros by cutting out the ones who ran worst, giving an overestimate of the expected winrate of "winning players" in the game. It also overestimates the amount of money put into the games by losing players, because the losses of the winning players who ran bad get added to that side.
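A quick Monte Carlo sketch makes the bias concrete. All numbers here are made up purely for illustration: 20 regs of equal skill, each with a true expectation of +1 unit per session and a per-session standard deviation of 30 units. Classifying players by their *realised* yearly result, as the metric does, inflates "money won by winners" well above the regs' true combined winnings:

```python
import math
import random

random.seed(1)

N_REGS = 20          # hypothetical regs of equal skill
TRUE_EV = 1.0        # true expectation per session, in units (made up)
SESSION_SD = 30.0    # per-session standard deviation (made up)

def yearly_results(n_sessions):
    """One simulated year: each reg's total is ~ Normal(n*EV, SD*sqrt(n))."""
    return [random.gauss(n_sessions * TRUE_EV,
                         SESSION_SD * math.sqrt(n_sessions))
            for _ in range(N_REGS)]

def compare(n_sessions, trials=2000):
    """Average the naive and correct totals over many simulated years."""
    naive_total = true_total = 0.0
    for _ in range(trials):
        results = yearly_results(n_sessions)
        naive_total += sum(r for r in results if r > 0)  # only "winners" count
        true_total += sum(results)                       # correct: all regs
    return naive_total / trials, true_total / trials

naive_win, true_win = compare(n_sessions=50)
print(f"naive 'money won by winners': {naive_win:.0f}")
print(f"true combined reg winnings:   {true_win:.0f}")
# The naive figure comes out far larger than the true one, because the
# regs who ran bad are dropped from the winners' side and silently
# moved over to the losers' side.
```

The same simulation shows the mirror-image error on the losers' side: the negative results excluded from `naive_total` are exactly the amounts that get wrongly counted as "money lost by losers".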
Putting these two things together gives a very distorted picture, and not at all what you claim to be looking at. This should be clear if you imagine multiplying the sample by 50. If your method were correct, the ratios would stay the same. Of course they would not: with the larger sample all the winning pros would actually be winning, and the average winrate of the players who actually won would be lower than your current estimate. To make it even clearer, imagine applying your method to a single session: the average winrate it produced for "winning players" would be absurdly high.
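The multiply-the-sample-by-50 point can be checked with the same toy model (made-up numbers again: 20 equal regs, +1 unit/session true EV, 30 units/session standard deviation). The ratio of naively counted "winners' winnings" to the regs' true combined winnings is heavily inflated on a short sample and falls towards 1 as the sample grows:

```python
import math
import random

random.seed(2)

def naive_vs_true_ratio(n_sessions, n_regs=20, ev=1.0, sd=30.0, trials=2000):
    """Ratio of 'money won by players who finished ahead' to the regs'
    true combined winnings, over many simulated periods of n_sessions."""
    naive_total = true_total = 0.0
    for _ in range(trials):
        # Each reg's total result ~ Normal(n*ev, sd*sqrt(n))
        results = [random.gauss(n_sessions * ev, sd * math.sqrt(n_sessions))
                   for _ in range(n_regs)]
        naive_total += sum(r for r in results if r > 0)
        true_total += sum(results)
    return naive_total / true_total

short = naive_vs_true_ratio(50)       # short sample: heavily inflated
long = naive_vs_true_ratio(50 * 50)   # 50x the sample: close to 1
print(f"ratio on short sample: {short:.2f}")
print(f"ratio on 50x sample:   {long:.2f}")
```

Note that the ratio can never drop below 1 in this model, since dropping negative results can only increase the total, which is exactly the direction of the skew.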
So the method gets more accurate as the sample increases, but 1 year of high stakes games is simply not a big enough sample for it to be accurate. There will be many players who were +EV but took failed shots etc., adding to your "money lost by losers", while the ones who ran better stayed in the games and added to "money won by winners". If you have done analysis showing that this ratio comes out close to the ratio you would get by doing it properly, then fair enough (although why not just do it properly in that case). But if there is really nobody at the company who understands the skew this method introduces, and it is actually being used to make important decisions, that's a really sad state of affairs.
Apologies if I am just misunderstanding what exactly you are using this metric for and you do realise its limitations, but I don't get that impression from your post or from the posts by the people who were at the meeting.