Quote:
Originally Posted by dsh_spb
BTW, in your response you are right. But, please, explain what kind of evidence would deserve your serious attention. Is it possible for an individual player to obtain evidence of that kind? How?
The posts you mention are a typical rigtard way of doing statistics. Here is how they function:
The rigtard plays a bunch of hands and notices that he has done badly in one particular area.
In this case it is set over sets, in another it might be losing to an ace coming on the flop when he has K-K, in yet another it might be losing to a flush card coming on the river.
The rigtard then analyses his data, if he is capable of doing so, and finds that indeed he did lose to set over set more often than he should. Aha, proof of rigging, he thinks.
Perhaps if the rigtard had analysed how often he wins with Ax against KK, or hits a flush, he would have concluded it was rigged in his favor, but of course that is not the goal of this person.
In any case this is totally *NOT* how statistics is done. Effectively they are collecting the data first, then constructing a hypothesis that fits the data. If they had lost to a bunch of flushes, that would have been their hypothesis. The hypothesis is guaranteed to fit the data pretty well; otherwise a different hypothesis would have been chosen.

Additionally there is a tendency to cherry-pick. E.g. the rigtard in question lost a staggering number of set over set hands in May, but didn't lose many at all (or even won several) in April, yet chooses to only analyse May's data (presumably the poker site singled him out for special treatment in May).
The way statistics is actually done is to come up with a hypothesis and then collect the data.
i.e. the rigtard should say 'Online poker is rigged. Set over set hits way too often', *then* go and collect some pre-determined amount of data and see if it matches his hypothesis.
It's fine to use 'data that has already occurred' as long as you haven't analysed it in any way before picking a hypothesis. In other words, you could say 'set over set happens way too often on a site' and then go analyse a large unbiased hand sample if you have one, but it's not okay to say 'man, I lost a lot of set over sets this month, they happen way too often on a site'.
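Done properly, that pre-registered hypothesis turns into a concrete test: fix the expected per-hand probability and the sample size *before* looking at any hands, then count and compute how surprising the result would be if the site were fair. Here is a minimal Python sketch; the probability, sample size and observed count are made-up numbers for illustration only (the real set-over-set frequency depends on stakes, table size and how you count qualifying hands):

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    'hits' in n independent trials if each trial hits with probability p.
    Uses a multiplicative recurrence on the pmf to stay in float range."""
    q = 1.0 - p
    term = q ** n                    # P(X = 0)
    total = term if k <= 0 else 0.0
    for i in range(1, n + 1):
        term *= (n - i + 1) / i * (p / q)   # P(X = i) from P(X = i - 1)
        if i >= k:
            total += term
    return total

# Hypothetical numbers, fixed in advance of collecting the data:
p = 0.001   # assumed chance of set-over-set in a qualifying hand
n = 20000   # sample size decided BEFORE looking at results
k = 31      # set-over-set confrontations actually observed

print(f"Expected about {n * p:.0f}; saw {k}.")
print(f"P(>= {k} in {n} hands if fair) = {binom_sf(k, n, p):.4f}")
```

If that tail probability is tiny, the pre-stated hypothesis has real support; if you only wrote the hypothesis down after spotting the streak, the same number means very little.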
Let me give a concrete example. Write a computer simulation that rolls a six-sided die a million times. If you can't be bothered (I couldn't either), it still works as a thought experiment.
See which number came up most often in the million rolls. *THEN* make a hypothesis: 'the die is rigged, *number* comes up too often'. If the data is pretty evenly spread, you can pick a different hypothesis that does fit the data, like 'there are too many pairs of consecutive identical rolls' or '4s follow 6s way too often'. What do you think the chances are that you *can't* find *some* hypothesis of rigging the data will back up, even if the data was indeed purely random? Do you see the difference between doing this and making the hypothesis first, then rolling the die a million times?
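The thought experiment above is easy to run for real. A short Python sketch (seeded only so the run is repeatable; the consecutive-pairs tally stands in for one of the post-hoc hypotheses mentioned):

```python
import random
from collections import Counter

random.seed(1)  # arbitrary seed, just for reproducibility
N = 1_000_000
rolls = [random.randint(1, 6) for _ in range(N)]
counts = Counter(rolls)

# Post-hoc hypothesis #1: whichever face came up most, "too often".
face, hits = counts.most_common(1)[0]
expected = N / 6
print(f"Face {face} came up {hits} times (expected {expected:.0f})")
# Some face is always above the expected count, because the maximum of
# the six tallies can't be below their average. Picking that face
# *after* looking guarantees an 'anomaly'.

# Post-hoc hypothesis #2: consecutive identical rolls.
pairs = sum(1 for a, b in zip(rolls, rolls[1:]) if a == b)
print(f"Consecutive identical rolls: {pairs} (expected {(N - 1) / 6:.0f})")
```

Run it a few times with different seeds: every run hands you at least one "suspicious" pattern, even though the generator is as fair as it gets. That is exactly the trap of choosing the hypothesis after seeing the data.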