i-Poker has now been tested for pre-flop all-ins; these tests were similar to other bad beat tests that have been performed.
In total, over 18 million hands were tested. The testing was so extensive because the first two tests found a significant bias in favour of dominated hands, and it was deemed necessary to carry out widespread testing.
Here is a brief run-through of the results and some discussion (for full details see this web page). Each hand sample is 1.1 million hands, apart from sample F, which is 11.5 million hands.
Samples A and B were from 5c/10c games (full ring and 6-max respectively) and covered May, June, and early July 2011.
Sample A, Dominated hands: +3.38 standard deviations from expectancy (this should occur once in approx. every 1,380 tests)
Sample B, Dominated hands: +2.78 standard deviations from expectancy (this should occur once in approx. every 230 tests)
Samples A and B were merged to see how likely it was that both results would occur consecutively. The result for samples A and B combined was +4.28 standard deviations from expectancy (this should occur once in approx. every 53,500 tests).
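The "once in every N tests" figures quoted above appear to correspond to two-tailed probabilities under a normal approximation. A minimal sketch of that conversion (this is an illustration of the standard calculation, not the exact method used in the original tests):

```python
import math

def once_in_n_tests(z):
    """Two-tailed probability of a deviation at least |z| standard
    deviations from expectancy, under a normal approximation,
    expressed as 'once in every N tests'."""
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed tail probability
    return 1.0 / p

# Sample A: +3.38 SD -> roughly once in every ~1,380 tests
print(round(once_in_n_tests(3.38)))
# Merged samples A and B: +4.28 SD -> roughly once in every ~53,000 tests
print(round(once_in_n_tests(4.28)))
```

These reproduce the quoted rarities for sample A and the merged sample to within rounding.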
Widespread testing was then carried out to see whether the bias was widespread. Six more hand samples were used and more than 16 million hands were tested. Hand samples were taken from different stakes during a similar time period to samples A and B, and also from the same stakes as samples A and B (5c/10c) but during a different time period, in 2012.
Results from these tests showed no significant deviations from expectancy.
There are many possible explanations for this isolated bias, although it should be noted that these explanations are speculation; the aim of these tests is to test for a bias, not to explain why it occurs. However, here is some speculation...
1) The bias could be due to variance. However, this is unlikely, as the result should occur only once in every 53,500 tests (about 50 tests have been run). This does not, however, mean that it is impossible.
2) The bias could be due to a bias in the deal. However, this also seems unlikely, as a bias in the deal would be expected to show up consistently across all tables and time periods.
3) The bias could be due to widespread collusion at 5c/10c 6-max and full ring tables during May and June 2011. It has been suggested that colluding players would introduce a bias into the results of the bad beat tests: if a player holds AK, AQ, etc. and is aware that a player they are colluding with also holds an ace or a king, they would be unlikely to call an all-in from a third player. If such collusion was widespread enough to influence the results of tests run over millions of hands, it is likely that 'bots' were colluding together rather than human players.
Some points worth discussing here:
Margin for Error:
The margin for error arising from the Monte Carlo part of the Poker Tracker report was significantly reduced for the dominated hands test on samples A and B. This was achieved by running each test 10 times and taking the mean result.
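The reason averaging repeated runs helps is that the standard error of the mean of k independent Monte Carlo runs is the single-run error divided by sqrt(k), so averaging 10 runs cuts the Monte Carlo noise by a factor of about 3.16. A toy simulation illustrating the effect (this is a generic demonstration, not Poker Tracker's actual report):

```python
import random
import statistics

random.seed(1)

def mc_run(n=2000):
    # One noisy Monte Carlo estimate of a known quantity
    # (the mean of Uniform(0, 1), which is exactly 0.5).
    return sum(random.random() for _ in range(n)) / n

# 200 single-run estimates vs. 200 estimates each averaged over 10 runs.
single = [mc_run() for _ in range(200)]
averaged = [statistics.mean(mc_run() for _ in range(10)) for _ in range(200)]

# The spread of the averaged estimates should be ~sqrt(10) ≈ 3.16x smaller.
print(statistics.stdev(single) / statistics.stdev(averaged))
```

The printed ratio comes out close to sqrt(10), confirming that averaging 10 runs shrinks the Monte Carlo margin for error by roughly a factor of three.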
Card Removal Effects:
The results in samples A and B cannot be attributed to card removal effects, as card removal effects should be reasonably consistent across all bad beat tests, and the results seen in samples A and B have not been witnessed in any other tests run by Online Poker Watchdog.