I wrote this math proof a while back. I believed it was correct then, and I still do.
Two players are in a showdown, with b board cards yet to be seen (b = 1, 2, or 5). If the player's probability of winning is W, then, before any board cards are dealt, W is also the probability that any particular set of b cards from the remaining deck gives the player a win. For example, dealing the second set of b cards is equivalent to the dealer burning b+1 cards before dealing instead of the usual one burn card. If the showdown is run twice, the player's expectation can be found as follows:
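As a sanity check on that exchangeability claim, here is a small enumeration (a sketch, not part of the original proof) using a toy model with b = 1, where the player wins a run exactly when a "winning" card lands on that run's board. The deck size of 44 and the 9 winning cards are illustrative numbers, not taken from the text:

```python
from fractions import Fraction
from itertools import permutations

def win_prob_of_run(n_unseen, n_winning, run_index):
    """P(the card dealt to run `run_index` is a winning card), with b = 1.

    Cards 0..n_winning-1 are the player's winning cards. Enumerating all
    ordered pairs of distinct cards covers every way the two runs can go.
    """
    hits = total = 0
    for order in permutations(range(n_unseen), 2):
        hits += order[run_index] < n_winning
        total += 1
    return Fraction(hits, total)

# Both the first set of b cards and the "burned-past" second set win with
# probability W = 9/44:
print(win_prob_of_run(44, 9, 0))  # 9/44
print(win_prob_of_run(44, 9, 1))  # 9/44
```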
Let
P = total pot
X1 = 1 if the player wins the first run, else X1 = 0; X2 is defined the same way for the second run. X1 and X2 are binary (Bernoulli) random variables.
Then E(X1) = W*1 + (1-W)*0 = W, and likewise E(X2) = W, since both variables are defined prior to any cards being dealt.
For each run the player wins, he collects P/2, per the standard run-it-twice payout.
Therefore, we can write the amount won in 2 runs as
Amount Won = X1*P/2 + X2*P/2 = (X1 + X2)*P/2
Since the expected value of a sum is equal to the sum of the expected values,
EV = [E(X1) + E(X2)]*P/2
But E(Xi) = W for all i; therefore
EV = 2W*P/2 = W*P
But W*P is the player's EV when the hand is run only once, which proves that the EV does not change when running it twice. The proof extends directly to running it r times.
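The EV identity can be checked by brute force in the same toy model (b = 1; the 44-card deck, 9 winning cards, and pot of 100 are illustrative assumptions):

```python
from fractions import Fraction
from itertools import permutations

def ev_run_once(n_unseen, n_winning, pot):
    # Run it once: the player collects the whole pot with probability W.
    return Fraction(n_winning, n_unseen) * pot

def ev_run_twice(n_unseen, n_winning, pot):
    # Run it twice with b = 1: average the payout over every ordered pair
    # of distinct cards; each run that hits a winning card pays pot/2.
    total = Fraction(0)
    pairs = 0
    for c1, c2 in permutations(range(n_unseen), 2):
        wins = (c1 < n_winning) + (c2 < n_winning)
        total += Fraction(wins * pot, 2)
        pairs += 1
    return total / pairs

# W*P = (9/44)*100 either way:
print(ev_run_once(44, 9, 100), ev_run_twice(44, 9, 100))  # 225/11 225/11
```

Note that the enumeration makes no independence assumption: the two runs draw from the same deck without replacement, and the EVs still agree exactly, as the proof says they must.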
To show that the variance for running it twice (RIT) is lower than the variance for running it once (RIO), note that Var(cX) = c^2*Var(X) and that the variance of a Bernoulli variable with mean W is W(1-W). Treating the two runs as uncorrelated (their covariance is in fact typically negative, since a win on the first run removes cards the second run would need to win, and a negative covariance only lowers the variance further), we have
Var(RIT) = 2*(P/2)^2*W*(1-W) = 1/2 * P^2*W*(1-W)
<= P^2*W*(1-W) = Var(RIO)
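The variance claim can be checked with the same exact enumeration (again with the illustrative b = 1, 44-card, 9-out, pot-of-100 toy model). Because the enumeration draws without replacement, it also captures the negative covariance between the runs, so the RIT variance comes out strictly below 1/2 * P^2*W*(1-W):

```python
from fractions import Fraction
from itertools import permutations

def variance(payouts):
    # Exact population variance over an equally likely list of outcomes.
    mean = sum(payouts) / len(payouts)
    return sum((x - mean) ** 2 for x in payouts) / len(payouts)

def var_run_once(n_unseen, n_winning, pot):
    # Payout is pot with probability W, else 0: variance P^2*W*(1-W).
    w = Fraction(n_winning, n_unseen)
    return pot ** 2 * w * (1 - w)

def var_run_twice(n_unseen, n_winning, pot):
    # Exact payout distribution over all ordered pairs of distinct cards;
    # each run that hits a winning card pays pot/2.
    payouts = [Fraction(((c1 < n_winning) + (c2 < n_winning)) * pot, 2)
               for c1, c2 in permutations(range(n_unseen), 2)]
    return variance(payouts)

rio = var_run_once(44, 9, 100)
rit = var_run_twice(44, 9, 100)
print(rit < rio / 2 < rio)  # True: RIT variance is below half the RIO variance
```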