Quote:
Originally Posted by PokerPlayer66
Standard deviation is basically opinion since not all information is known.
In the case of AA, you just compare it to all possible hands preflop and you know you're the favourite no matter what.
Standard deviation is just a random formula invented to tell us what the guy who invented the formula thinks is a reasonable distribution of results. No, I don't have a PhD in statistics and work with numbers all day, but then again I'm guessing not many people in this forum do either. Common sense tells us though that anyone can write a formula and say this shows what is reasonable. Why is that specific formula correct? Why couldn't we use a different one? We could, evidently. It's just become convention to use standard deviation instead.
It's clearly nothing at all like equity calculations, where all possible outcomes are known in advance and can be calculated.
On all-ins, all the relevant information IS known. With two players all-in preflop, 4 cards are known and 48 are unknown. That gives exactly 1,712,304 combinations (48 choose 5) for how the board can run out, and it is EXACTLY known how many of those combinations you win and how many the other guy wins.
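You can check that count in one line; it's just the number of ways to pick a 5-card board from the 48 unseen cards:

```python
from math import comb

# Two players all-in preflop: 4 hole cards are known,
# so the 5-card board is drawn from the remaining 48 cards.
board_runouts = comb(48, 5)
print(board_runouts)  # 1712304
```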
Once you know that exact probability, suppose you play 100 hands, each with that same probability. (In practice the probability is slightly different each hand, which complicates the math a bit, but it doesn't change the concept.) Out of those 100 hands with, say, a 60% chance of winning each one, it is mathematically PROVEN that the distribution of the number of wins approaches the normal distribution.
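Here's a quick simulation sketch of that claim (the 20,000-session repeat count is my choice, everything else is from the example above): play 100 all-ins at 60% equity over and over, and the wins-per-session land on a bell curve with mean 60 and SD sqrt(100 * 0.6 * 0.4) ≈ 4.9.

```python
import random

random.seed(1)

# Each "session" is 100 independent all-ins, each won with probability 0.60.
p, n, sessions = 0.60, 100, 20_000
wins = [sum(random.random() < p for _ in range(n)) for _ in range(sessions)]

mean = sum(wins) / sessions
var = sum((w - mean) ** 2 for w in wins) / sessions
print(round(mean, 2), round(var ** 0.5, 2))  # near 60 and sqrt(24) ≈ 4.9
```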
If you flipped a coin 100 times, you would not expect exactly 50 heads. The chance of that happening is only about 8% ((100 choose 50)*(0.5)^50*(0.5)^50, if you want the gory details). That 8% is a mathematical fact from the binomial distribution. But how likely is, say, getting more than 60 heads? Again, it's a mathematical fact that the distribution of the number of heads approaches the normal distribution.
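Both of those numbers fall straight out of the binomial formula; no simulation or approximation needed:

```python
from math import comb

def binom_pmf(n, k, p):
    """Exact probability of k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance of exactly 50 heads in 100 fair flips: about 8%.
p50 = binom_pmf(100, 50, 0.5)

# Chance of more than 60 heads: sum the exact upper tail.
tail = sum(binom_pmf(100, k, 0.5) for k in range(61, 101))
print(round(p50, 4), round(tail, 4))
```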
The formula for the standard deviation is somewhat arbitrary: the variance is the sum of (x - mean)^2 divided by n, and the SD is its square root. Why the square instead of the absolute value? That choice is arbitrary, but the important thing is that the standard deviation, as defined, appears in the definition of the normal distribution. And it is a mathematical fact that the binomial distribution approaches a normal distribution with the same mean and the same standard deviation (for n trials with success probability p, that SD is sqrt(n*p*(1-p))). The binomial distribution gives the same answers without mentioning standard deviation at all; it is just more complicated math.
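A rough sketch of that convergence, using only the standard library: compute the exact binomial tail from above, then the normal approximation built from the SAME mean and SD (with the usual 0.5 continuity correction). The two nearly agree at n = 100.

```python
from math import comb, erf, sqrt

n, p = 100, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))  # binomial mean 50, SD 5

# Exact binomial: probability of more than 60 heads in 100 fair flips.
exact = sum(comb(n, k) * 0.5**n for k in range(61, n + 1))

def normal_cdf(x, mu, sigma):
    """Standard normal CDF, shifted and scaled."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Normal approximation with the same mean and SD, continuity-corrected.
approx = 1 - normal_cdf(60.5, mu, sigma)
print(round(exact, 4), round(approx, 4))  # the two values nearly agree
```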
If you need more convincing, start with very simple Bernoulli trials, then work your way up to the binomial distribution for two coin flips: 50% of the time you get 1 head, 25% 2 heads, and 25% 0 heads (i.e. 2 tails). For 3 coin flips, H=0 has probability 1/8, H=1 and H=2 each have 3/8, and H=3 has 1/8. You can go to the trouble of using the Excel formula =IF(RAND()>0.5, 1, 0), which is a coin flip. Paste that into three different cells and sum them up. If you keep pressing F9 and recording how many are "heads" each time, the observed frequencies will approach these values. Again, these are mathematical facts.
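If Excel isn't handy, here is the same three-flip experiment as a few lines of Python (the 80,000 repeat count is my choice; everything else mirrors the setup above):

```python
import random
from collections import Counter

random.seed(0)

# Python analogue of pressing F9 on three =IF(RAND()>0.5, 1, 0) cells:
# simulate many rounds of 3 fair coin flips and tally the heads count.
trials = 80_000
counts = Counter(sum(random.random() < 0.5 for _ in range(3))
                 for _ in range(trials))

for heads in range(4):
    print(heads, round(counts[heads] / trials, 3))  # approaches 1/8, 3/8, 3/8, 1/8
```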
The numbers get more complex as the number of trials goes up, as in my earlier example with 380 all-ins, but the concept stays the same. Actually run the Excel coin-flip trial to see it in action if you need more convincing.