Most people who claim this, for as long as I can remember, point to pokerdope simulations, and if we go there it's true. Just run two sims:
A) 5 bb/100 winrate, 100 bb/100 std dev, 30,000-hand sample. You will show a loss in 19.3238% of the 30k-hand samples you play, no matter how many of them you run.
B) 10 bb/100 winrate, everything else equal. Now you should see a loss only 4.1632% of the time.
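Those two percentages can be reproduced with a plain normal approximation: over n hands your total result is roughly normal with mean wr·(n/100) and std dev std·sqrt(n/100). A minimal sketch (my own reconstruction, not pokerdope's actual code):

```python
from math import erf, sqrt

def p_losing_sample(wr, std, hands):
    """Probability the total result over `hands` hands is negative.
    wr and std are in bb/100; uses the normal approximation."""
    blocks = hands / 100            # number of 100-hand blocks
    mean = wr * blocks              # expected total result in bb
    sd = std * sqrt(blocks)         # std dev of the total result in bb
    z = -mean / sd
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

print(round(p_losing_sample(5, 100, 30_000), 4))   # 0.1932 -> sim A
print(round(p_losing_sample(10, 100, 30_000), 4))  # 0.0416 -> sim B
```

Doubling the winrate at the same std dev doubles the z-score of breaking even, which is why the loss probability collapses from ~19% to ~4%.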
But how are winrates generated in the first place? Aren't most strategic improvements (stop being a nit and open up your game, at least to optimal frequencies; improve your river bluffcatching in close spots; value bet thinner vs stations; bluff at optimal frequencies) also things that increase the standard deviation of your strategy compared to the nit's?
Is the paragraph above true? If not, why? Does pokerdope really model poker accurately?
Broader question: is the standard deviation an independent variable relative to the winrate? Does it increase when winrates increase? (Let's take whales out of the equation and think of a nit or a bad reg who becomes a good reg, etc.) That is more or less what bothers me.
Standard deviation doesn't directly correlate with the player's winrate. It's an independent variable.
Poker is a zero-sum game, meaning you and your opponent have the same std dev in HU pots, even though your winrates aren't the same. Different strategies do, however, tend to produce different standard deviations, based on how much money is being put into the pot on average. This is why nits tend to have lower std devs than strategies closer to GTO.
Let's say a player like Seta-Beni, playing 27/22/12 at 200z and crushing it for 8 bb/100 over millions of hands, decided it would be a good experiment to play 20/16/7 while keeping everything else unchanged. Wouldn't his std dev, and also his winrate, be lower, exactly because the average pot over the entire sample will be smaller, even though he will be winning more when he does put money in the pot?
Agree a direct correlation doesn't exist, as the inverse happens between a fish and the nit. Still, unless I'm missing something, I see both stats as more or less connected to each other.
Haha, I'd make a similar argument for clarification: say I never bluffcatch vs fish with close hands, e.g. 25.1-28% equity vs a half-pot river bet in a line where MDA lets me accurately estimate that I have those equities and can call, and then all of a sudden I begin doing it. My winrate increases, and my variance also increases due to the way those spots work (you rarely win; when you win, you make enough to compensate plus a bit extra = profit).
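The "winrate up, variance up" intuition for that spot is easy to check. Against a half-pot river bet you need 25% equity to break even (risk 0.5 pots to win 1.5 pots), so a 25.1-28% equity call adds a small edge over folding (EV 0, variance 0) while adding a lot of per-decision spread. A sketch with made-up numbers (10 bb pot, 27% equity):

```python
def call_stats(pot_bb, equity):
    """EV and variance (in bb, per decision) of calling a half-pot bet."""
    win, lose = 1.5 * pot_bb, -0.5 * pot_bb   # outcomes relative to folding
    ev = equity * win + (1 - equity) * lose
    var = equity * win**2 + (1 - equity) * lose**2 - ev**2
    return ev, var

ev, var = call_stats(10, 0.27)
print(ev)   # 0.4 bb of EV vs folding's 0
print(var)  # ~78.8 bb^2 of variance (std dev ~8.9 bb) vs folding's 0
```

So each marginal bluffcatch you add nudges the winrate up by a fraction of a big blind while contributing a full pot-sized swing to your std dev.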
I'm Pasting, then all of a sudden I stop being Pasting, become a nit at 2NL and turn into a 0.5 bb/100 winner.
The point I'm trying to make is that some adjustments increase your winrate and also raise your std dev, while others increase your winrate and lower it. No direct correlation between winrate and std dev, and no inverse correlation either, but adjustments that affect one tend to affect the other.
5 bb/100 with 75 std dev requires 1685 bb in bankroll for less than 5% risk of ruin.
7.5 bb/100 with 100 std dev requires 1997 bb in bankroll for less than 5% risk of ruin.
Which is why I think the impact of your improvements on your std dev may matter when planning your BRM requirements.
Ofc I picked the winrates and std devs to make my point; if you go from 5 to 10 bb/100 with a 100 std dev, then all of a sudden you will need less in the bankroll. But aren't 100% winrate increases, in any games other than the lowest micros, exponentially harder to achieve than the more down-to-earth 50% jump from 5 to 7.5 bb/100?
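The bankroll figures above come from the standard risk-of-ruin formula for a fixed winrate and std dev, RoR = exp(-2·wr·B/std²), solved for the bankroll B = std²·ln(1/RoR)/(2·wr). A sketch reproducing all three scenarios:

```python
from math import log

def bankroll_for_ror(wr, std, ror):
    """Bankroll (in bb) for a given risk of ruin; wr, std in bb/100."""
    return std**2 * log(1 / ror) / (2 * wr)

print(round(bankroll_for_ror(5, 75, 0.05)))     # 1685 bb
print(round(bankroll_for_ror(7.5, 100, 0.05)))  # 1997 bb
print(round(bankroll_for_ror(10, 100, 0.05)))   # 1498 bb: double the wr, same std dev
```

Note the structure of the formula: the requirement scales with the square of the std dev but only inversely with the winrate, which is exactly why a 33% std dev increase (75 to 100) can outweigh a 50% winrate increase (5 to 7.5).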