Expert Heads Up No Limit Hold’em v.1: Optimal and Exploitive Strategies by Will Tipton
I know zero about programming but I am very interested in learning how to do what you have done.
If you do not make a series, is there anywhere else you can point me towards for starting out? Is Python the go-to language for this nowadays? What about Excel game trees?
As well, +1 for making volume 1 and the soon-to-be-released volume 2. I have enjoyed your book, and it helped confirm my suspicions about the error-prone poker 'math' that is peddled at various training sites/books.
As far as programming language goes, it's largely a personal preference. I think Python, used through IPython ("interactive" Python), would be a great choice for that sort of video series, since Python is a very powerful language -- you can express some pretty complex ideas with relatively little code. Also, if you're clever, it's very easy to use it to visualize things like ranges, game trees, etc., which is helpful. The downside is that it's not very fast. I'd recommend Python with IPython as a good place to start, though, if you're interested in getting into poker programming.
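As a taste of what that looks like, here is a minimal sketch of representing a preflop range in Python; the hand labels and the example jamming range are illustrative, not from the book:

```python
# Toy illustration: represent a preflop range as {hand: frequency}
# and compute its frequency-weighted combo count. Standard convention:
# pairs = 6 combos, suited hands = 4, offsuit hands = 12.

def combos(hand):
    """Number of card combinations for a hand class like 'AA', 'AKs', 'AKo'."""
    if len(hand) == 2:          # pocket pair
        return 6
    return 4 if hand.endswith('s') else 12

def range_combos(rng):
    """Total (frequency-weighted) combos in a range dict."""
    return sum(freq * combos(hand) for hand, freq in rng.items())

# Example: a simple jamming range, played with the given frequencies.
jam_range = {'AA': 1.0, 'KK': 1.0, 'AKs': 1.0, 'AKo': 0.5}

print(range_combos(jam_range))  # 6 + 6 + 4 + 0.5*12 = 22.0
```

From here it is a short step to plotting such a dict as a 13x13 grid, which is the kind of visualization mentioned above.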
http://forumserver.twoplustwo.com/15...-game-1305801/
However, if you're thinking about it correctly, it might be unhelpful, but it shouldn't give you anything blatantly wrong like 11.5=13, so let's go through your analysis...
In particular, indifference relationships are things that apply to specific holdings. For example, at equilibrium in the 12BB shove/fold game, the SB's 53s is (at least nearly) indifferent between shoving and folding. 53s is more-or-less his weakest shoving hand, and it doesn't make much of a difference to him whether he jams or folds it. With AA, however, he's certainly not indifferent -- jamming is way better than folding.
So, I agree with this equation if EQ(SB) refers specifically to the equity of the SB's weakest shoving hand/strongest folding hand when facing the BB's calling range. (That SB hand is 53s, although we don't know that if we're doing this calculation from scratch.)
Well, this doesn't make sense. 1-EQ(BB) is the average equity of SB's whole jamming range versus the BB's weakest calling hand. This is not equal to the EQ(SB) we defined earlier, the equity of a particular SB hand.
Hopefully that shows the error in the calcs, and again, see the thread I linked to earlier for more discussion of solving preflop only games in particular. We'll find a lot of spots where the Indifference Principle is very helpful in finding solutions, but it doesn't always give us all the info we need.
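For concreteness, the indifference condition for the SB's marginal shoving hand can be written out in a short script. The conventions here (blinds 0.5/1, a 12bb stack, a 40% BB calling frequency) are illustrative assumptions, not solved equilibrium values:

```python
# Sketch of the indifference condition for the SB's weakest shoving hand
# in the shove/fold game. Assumed conventions (not from the book):
# blinds 0.5/1, effective stack S in big blinds, EV measured as the
# SB's net chips for the hand.

def ev_sb_fold():
    return -0.5  # SB open-folds and forfeits the posted small blind

def ev_sb_shove(stack, bb_call_freq, eq_when_called):
    # If BB folds, SB picks up the big blind: net +1.
    # If called, the whole stacks go in: SB's net is 2*stack*eq - stack.
    fold_part = (1 - bb_call_freq) * 1.0
    call_part = bb_call_freq * (2 * stack * eq_when_called - stack)
    return fold_part + call_part

def indifferent_equity(stack, bb_call_freq):
    # Solve ev_sb_shove == ev_sb_fold for the marginal hand's equity:
    # -0.5 = (1 - c) + c*(2*S*eq - S)  =>  eq = (c*S + c - 1.5) / (2*c*S)
    c, S = bb_call_freq, stack
    return (c * S + c - 1.5) / (2 * c * S)

eq = indifferent_equity(stack=12, bb_call_freq=0.4)
print(round(eq, 3))  # 0.385: equity the marginal shove needs when called
```

Note that the equity here is specifically that of one particular SB hand versus the BB's calling range, which is exactly the distinction drawn above.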
Many Thanks, Will.
I appreciate your taking the time to sort that one out for me.
Yeah, I was definitely thinking, ok, so when they get all in, their equities sum to 1, but I was missing the subtlety of just what equities we're talking about.
And I certainly take your point about the indifference principle not being the way to solve the shove/fold game. Just thought it would be fun/instructive to try it out.
Thanks for the link, too - that was my next 'project', to take a look at the 3-bet/4-bet game.
Good luck with the second volume (from a guy who's probably going to need another couple of years to really finish the first one.)
There are a couple things to keep in mind here. First, the idea of strategies dividing all hands into nicely-separated action regions depends on the simplifying assumptions that there are no card removal effects and that hand values are well-ordered. These things are usually approximately true on the river, but sometimes not.
Second, even in the [0,1] games the structure of the solutions may not be unique. E.g., maybe we need to check some slowplays on the river from the BB for balance, but we may be able to structure our strategy so that these are drawn from our very nut holdings or from our strong-but-non-nut holdings, and still have an equilibrium either way. When solving games by hand, we make a choice and go with it, but this situation may lead to mixed strategies being played with a lot of hands in the computer.
Overall, it's not too surprising that BB is sometimes checking and sometimes leading river with both top and second pair.
Again, sorry that it's difficult to communicate these trees and strategies exactly via images, but full solutions to the river examples are linked to here:
http://forumserver.twoplustwo.com/33...l#post38858250
That only includes the case for Example 3 when BB can blockbet the river.
I see the first two hand ranges are SB bet 0.5,
BB bet 1,
and they both include all hands. What's up with that?
Then the 3rd one is "SB fold". What exactly is the action there? I cannot see what he's folding to.
This has 100 ranges plotted out; are there any approximations/descriptions of the ranges?
Like, he's leading this hand or better, leading those as a bluff, blockbetting with that, xF that..
Aside from the ones in the book. Those I found hard to follow.
The descriptions in the book are where I do my best to describe the strategies in a way that is easily human-digestible. Of course, there is some information loss here, since there's no good way to communicate in English text the exact frequencies of every hand combo in every spot. Unfortunately, the strategies usually cannot be described by easy sentences like "he bets with everything better than X" for the reasons I mentioned in post #424.
You asked for solutions both when we allow BB to blockbet river and when we don't. The solutions given are only for the case when he can.
This is posting blinds.
The ranges in the figure correspond to the actions in the tree.txt file. That 3rd line is immediately after posting blinds and is the SB's option to open-fold preflop. The next one might say something like "SB bet 1.5", corresponding to an open minraise.
Of course, the early street play here is kind of faked to just get to the river with the right ranges and stack/pot sizes.
Edit - I figured it out, that shorter txt file has the ranges that'll probably even be pluggable into crev..
Ah, I see.
The 4th one says "SB bet 6.5".
What's the easiest way to locate the river?
I was hoping through betsizing, but then that 4th line with "SB bet 6.5" throws me off.
Or do you perhaps have the river ranges and river actions separated somewhere from when you were analyzing those specifically?
Again, tree.txt lists all the actions in a way that makes the tree structure more apparent, and each line of that file which contains an action corresponds to one of the ranges in the figure. This is all described in the README.txt file.
The ranges in tree.txt are only approximate. Of course an exact listing would need to specify a frequency to go along with every hand combo.
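To see why an exact listing needs per-combo frequencies, here is a toy sketch; the combo names, frequencies, and the 0.5 rounding threshold are all hypothetical, not the actual tree.txt format:

```python
# Why an exact strategy listing needs a frequency for every combo:
# the human-readable range is only a rounded summary of it.
# All names and numbers below are made up for illustration.

bet_strategy = {'AhKh': 1.0, 'QsQd': 1.0, 'JcTc': 0.62, '7h6h': 0.25}

def approximate_range(strategy, threshold=0.5):
    """The lossy, human-readable summary: combos played at least
    `threshold` of the time. The exact strategy keeps every frequency."""
    return sorted(c for c, f in strategy.items() if f >= threshold)

print(approximate_range(bet_strategy))  # ['AhKh', 'JcTc', 'QsQd']
```

The 0.62 and 0.25 frequencies are exactly the information the approximate listing throws away.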
Hey Will,
2nd read and just about as many questions as I had after the 1st one. The questions below are about chapters 1 and 2; that will give you an idea how many questions I have about chapter 5...
YES/NO will do for most of them I think, I just want to make sure I get everything right before moving on.
The way I understand it, you're basically saying that It's OK to consider only a few turn and river cards as long as the small sample is representative of the full game. Am I right here?
When I first read this, I thought: well, sure he SHOULD always go AI if he KNOWS hero was always check/folding, but is he supposed to know that?
So apparently you consider that Villain will bet every time he's checked to. Yet, the decision tree on p43 gives him the option to check. So what's going on here? Is it a case of: we're trying to figure out how our opponent can best exploit us, we consider he knows our strategy, we make him omniscient?
When you say "his new exploitative strategy", do you mean maximally exploitative then? Also, this is probably a very stupid one but... Could a 'partially exploitive' strategy actually have a value that is inferior to the equilibrium value?
In the last sentence, when you say that the profit we gain from our value hands more than makes up for it, your reference point is our equilibrium value, right? You're saying that we will make at least as much money as we would at equilibrium (probably more). But a maximally exploitive strategy could still make us even more money, against such an opponent. Is that correct?
About the example p64:
Is the purpose of the example simply to show that playing exploitative without a very good read can prove costly?
You're saying that trying to induce a change by showing his bluffs was a bad idea, right? I mean, 3betting light was a good idea surely or...?
Originally Posted by Chapter 1 p24, you write:
"Play on the flop may not change much if only a few select turn and river cards can actually come out. The cards that can come later in a hand, however, do greatly affect play earlier in the hand, so it is important that a variety of different kinds of cards be possible."
Originally Posted by Chapter 2 p50, you write:
"Also, Villain should always go all-in when checked to on the river since Hero is always check-folding."
Originally Posted by Chapter 2 p52, you continue saying:
"First, consider Hero’s strategy. His options are to fold and give up the pot or to go all-in."
Originally Posted by Chapter 2 p55, you write:
"His new exploitative strategy will give him an expectation greater than (or equal to) his equilibrium one. If it did not, then it would not be a maximally exploitative strategy."
Originally Posted by Chapter 2 p60, you write:
"And if this is true, it must be the case that Hero's winnings with other hands more than make up for it, on average.For example, GTO play certainly involves making bluffs. When a GTO strategy involves using a hand to bluff, it is because bluffing is at least as profitable as any other action we could take with that hand, at least when facing an unexploitable opponent. However, if we continue playing this strategy versus a particularly loose opponent, it may be that we begin to lose money with our bluffs since Villain is not folding enough. In this case, the profit we gain from our value hands more than makes up for it."
Originally Posted by Chapter 2 p68, in the example where Hero decides to 3b light and show his bluffs, you say:
"Is it clear now whether his attempt to induce a change in Villain’s strategy was a good idea?"
K, so a raise to 7 BB apparently gets stacks to where we need them, and the next two actions will be BB calling his river starting range and folding everything else. Then we'll have the flop go check-check, turn go check-check, and then everything else is river play.
Thanks, Will!
However, since we can't solve the full game, some approximations are necessary, and I'd argue that this one is at least reasonable if done carefully. This is not used in Vol 1, but will be in Vol 2 where we study much larger games.
Not sure if this answer is entirely satisfying, but if you want to allow more nuanced strategies, you have to start by writing down more complex decision trees, and that's something we do later in the book. The situation on pg 50 is sort of a simple early example.
Originally Posted by Chapter 2 p52, you continue saying:
"First, consider Hero’s strategy. His options are to fold and give up the pot or to go all-in."
"First, consider Hero’s strategy. His options are to fold and give up the pot or to go all-in."
The sentence you quote is from a paragraph where I'm discussing why the two strategies we've just found are maximally exploiting each other. If BB is always check-folding, it's clear that betting when checked to is at least as good for the SB as showing down, although the two moves have equal value if SB has 100% equity (i.e. the nuts with no chance of chopping). So always betting when checked to is maximally exploitative.
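The claim in that paragraph can be checked with a couple of lines; the pot size and equities here are arbitrary:

```python
# Checking with toy numbers: if BB always check-folds, SB betting when
# checked to is at least as good as checking back, for every equity.

def ev_check_back(pot, equity):
    return pot * equity  # showdown: we realize our equity in the pot

def ev_bet(pot, equity, bb_fold_freq=1.0):
    return bb_fold_freq * pot  # BB always folds, so we take the whole pot

pot = 10
for eq in (0.0, 0.25, 0.5, 1.0):
    assert ev_bet(pot, eq) >= ev_check_back(pot, eq)
# Equality holds only at 100% equity (the nuts), as noted in the text.
print(ev_bet(pot, 1.0) == ev_check_back(pot, 1.0))  # True
```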
Well, it's not clear what partially exploitative means exactly... certainly you could play some spots well and other spots poorly and end up doing worse than at equilibrium.
In the last sentence, when you say that the profit we gain from our value hands more than makes up for it, your reference point is our equilibrium value, right? You're saying that we will make at least as much money as we would at equilibrium (probably more). But a maximally exploitive strategy could still make us even more money, against such an opponent. Is that correct?
Yes
Moving on to chapter 3... Thanks a lot Will.
82s and 82o also appearing in hero's range
Hey Will, Amazon tells me that your 2nd book is coming out on April 15. Are they right this time, or do you again have no idea where this date is coming from?
Yes, that should be correct.
Ohh man, it's half a year later than I expected. Super excited about this one. God bless, Will. Three random sentences describing my feelings.
Amazing book, amazing thread!
Will, I'm not done with the whole book yet, but in case I'm being dumb and/or wrong, please verify whether I understood correctly how a GTO strategy is constructed and what its attributes are.
Up until reading this in your book:
"Both strategies are maximally exploiting the other, simultaneously."
I was under the impression that a GTO strategy doesn't care/know about the opponent's ranges, which is apparently only partially true?
In fact, a GTO strategy is like any other exploitative strategy, except that it assumes it is up against THE best player -- not the best player of any particular player pool or moment in the evolution of the game, but the best player, period, who is capable of instantly adjusting to maximally exploit whatever strategy he is facing.
So, being an exploitative strategy, it means that, at the time of its construction, a GTO strategy knows the opponent's ranges, which are in fact the "best response" ranges to whatever it is doing. After the strategy is constructed, it doesn't care about the opponent's ranges any more, because it has already achieved the maximum possible EV against the best player, which is to make him break even.
In conclusion, a GTO strategy is a static strategy with the best defense system there is. This doesn't mean its attack system is worthless; in fact, quite the opposite: if the opponent doesn't also play the best defense, GTO just crushes him.
The sentence you quoted is essentially the definition of an equilibrium (i.e. GTO) strategy. The part of your post that I bolded is equivalent. Your "best player" is max exploiting us since that's what he does, and of course the best we can do is to max exploit him back at the same time. And... that's what the quote says.
I don't know what strategies' attack and defense systems are. To be honest, I'm also not convinced that it's really helpful to anthropomorphize here. A strategy, GTO or otherwise, is essentially a list of ranges or a list of frequencies. It doesn't "care" or "know" about anything, since it's an inanimate object.
I mean, I have a vague idea of what you might mean in both cases. And in some vague sense, I guess it's vaguely right, at least sometimes. But trying to understand things in terms of ill-defined concepts that don't really mean anything specific leads to fuzzy pseudo-understanding. Possibly useful for cocktail party conversation, not at all useful for playing profitable poker.
A GTO strategy is a strategy that satisfies the definition above, no more and no less. It has some interesting properties that we can work out, beginning from that definition, that make it useful for real play.
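For what it's worth, the "maximally exploiting each other, simultaneously" definition can be checked mechanically in a toy zero-sum game (matching pennies rather than poker):

```python
# "Both strategies are maximally exploiting each other, simultaneously"
# checked in matching pennies, a toy zero-sum game (not poker).
# The equilibrium is 50/50 for both players.

payoff = [[1, -1],
          [-1, 1]]  # row player's payoff; column player gets the negative

def row_ev(row_strat, col_strat):
    return sum(row_strat[i] * col_strat[j] * payoff[i][j]
               for i in range(2) for j in range(2))

def is_best_response(row_strat, col_strat):
    # A mixed strategy is a best response iff no pure strategy does
    # strictly better against the opponent's fixed strategy.
    current = row_ev(row_strat, col_strat)
    pure_evs = [row_ev(p, col_strat) for p in ([1, 0], [0, 1])]
    return all(current >= ev - 1e-9 for ev in pure_evs)

eq = [0.5, 0.5]
# At equilibrium, each strategy max-exploits the other at the same time.
print(is_best_response(eq, eq))  # True
```

By the symmetry of this game, the same check passes for the column player; in a real poker game the same definition applies, just over astronomically larger strategy spaces.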
Hope this doesn't come off as too down on your post. Please do let me know if it's still not clear. FWIW, I've also written some about these ideas in the terminology sticky in the Poker Theory subforum.
Hey Will,
I'm trying to calculate the solutions to the following preflop game using the indifference principle.
Stacksize = 120BB
A) SB-opens 3BB : X frequency
B) BB-3bets 8BB : Defends 0.38
C) SB-4bets 20BB : Defends 0.36
D) BB-5Bets 50BB : Defends 0.39
E) SB-Shoves 120BB : Defends 0.4
F) BB- Call 70BB : Defends 0.41
BB can maximally exploit SB by calling with all hands with > 29% equity (he's getting (170BB/70BB) = 2.4:1), and he also needs to get to point E with (0.38*0.39*0.41) = 6.1% of hands.
I used the hand range calculator in Equilab to find an SB range that satisfies both of the BB's conditions. SB needs a range of JJ+ (3.0%) to give BB about 6.5% of hands with > 29% equity.
So if SB gets to point E with 3.0% they need to open with 21% hands.
Did I do this correctly? It seems like open folding 79% of hands at equilibrium seems like a lot.
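The arithmetic in the post above can be double-checked with a short script (this verifies only the numbers, not whether the indifference setup itself is right):

```python
# Double-checking the arithmetic in the post above. The defend
# frequencies and bet sizes are taken from the post as given.

defend_3bet, defend_5bet, defend_call = 0.38, 0.39, 0.41

# Fraction of BB hands reaching the final call node, per the post's
# chain of defend frequencies.
reach = defend_3bet * defend_5bet * defend_call
print(round(reach, 3))  # 0.061

# Pot odds on the final 70BB call: risk 70 to win the 170 already in,
# so required equity is 70 / (170 + 70).
required_equity = 70 / (170 + 70)
print(round(required_equity, 3))  # 0.292
```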
I have one question about the topic that you should fold 100% of the time (and so let him win the whole pot) against a big bet which can't be balanced, because the opponent doesn't have enough bluffs to do so.
The thing I don't understand is that I think this would be exploitable by simply always betting big with your bluffs, winning the whole pot with them, but betting smaller with your value hands, with which you obviously win more than the pot on average.
And by that you would win more than 100% of the pot on average with your whole range, which, as you explained in the book, shouldn't be possible in theory.
I hope you can help me with that issue...
Hello Will,
First of all, thank you for the book. I have a question regarding population tendencies. On pages 65 and 66 you say that when you have no reads on your opponent, it is best to play a GTO strategy. How would you treat analysed population tendencies?
http://forumserver.twoplustwo.com/15...-game-1399242/
Essentially, if you know an opponent comes from a particular population of players, and you know that population has certain tendencies, then your opponent isn't completely unknown. You have information about him that you can begin to adjust to (although you should be careful and quick to re-adjust if necessary).
We talk about beginning with GTO play and then adjusting once we get reads. However, in practice, we may make some of those adjustments before the beginning of a match, based on pop tendencies. I talk about this on pg 58. I like to say things this way to emphasize that all of our adjustments should be motivated/justified by some exploitable tendency of our opponent (or population of opponents).
As an example, I believe that at certain stack sizes (say, hyperturbos, 15bb deep) GTO SB play involves limping a very high percentage of our buttons. However, I also believe that much of the population is too tight in the BB facing a minraise. So I keep open-raising there against new opponents until I'm given a reason not to.
Cheers
Yes, you're correct. If Villain has the option to make a big bet and a small bet, and his range is such that he can make us fold our entire range with the big bet, then we can't call a small bet either, or else Villain could exploit that by just betting small for value. I guess in the discussion of PVBC play in the book, we focused on models where the polar player chose a single bet sizing, but this is something interesting that can happen in more complicated cases.
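A toy version of the exploit described above, with illustrative numbers (pot of 1, a half-pot small bet, and an 80/20 value/bluff split for Villain):

```python
# Toy numbers for the exploit described above. We fold our whole range
# to the big bet; suppose we were still calling the small bet.
# All numbers are illustrative.

pot, small_bet = 1.0, 0.5
value_frac, bluff_frac = 0.8, 0.2

# Bluffs go in the big-bet line: we fold, Villain takes the pot outright.
ev_bluffs = bluff_frac * pot

# Value hands bet small, get called, and win the pot plus our call.
ev_value = value_frac * (pot + small_bet)

total = ev_bluffs + ev_value
print(round(total, 2), total > pot)  # 1.4 True: more than the whole pot
```

Since no strategy pair can award one player more than 100% of the pot at equilibrium, calling the small bet can't be part of our equilibrium here; we have to fold to it as well.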
Something similar can happen in multi-street games as well. Suppose somewhere early in a hand, Villain can bet and make us fold our entire range. But he checks instead. Then, always checking and folding later on (i.e. never putting any more money in the pot) must be at least co-optimal for us later in the hand.
But I wonder if this also works out if we can have some slowplays...