The limits of discrete mathematical objects for modelling bet sizes vs continuous objects.

03-01-2024 , 12:13 AM
The approach to modelling bet sizes in No Limit "solvers" today seems to be based on discrete objects, namely a finite set of actions in a single spot, rather than continuous objects, like an infinite set of actions, a distribution, or a continuous polynomial.

This quantization of actions may be a holdover from the iterative approach to solving the game through Limit Hold 'Em, where discrete approaches found great success because the bet sizes were themselves finite and discrete.

The fact that solvers require predefined bet sizes is a limiting vestige of Limit Hold 'Em. I would like to see an approach that considers the whole infinite range of possible actions of both the player and the opponent.

We need to break free from such simplifications and be precise about bet sizings. We should not ask whether we would call a bet, but rather how big a bet needs to be before we fold. We need to think of our GTO strategies NOT as a finite mix of folds and checks and raises, but as a probability distribution of raise sizes with a mode, variance and the other properties of distributions. We might even think of our strategies as polynomials or as complex non-continuous functions; who is to say that our strategies should be continuous? Perhaps there is something special about a pot-size bet or other sizings, whether because of convention or because of a deep mathematical property like symmetry: we may consider folding to 1.01x but calling 1x, or vice versa!
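A minimal sketch of the distribution idea in Python (the Beta parameterisation, and all the numbers, are my own assumptions for illustration, not anything a current solver does): the whole raise-size "action set" collapses into two shape parameters.

```python
import math
import random

def beta_pdf(x, a, b):
    """Density of a Beta(a, b) distribution on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * x ** (a - 1) * (1.0 - x) ** (b - 1)

def sample_raise_size(a, b, min_bet, all_in, rng=random):
    """Draw one raise size from the Beta strategy scaled to [min_bet, all_in]."""
    u = rng.betavariate(a, b)  # in (0, 1)
    return min_bet + u * (all_in - min_bet)

# With a = 3, b = 5 the mode sits at (a-1)/(a+b-2) = 1/3 of the way
# up the betting interval; a and b together also fix the variance.
a, b = 3.0, 5.0
print((a - 1) / (a + b - 2))  # 0.3333...
```

Two numbers describe the whole continuum of raises, instead of a hand-picked list of sizes per node.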

A possible toy game to consider is Continuous Limit, where we always have the option to bet 1 blind, 2 blinds, or anything in between. Of course, the narrower the range of bets, the less likely it is that ideal bet sizes fall in the middle, and the less likely the difference is to be of consequence. But it would allow us to try continuous modelling of bet sizes on a simpler game.
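Such a toy game could be probed like this (the EV model below is entirely made up for illustration; a real version would come from solving the game):

```python
# Toy "Continuous Limit": bets range over [1, 2] blinds (pot = 1).
# The EV model is an assumption: villain calls a bet b with
# probability 1 / (1 + b), and hero wins a called pot 60% of the time.
def ev(b, pot=1.0, win=0.6):
    p_call = 1.0 / (1.0 + b)          # villain folds more vs bigger bets
    ev_fold = (1.0 - p_call) * pot    # hero takes down the pot
    ev_call = p_call * (win * (pot + b) - (1.0 - win) * b)
    return ev_fold + ev_call

# Treat the bet size as continuous: scan a fine grid over [1, 2]
# instead of picking a few fixed sizes, and report the best size.
best = max((ev(1.0 + i / 1000.0), 1.0 + i / 1000.0) for i in range(1001))
print(best)
```

With this particular toy EV function the optimum happens to sit at a boundary, but the same scan would find an interior optimum for other models.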

In short, poker solvers should treat bet sizes as the continuous choice that they are, rather than simplifying by quantizing them into approximations. This would not only be more precise, but may also be faster.

Regards, Tom.

P.S: 1000th post!

Last edited by LoveThee; 03-01-2024 at 12:18 AM.
03-01-2024 , 12:48 AM
Poker is too complex.

The methods for solving poker with continuous bet sizing already exist. There are several academic papers about it. In fact, it's more straightforward than the algorithms we use today (it basically involves solving a gigantic payoff matrix). However, it's far too computationally intensive. Even very simple versions of poker quickly become intractable.
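For a sense of what "solving a payoff matrix" means, here is fictitious play on rock-paper-scissors, a stand-in game chosen only because it fits on a screen; in poker the matrix would need one row per complete strategy, which is exactly what blows up.

```python
# Illustrative only: fictitious play on a tiny zero-sum payoff matrix
# (rock-paper-scissors). Each player repeatedly best-responds to the
# opponent's empirical mix; the time-averaged play converges to an
# equilibrium for zero-sum games.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]  # row player's payoff

def fictitious_play(A, iters=20000):
    n, m = len(A), len(A[0])
    row_counts = [0] * n
    col_counts = [0] * m
    r, c = 0, 0
    for _ in range(iters):
        row_counts[r] += 1
        col_counts[c] += 1
        # best response to the opponent's cumulative counts
        r = max(range(n), key=lambda i: sum(A[i][j] * col_counts[j] for j in range(m)))
        c = min(range(m), key=lambda j: sum(A[i][j] * row_counts[i] for i in range(n)))
    return [x / iters for x in row_counts], [x / iters for x in col_counts]

row_mix, col_mix = fictitious_play(A)
print(row_mix)  # approaches the uniform equilibrium, 1/3 each
```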

The problem is that you can't solve one action in a vacuum. The decision to call a flop bet is tied to the gamespace of all the possible turns and rivers. You can't solve range vs range either. Because of blocker effects, each hand is effectively up against a different opposing range. This further explodes the gamespace.
03-01-2024 , 07:44 AM
Quantum computing will be mainstream by 2030. Wonder how close we'll be able to get to truly solving poker then.
03-01-2024 , 08:26 AM
Quote:
Originally Posted by tombos21
Poker is too complex.
This seems overly pessimistic; it's not even that complex compared to actual daily challenges humans face, like building a bridge. You know what the rules are, and the decisions are limited and spoon-fed to Hero. I'm not saying we should perfectly solve the game, just analyze it with this slight improvement in the mathematical model.

Quote:
Originally Posted by tombos21
The methods for solving poker with continuous bet sizing already exist. There are several academic papers about it. In fact, it's more straightforward than the algorithms we use today (it basically involves solving a gigantic payoff matrix). However, it's far too computationally intensive. Even very simple versions of poker quickly become intractable.
I'd be interested in reading those papers. If you have any specific details on the source, or if you vaguely remember where you read them, let me know.

Quote:
Originally Posted by tombos21
The problem is that you can't solve one action in a vacuum. The decision to call a flop bet is tied to the gamespace of all the possible turns and rivers. You can't solve range vs range either. Because of blocker effects, each hand is effectively up against a different opposing range. This further explodes the gamespace.

Of course, this is another common approach to solving a hand, which I am not challenging. Other than increasing the complexity of the solve, I do not see how it is relevant to the topic at hand.

Additionally, my argument is that making the bet sizings continuous would reduce the complexity of solves as well: instead of needing to consider 3 or more different bet sizes, a solver would need to consider one distribution with a single mean. Of course, it could become bimodal after a street, but that would create a tree with 2 branches per node rather than 3+.
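A quick count makes the branching argument concrete (the node counts are purely illustrative, not from any real solver tree):

```python
# Rough size of a complete betting tree of a given depth:
# k discrete sizes per decision point vs a parametric distribution
# that only ever splits into at most two modes.
def tree_nodes(branching, depth):
    # nodes in a complete tree: 1 + b + b^2 + ... + b^depth
    return sum(branching ** d for d in range(depth + 1))

print(tree_nodes(3, 6))  # 1093 nodes with three fixed sizes per street
print(tree_nodes(2, 6))  # 127 nodes with one (at most bimodal) distribution
```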


Quote:
Originally Posted by dude45
Quantum computing will be mainstream by 2030. Wonder how close we'll be able to get to truly solving poker then.
I don't think more computing power would make a difference without reframing the approach. What would the impact of 10x or 100x computing power really be? Consider that the same can already be accomplished by letting computers run 10 or 100 times longer, or just buying 10 times the equipment.

Another factor to consider is that poker is far from the most pressing dilemma faced by man; state-of-the-art computing tech is used in other domains first. Poker is several generations behind: we haven't really caught up to GPU computing, and there are almost no machine learning approaches that I know of, or the ones that exist have very low investment behind them.

Consider for example SHAttered, when Google broke SHA-1, or AlphaZero, which dominated chess. They applied massive amounts of server computing power to the challenge, at a cost in excess of millions of dollars. No one will do that for poker.

When quantum computing is released, maybe poker will already be adopting GPUs for calculations instead of CPUs.
03-01-2024 , 10:20 AM
It's a cool idea, but in my opinion, ultimately not that useful.
Good study tools and methods give you stuff you can remember and help you visualize an (imperfect) model of the game. In fact, solvers are moving in the direction of being less mathematically exact in favour of quicker, more memorable outputs.
03-06-2024 , 12:26 AM
In real life, bet sizing is not continuous. Show me any real poker game where you can bet any amount other than a finite number of pennies (or whatever the smallest unit of currency in that country is).

A line from a guy I went to grad school with comes to mind (paraphrasing): "The human mind doesn't always understand how big finite things are allowed to be." Discrete vs. continuous isn't really the issue. The issue is that current computing power doesn't let your finite space get big enough.
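To put a number on the finite-but-huge point (the stack and min-bet figures here are made up):

```python
# Even "continuous" real-world bets are a finite set of pennies.
# With a $100 stack and a $1 minimum bet, count the legal bet sizes
# at one-cent granularity at a single decision point.
MIN_BET_CENTS = 100
STACK_CENTS = 10_000
legal_sizes = STACK_CENTS - MIN_BET_CENTS + 1
print(legal_sizes)  # 9901 distinct sizes at one decision point
```

9901 options at one node is tiny by itself; the explosion comes from multiplying such counts across every street and runout.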
03-07-2024 , 07:20 AM
Quote:
Originally Posted by CallMeVernon
In real life, bet sizing is not continuous. Show me any real poker game where you can bet any amount other than a finite number of pennies (or whatever the smallest unit of currency in that country is).

A line from a guy I went to grad school with comes to mind (paraphrasing): "The human mind doesn't always understand how big finite things are allowed to be." Discrete vs. continuous isn't really the issue. The issue is that current computing power doesn't let your finite space get big enough.
Ok Democritus.

We don't use continuous mathematics only to accurately represent continuous phenomena; we also use it to simplify discrete phenomena of magnitudes too large to understand by individual consideration of the parts.

We don't consider each individual cent of bet size, in the same way that we don't consider each atom when we do thermodynamics, continuum mechanics or even classical mechanics; we simplify objects to continuous masses, and thus we are able to achieve better results.

We see a similar paradox, of achieving higher quality in the long run by foregoing the expectation of exactitude, when we compare analog and digital information processing. Analog storage has the immediate promise of capturing phenomena in a relatively continuous medium, while storing them as binary structures supposes a loss of quality. In the long run, however, degradation is much more easily avoided in a binary medium, and the fidelity when copying binary media is 100%. This is why analog media like VHS are of such poor quality, and why the surviving analog media were converted to digital a long time ago.

Quote:
Originally Posted by aner0
It's a cool idea, but in my opinion, ultimately not that useful.
Good study tools and methods give you stuff you can remember and help you visualize an (imperfect) model of the game. In fact, solvers are moving in the direction of being less mathematically exact in favour of quicker, more memorable outputs.
See above: you achieve more memorable outputs by simplifying discrete phenomena to continuous models. Continuous analysis foregoes the expectation of exactitude and, by doing so, ends up being much more precise.
03-07-2024 , 05:51 PM
Quote:
Originally Posted by LoveThee
See above: you achieve more memorable outputs by simplifying discrete phenomena to continuous models. Continuous analysis foregoes the expectation of exactitude and, by doing so, ends up being much more precise.
From my own experience as a player and a coach, I'm going to have to disagree.
I know roughly how spots are supposed to look in real theory: there's going to be somewhat of a sizing bell curve at each node, and some nodes may have multiple bell-curve-looking peaks. I don't need to look at a solver to know that.

In practice, you're playing one hand at a time, and this continuous approach would lead to RNGing every single decision and assigning a different sizing to each number on the RNG.
If you want to jerk off about how smart your strategy is, you can do that, but you won't be able to build as broad a mental map of the game as someone with a more schematic approach would.

It's the same reason we learn better by categorising hands into buckets like "value", "bluffs", "draws", etc., even though the actual mathematical explanations would be continuous rather than discrete for each hand class. You're not a computer; you're an animal that can only fit things inside a tiny little monkey brain by slicing them up and compacting them.

Last edited by aner0; 03-07-2024 at 05:56 PM.
03-08-2024 , 12:05 AM
Quote:
Originally Posted by aner0
From my own experience as a player and a coach, I'm going to have to disagree.
I know roughly how spots are supposed to look in real theory: there's going to be somewhat of a sizing bell curve at each node, and some nodes may have multiple bell-curve-looking peaks. I don't need to look at a solver to know that.

In practice, you're playing one hand at a time, and this continuous approach would lead to RNGing every single decision and assigning a different sizing to each number on the RNG.
If you want to jerk off about how smart your strategy is, you can do that, but you won't be able to build as broad a mental map of the game as someone with a more schematic approach would.

It's the same reason we learn better by categorising hands into buckets like "value", "bluffs", "draws", etc., even though the actual mathematical explanations would be continuous rather than discrete for each hand class. You're not a computer; you're an animal that can only fit things inside a tiny little monkey brain by slicing them up and compacting them.
What do you mean by node? Some solvers refer to spots as nodes, but I think in this case you mean a mode, correct? As in the property of a distribution in statistics.

Something that may not be clear from the OP is that I am a computer programmer, so I'm approaching this from the perspective of someone who can build and modify solvers. So I don't particularly need the system to be simple enough for a human to understand, although I am interested in those kinds of solves for games like chess and GPS routing as well.

That being cleared up, humans can form their own mini mental models of computer models through sufficient interaction with them; most of the top chess players right now were born into the computing era and have played with engines since they were little. The same can be said of GPS routing, but only if you are paying attention, I guess; it's also possible to become dependent on the engine if you overuse it. That's where this particular comparison stops working, though: there are no rules saying we can't navigate without Google Maps, so we have no incentive to lose our dependency on it.

So in short, AI systems don't NEED to be designed with human comprehensibility as a consideration, devs can focus on beating the system with raw power, and humans will learn by exposure.
03-08-2024 , 12:20 AM
So multimodality is clear, yes, whether we are talking about the bet-sizing distribution component of our strategy, or about the ranges of our opponent. That's like baby stuff, try to keep up.

Something else to consider: bet sizing isn't the only object we can analyze with real (continuous) mathematics; ranges can be treated this way as well. If we assume a given hand ranking, we don't need to consider our opponent's matrix of individual hands, but rather a continuous range of hands (a preflop range can be simplified to "top 30%" rather than a matrix, perhaps with a polarization parameter and some other variables). Granted, this is only useful on the first and last streets, branching from both ends and reaching maximal complexity towards the center; we've seen this approach in chess as well, with opening databases and endgame tablebases.

In linear algebra terms, this would be a constructor approach: you deal with the constructor of the matrix when possible, expanding the matrix only when necessary (when the flop branches out the showdown equity of your and your opponent's ranges). Polarization is only really relevant near the river, or preflop when short-stacked.
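A toy sketch of that constructor idea (the ranking list is a made-up ten-hand fragment, not a real preflop ranking): the range lives as a (ranking, fraction) pair, and concrete hands are only materialised on demand.

```python
# Hypothetical hand ranking, strongest first (illustrative fragment).
RANKING = ["AA", "KK", "QQ", "AKs", "JJ", "AQs", "TT", "AKo", "99", "AJs"]

class TopRange:
    """A preflop range stored as a fraction of a ranking, not a matrix."""

    def __init__(self, fraction):
        self.fraction = fraction  # e.g. 0.3 -> top 30% of RANKING

    def expand(self):
        """Materialise the concrete hand list only when a street needs it."""
        n = round(len(RANKING) * self.fraction)
        return RANKING[:n]

r = TopRange(0.3)
print(r.expand())  # the top 30% of the toy ranking
```

Until `expand()` is called, every range-level computation only touches one float.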

Last edited by LoveThee; 03-08-2024 at 12:29 AM.
03-08-2024 , 12:22 AM
Quote:
Originally Posted by aner0
In practice, you're playing one hand at a time
This is categorically wrong, both in computational analysis and in amateur play or above. Even the most basic player thinks about villain's possible hands, and even amateur players think about the range that they represent.

Last edited by LoveThee; 03-08-2024 at 12:29 AM.
03-08-2024 , 11:16 AM
Quote:
Originally Posted by LoveThee
This is categorically wrong, both in computational analysis and in amateur play or above. Even the most basic player thinks about villain's possible hands, and even amateur players think about the range that they represent.
if you say so
03-10-2024 , 06:32 PM
I do like the idea of having a sort of sliding bet-size scale built into a solver, so that you can tell exactly at what bet-size threshold a call becomes a fold, etc. There might be a way to accomplish this in the near future with a solver that samples a multitude of bet sizes, then maps the results and fills in the gaps with a regression. It would be more complicated than a simple linear regression, though, due to the downstream effects aner0 brought up.
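A sketch of what that sampling-plus-interpolation might look like (`ev_call` below is a made-up closed-form stand-in for a real per-size solver run, and all the numbers are assumptions):

```python
# Sample calling EVs at a few bet sizes, then linearly interpolate
# to estimate the size at which calling stops being profitable.
def ev_call(bet, pot=1.0, equity=0.30):
    # Toy model: EV of calling `bet` into `pot` with fixed equity.
    # A real solver would produce this number per size instead.
    return equity * (pot + 2 * bet) - bet

def fold_threshold(sizes):
    pts = sorted((b, ev_call(b)) for b in sizes)
    for (b0, e0), (b1, e1) in zip(pts, pts[1:]):
        if e0 >= 0 > e1:  # EV crosses zero between b0 and b1
            return b0 + (b1 - b0) * e0 / (e0 - e1)  # linear interpolation
    return None  # no crossing among the sampled sizes

print(fold_threshold([0.25, 0.5, 0.75, 1.0, 1.5]))
```

With this toy EV model the crossing lands at a 0.75-pot bet; the same scaffold would work with solver-generated EVs, subject to the downstream-street caveats above.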

You would ultimately have to model each possible bet size, or model enough to create a linear model as I am suggesting. I'm not sure what you mean about only having to model the mean instead of multiple bet sizes. Modeling the mean is not enough.

Modern solvers take a single bet size and then map the range of all responses within a given set of parameters (call, fold, raise to x, raise to y, etc.) all the way up to the river. Adding bet sizes to early streets increases the complexity of the solve exponentially, and quickly exceeds the capacity of current generation computers.

I think maybe you're assuming that only the hand ranking is relevant, which would enable you to create an algebraic equation to maximize EV. That's not the case, though, as the way the rankings interact with our opponent's range is relevant. Middle pair and a flush draw might have similar raw equity vs. our opponent's range, but the optimal response, based on the way that equity is distributed over every possible runout, can be drastically different.