01-17-2014 , 11:23 AM
There are, IIRC, 22100 unique flops. (We can reduce this number using suit isomorphisms, but that may or may not be convenient, and the big idea here will follow through even if we use them -- it just makes things more complicated to describe.) There are a couple reasons we might want to choose a relatively small subset of flops that somehow accurately represents the full set:
• we want to reduce the size of a decision tree describing some model poker game
• we want to study in depth how players' ranges interact, etc., on a representative subset of boards

To ensure that our subset accurately represents the real game, we might want it to reproduce certain properties of the full set of flops, such as the frequencies that:
• any particular single card comes.
• a flush draw of a particular suit comes.
• a monotone flop of a particular suit comes.
• a paired board with the pair being a particular rank comes.
• a 3-straight board of any rank comes.
• a board with any particular one-gap in the ranks comes.
• a board with any particular two-gap in the ranks comes.

How can we choose such a subset of flops?

Suppose there are P properties we want to be correct, e.g. the frequency of the A♠ being dealt, the frequency of the K♠ being dealt, the frequency that a flop is monotone in spades, and so on. Make a vector, B, having the correct frequencies, in some fixed order. For example, the correct frequency with which a flop will contain the A♠ is

nchoosek(51,2) / nchoosek(52,3) = 1275 / 22100 = 0.057692

Next, number the flops from 1 to 22100, and make a Px22100 matrix A such that A_{i,j} = 1 if flop j has property i, and 0 otherwise. So, A is just a matrix of zeros and ones indicating which flops have which properties.

Then, let x be a vector of length 22100 specifying how often to deal each flop. For example, to represent dealing every flop with equal probability, each entry in this vector will be 1/22100. If we wanted to deal a single flop 100% of the time, then the entry corresponding to that board would be 1 and all the other entries would be 0.

We'd like to find an x (which indicates a set of flops and frequencies to deal each of them) such that it has the properties of the full game as described above and also such that most entries are 0, so that we don't have to deal with too many boards.

x will correctly reproduce the frequencies of the full game if Ax=B. Any particular row of A, times x, gives the frequency that a random draw from our subset of flops has a particular property, and the corresponding row of B has the correct value from the full game. Take a moment to convince yourself of this. Certainly, if x has all entries 1/22100, it satisfies the equation by definition of B, but we can also imagine solving it with a vector with many fewer non-zero entries.
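As a concrete sketch, here is how A, B, and the uniform x could be built in Python for just the 52 single-card properties (truncated for brevity; the full version would append rows for monotone flops, paired boards, and so on). All names here are my own, not from the post:

```python
from itertools import combinations
from math import comb

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]

flops = list(combinations(DECK, 3))          # all C(52,3) = 22100 flops
assert len(flops) == 22100

# Row i of A: indicator of "flop contains card i", one column per flop.
A = [[1 if DECK[i] in flop else 0 for flop in flops] for i in range(52)]

# B: correct full-game frequency of each single-card property.
B = [comb(51, 2) / comb(52, 3)] * 52         # 1275/22100 for every card

# The uniform x (deal every flop equally) satisfies Ax = B.
x = [1.0 / len(flops)] * len(flops)
for i in range(52):
    lhs = sum(a * xi for a, xi in zip(A[i], x))
    assert abs(lhs - B[i]) < 1e-12
```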

So we need to solve Ax=B. This is kind of like the classic least squares problem, except that we want all entries in our solution vector x to be nonnegative, since it doesn't make sense to deal a flop with a negative frequency. Some code that solves nonnegative least squares problems is available here:

https://software.sandia.gov/appspack...8c-source.html
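As a self-contained illustration of the idea (this is a toy projected-gradient NNLS of my own, not the linked Sandia solver), here is the same kind of problem on a tiny made-up system:

```python
def nnls_pg(A, b, iters=20000, lr=0.05):
    """Projected gradient descent for min ||Ax - b||^2 s.t. x >= 0."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = 2 * A^T r
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project onto the nonnegative orthant
        x = [max(0.0, x[j] - lr * g[j]) for j in range(n)]
    return x

# Toy system: 2 "properties", 3 "flops".
A = [[1, 1, 0],
     [0, 1, 1]]
b = [1.0, 1.0]
x = nnls_pg(A, b)   # a nonnegative x with Ax very close to b
```

A real run on the full flop problem would of course use a production solver, but the structure of the problem is the same.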

A (non-unique) set of flops and dealing frequencies satisfying the constraints in the list above is here:

http://pastebin.com/XKLYhsSt

More constraints might lead to a set which is larger but more faithful to the full game.
01-17-2014 , 01:59 PM
Very interesting and well conceptualized. So, your ultimate goal is to come up with a relatively small set of flops, dealt at precise frequencies, so as to sufficiently approximate the full set of all possible flops to (hopefully) create a solvable game that is very close to actual holdem?
01-17-2014 , 03:01 PM
Interesting.

How do you go to the turn?

Do you start out with certain 'turn properties' that you also include in this flop least-squares approach, or do you just study the turn in a similar way, but now starting from the calculated flop subset and performing the least-squares approach on that?
01-17-2014 , 03:27 PM

I'm surprised you were able to get a set as small as 103 flops that satisfied all these constraints.

A couple of questions:

1) Is there some property of the solver that causes it to prefer solutions for x that are sparse (mostly zero)? I didn't see any reason why it should. And, if not, how did you manage to get such a sparse solution?

2) It appears this solver only computes exact solutions. It seems like you might want a solver that produces sparse solutions for x that "come close" to satisfying all the constraints.

I'm thinking maybe you actually used a different solver than the one you linked to?
01-17-2014 , 03:56 PM
Quote:
Originally Posted by _Storch_
tyty!

Quote:
Originally Posted by Paul V
Very interesting and well conceptualized. So, your ultimate goal is to come up with a relatively small set of flops, dealt at precise frequencies, so as to sufficiently aproximate the full set of all possible flops to (hopefully) create a solvable game that is very close to actual holdem?
Yea, I've used this to come up with tractable model games. The hope is to model postflop play well enough to get realistic preflop strategies. That is, we'd like the EV of seeing a flop here to be the same as in the real game.

Quote:
Originally Posted by Emus
Interesting.

How do you go to the turn?

Do you start out with certain 'turn properties' that you also include in this flop least-squares approach, or do you just study the turn in a similar way, but now starting from the calculated flop subset and performing the least-squares approach on that?
Well, if we didn't want to deal all possible turn and river cards following each flop, choosing a dozen or so randomly might not be all that bad. If we made different random choices for every single way we get to the later streets (of which there are many), then, when on early streets, we'd still expect to see any particular later street card w/ about the right frequency. (Not saying there's no problems with this, but I think it's not terrible.)

We could also imagine doing something smarter in the vein of this flop stuff. E.g. we could ensure the right frequency of any particular rank coming on the turn, etc. I've actually just chosen random turns and rivers, but I agree this would be better.
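The "choose a dozen or so randomly" shortcut could be sketched like this (function name and card encoding are my own choices, not from the post):

```python
import random

def sample_turns(flop, n=12, rng=None):
    """Randomly pick n turn cards to follow a given flop -- the
    'choose a dozen or so randomly' shortcut described above."""
    rng = rng or random.Random()
    deck = [r + s for r in "23456789TJQKA" for s in "cdhs"]
    remaining = [c for c in deck if c not in flop]   # 49 live cards
    return rng.sample(remaining, n)

turns = sample_turns(["As", "Kd", "7c"])
```

Because each node makes its own independent random choice, early-street expectations over later cards stay roughly correct even though each individual line sees only a few runouts.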

Quote:
Originally Posted by egj

I'm surprised you were able to get a set as small as 103 flops that satisfied all these constraints.

A couple of questions:

1) Is there some property of the solver that causes it to prefer solutions for x that are sparse (mostly zero)? I didn't see any reason why it should. And, if not, how did you manage to get such a sparse solution?
I didn't write the solver, and that's just the way the solution came out, so yea it looks that way. I don't think it's surprising, though. The system is very under-determined. Specifying that each card comes with the right frequency gives us 52 linear constraints, right frequencies of monotone flops give us 4 linear constraints, etc, and we end up with a bit over 100 constraints total. So if NNLS is like LS, it's no surprise that we need about the same number of nonzero coefficients to satisfy them.

Quote:
Originally Posted by egj
2) It appears this solver only computes exact solutions. It seems like you might want a solver that produces sparse solutions for x that "come close" to satisfying all the constraints.

I'm thinking maybe you actually used a different solver than the one you linked to?
The first time I looked at this problem, I implemented a genetic algorithm to solve it, and it could look for solutions like this, but it didn't work very well. I agree it'd be an interesting way to go.

It's been a few years since I did this, but I'm pretty sure I used the code that I linked to.
01-17-2014 , 04:40 PM
i hope regulated us poker actually has some way to detect and destroy botters
01-17-2014 , 04:41 PM
wow that's pretty complex. my feeble mind can only recognize flops in a few variables:

rag flop (no high cards, uncoordinated, likely to miss villains range)
paired flop & trips flop (88x...999)

then there's draw potential, for most flops listed above:

flush potential flop (2+ one suit)
straight potential (connected cards, possibly in villains range)

i'm sure i missed a bunch of possible situations, but those types of flops come to mind when i think of minimizing them to a subset. i'm sure your formula will do a much more precise job tho!
01-17-2014 , 05:14 PM
Quote:
Originally Posted by DjSkyy
i hope regulated us poker actually has some way to detect and destroy botters
just because you don't understand something doesn't mean someone's trying to cheat
01-21-2014 , 03:22 AM
I have been thinking about this as well trying to create maybe a dozen archetype flops to build some strategies. But I am not a math guy and don't understand a lot of your post.

Could you please explain to a layman?

The 103 flops are supposed to represent the game so I was thinking I would start looking at the most common flops first. Is the number in front of each flop(from link) the frequency of that flop?

thanks
01-21-2014 , 06:04 PM
Quote:
The 103 flops are supposed to represent the game so I was thinking I would start looking at the most common flops first. Is the number in front of each flop(from link) the frequency of that flop?
yes.

Quote:
Originally Posted by mike1270
I have been thinking about this as well trying to create maybe a dozen archetype flops to build some strategies. But I am not a math guy and don't understand a lot of your post.

Could you please explain to a layman?
Sure, so the big idea behind my use of this stuff is as follows --

We can imagine drawing the entire game of HUNL as a large decision tree where each point represents a spot where a player has to make a decision, and each possible decision leads to another decision point, etc. This tree contains every possible spot in HUNL, so it's huge. (PM if you want a fuller description of this.)

So to do something useful with a decision tree, we have to make approximations to make it smaller. Like, we limit players to use only a few betsizings, say, half pot, full pot, and all-in. We reduce or combine the different hole card possibilities.

We can also make the game smaller by limiting the possible board runouts. But if we aren't careful about how we do this, it could lead to nonsense. Like, if only two flops are possible: KJ7r and K42r, then we will find that any hand with a K, J, 7, 4, or 2 is valuable preflop, and other cards aren't. 54s will play preflop as if it tends to make a mediocre made hand or air (since it does, half the time each) and not like it tends to make draws, which it does in the real game. So we need to choose our set of flops carefully. Maybe we choose them so that they reproduce properties of the real game, like the chance any particular card comes, etc.

The rest of the math up there is just a way to express those constraints as a system of equations, and then solving those equations gives us a set of frequencies with which to deal some set of flops so that we get all those frequencies right.
01-21-2014 , 10:59 PM
thanks

Since my purpose and time are limited, do you have any suggestions on how to choose a smaller number of representative flops? I am thinking about it from an 80/20 perspective right now, so I am looking for a smaller number of flops that might represent 80% of the game. For example, maybe only one flop that is 8-high, as these flops are less common than A-, K-, or Q-high flops.

thoughts? suggestions?
01-25-2014 , 02:20 PM
Quote:
Originally Posted by mike1270
thanks

Since my purpose and time are limited, do you have any suggestions on how to choose a smaller number of representative flops? I am thinking about it from an 80/20 perspective right now, so I am looking for a smaller number of flops that might represent 80% of the game. For example, maybe only one flop that is 8-high, as these flops are less common than A-, K-, or Q-high flops.

thoughts? suggestions?
Tbh, my experience is that small changes in the flop can have larger effects than you might expect on how players' ranges interact. E.g. J73r can be a lot different than T73r. This is especially true if ranges were defined somewhat preflop, as in a 3bet pot.

I think that poker is a game of small edges that add up, and it's important to understand these subtle effects to find those edges.

I guess what I'm trying to say is that I think it's dangerous to group too many not-quite-alike things together. It might be best to just accept that there are lots of strategically-unique boards. By studying a few of them in depth, you'll get a feel for the details that turn out to be important, and it'll be easier to understand others on the fly.

In any case, I haven't really done any other work on grouping flops...
01-27-2014 , 02:55 PM
Great post/topic!

I think using suit isomorphisms would definitely make sense here, you should be able to further reduce the # of flops considerably. (Full game is only 1755 flops when using isomorphisms, fwiw.)
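For reference, the 1755 figure can be checked by enumerating all 22100 flops and collapsing them under the 24 suit permutations (a quick sketch of my own):

```python
from itertools import combinations, permutations

# Cards as (rank, suit) pairs: 13 ranks x 4 suits.
DECK = [(rank, suit) for rank in range(13) for suit in range(4)]

def canonical(flop):
    """Lexicographically smallest relabeling of the flop's suits
    over all 4! suit permutations."""
    return min(
        tuple(sorted((rank, perm[suit]) for rank, suit in flop))
        for perm in permutations(range(4))
    )

flops = list(combinations(DECK, 3))
classes = {canonical(f) for f in flops}
print(len(flops), len(classes))  # 22100 flops, 1755 suit-isomorphism classes
```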

I guess the modified conditions look something like this:
Quote:
• monotone/2-suited/rainbow board comes.
• any particular rank comes.
• a paired board with the pair being a particular rank comes.
• a 3-straight board of any rank comes.
• a board with any particular one-gap in the ranks comes.
• [any particular combination of two ranks comes.]?*
*This could replace the single rank & paired board conditions, and also ensures that all 2-pairs are hit with the correct frequency.

Last edited by plexiq; 01-27-2014 at 03:07 PM.
03-28-2014 , 07:15 AM
Among the flops that you have listed, can you confirm that some carry multiple properties?
For example
------ FLOP 0.0016284 3c2c4d
contains both a 3-straight and a flush draw, but there is no rainbow board with the same ranks.
I am trying to use your subset of flops to study my own game. Have you tried to generate a different (smaller) subset of flops?
03-28-2014 , 08:58 AM
Interesting topic. I could see lots of applications. For instance, it seems like something like this could also be used to seed Monte Carlo runs to achieve faster convergence.

Have you considered how the pre-flop action could skew the frequencies of certain boards? For instance several limpers could indicate more aces and deuces remaining in the stub on average with fewer mid value cards, or button vs blinds battles, when all previous players folded, could result in boards with more As and Ks.

I have a method that can estimate the effects of card removal based on distributing each players hand across their playing or folding range and the impact can be fairly significant.

For example if you have a good sample of open PFR frequencies for a player on the button showing 45%, the 6 or 7 open folds that must always occur first make the deck more heavily weighted toward higher cards, especially aces. So a 45% frequency would actually be representative of something around a 40% range.

Last edited by TakenItEasy; 03-28-2014 at 09:03 AM.
03-28-2014 , 12:34 PM
Quote:
Originally Posted by Qlka
Among the flops that you have listed, can you confirm that some carry multiple properties?
Yes of course

Quote:
Originally Posted by Qlka
I am trying to use your subset of flops to study my own game. Have you tried to generate a different (smaller) subset of flops?
yes, if we want to satisfy fewer constraints we can do so with fewer boards
03-28-2014 , 12:39 PM
Quote:
Originally Posted by TakenItEasy

Have you considered how the pre-flop action could skew the frequencies of certain boards?
yea you can think of the frequencies given as sort of the baseline chance of a particular board being dealt. just like in the real game, the chance without any info about players' ranges is exactly 1/22100 or w/e. in applications, card removal effects may affect those numbers somewhat.
03-28-2014 , 01:47 PM
Wait, so why are the flops from 94 ranked in a weird order? 93 is the flop type that would be the most frequent, right?
03-28-2014 , 07:26 PM
Quote:
Originally Posted by yaqh
yea you can think of the frequencies given as sort of the baseline chance of a particular board being dealt. just like in the real game, the chance without any info about players' ranges is exactly 1/22100 or w/e. in applications, card removal effects may affect those numbers somewhat.
I was actually thinking about how applicable using pre-prescribed board data would be in an application I'm creating that actually can show the impacts of card removal from players live ranges as well as folding ranges.

Setting that aside, for now. I'm in the process of creating a kind of simulator or trainer app that will see many flop situations quickly.

My plan was to create a simple pre-flop bot to get me to the flop quickly but using a shuffler that would be dealing randomly except for the users hand that could be pre-defined or chosen randomly from a playable range set up by the user. That would eliminate all of the pre-flop mucked hands.

I would pre-assign opponents' ranges from a large library of ranges for various raising/calling situations, assigning a different set of ranges to each player based on position and style. The user would also assign a set of ranges for their own pre-flop action. Therefore I could eliminate the time spent on the pre-flop but still analyse flops vs somewhat realistic ranges and still have the context of the pre-flop action, which is important for realistic simulations including reads and pot sizes.

Of course I'd be missing all of the out of line action from preflop players for squeeze plays or light 2/bet 3/bet situations but I'm more concerned with deep stack applications where the pre-flop is mostly trivial in the games I play, often with 200 bb+ stacks.

To analyse the flop quickly, all ranges will be broken down and displayed graphically in such a way that I could see relative changes between flops easily.

I could also run batch Monte-Carlo flops to get pre-flop to flop equity perhaps showing how the changes in equity will change with relation to draws and such with some kind of frequency data related to those changes.

This wouldn't take into account anything but "standard" pre-flop action of course, but I'm not that concerned with analyzing pre-flop poker. I could add some bluff range percentages but I'd be speculating a little too much on ranges, which could end up being less helpful than more.

Of course using predefined board textures that could represent full game analysis may be ideal for this, but the problem I see is that they are presumably dealt based on a full deck, and all hole cards would be dealt after the distribution of boards was created.

Therefore I was wondering how legitimate or practical this method would be for my situation. To begin with it would be as if the hole cards were dealt from one deck and the board cards would be dealt from a second deck which would be ridiculous. Even if it eliminated card redundancy by eliminating flops, the full game experience would suddenly have many holes.

After thinking about this a little more, I think I have a solution that could work for single hands though batch runs for given hole cards still wouldn't work.

If a specific flop is randomly selected from the set of flops first, following whatever frequency rules are required, then the flop cards could be removed from the deck before dealing hole cards. Even though it's backwards, I believe it may still be a legitimate simulation that could work for a single hand with a given set of hole cards. Batch runs would only work for given sets of range definitions.
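That "flop first, then hole cards" order could be sketched like this (the function name and the two-flop table are hypothetical, purely for illustration):

```python
import random

def deal_flop_first(flop_table, rng=random):
    """Sample a flop from a weighted subset, then deal hero's hole
    cards from the 49 cards that remain -- the 'backwards' dealing
    order described above."""
    flops, weights = zip(*flop_table)
    flop = rng.choices(flops, weights=weights, k=1)[0]
    deck = [r + s for r in "23456789TJQKA" for s in "cdhs"]
    remaining = [c for c in deck if c not in flop]
    hole = rng.sample(remaining, 2)
    return flop, hole

# Hypothetical two-flop table with dealing frequencies:
table = [(("Kc", "Jd", "7h"), 0.6), (("Kd", "4s", "2c"), 0.4)]
flop, hole = deal_flop_first(table)
```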

While running batch runs for the opponents' hands is less important than running against ranges, it's still important to be able to do this analysis for defined player hands, but again, even a single hand chosen would create issues on many flops for batch runs.

Perhaps some fuzzy data could be used where each flop would have alternatives that could be autoselected to avoid at least 2 hero cards.

Last edited by TakenItEasy; 03-28-2014 at 07:55 PM.
03-29-2014 , 11:57 AM
Very interesting stuff! 103 surely is a good improvement on 1,755. I guess my only question relates to the psychological side: this number might be easier for a bot to handle, but it's still too many to really have an accurate psychological model that relates to the important causal factors. Have you done any work on this side?
03-29-2014 , 01:00 PM
After looking at the list the ranks and suits are not all represented equally. It seems like the Ten will be much more valuable than the Ace.

Also the frequencies don't seem to make sense. The TTT should be the least likely but it's ranked at #15?

Maybe that was a random choice for a rainbow flop? I think your rules need to be more descriptive.
04-08-2014 , 09:37 PM
Quote:
After looking at the list the ranks and suits are not all represented equally. It seems like the Ten will be much more valuable than the Ace.

Also the frequencies don't seem to make sense. The TTT should be the least likely but it's ranked at #15?

Would be interested in answer too.