A few posters, such as punter11235, claimed that there is better software available on the market for solving endgames given ranges for both players (perhaps software that fully accounts for card removal). I once looked into this and my understanding was that the best tool assumed just one bet size for all situations. While this may work very well for post-mortem analysis of human poker play, it's pretty clear that an agent that assumed just one bet size was available for the opponent would get creamed playing against humans of this caliber.

The humans were certainly willing to make very small bets or huge overbets, particularly if they thought our algorithm had a weakness in responding to those. So we opted to use many different bet sizes to protect ourselves from bet-size exploitation (the version we used at the beginning had 8 different sizes for the first river bet, plus fold/check), at the expense of having to use some card abstraction and not fully account for card removal.

Some of the humans informed me that there's software available now that uses 2-3 sizes and possibly doesn't use any card abstraction. I still think that using 8 sizes with card abstraction is much better against top humans than just 2-3 without it, though it would be interesting to run comprehensive experiments to test this. So I'm not convinced that there actually exists other software out there that is better for this than the approach we used, though I have not done a thorough investigation, and would be happy to hear from people familiar with the state-of-the-art tools.
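To give a rough feel for the trade-off described above, here is a hypothetical back-of-the-envelope sketch (not the authors' code, and the raise cap is an assumption) of how the number of distinct river betting lines grows with the number of allowed bet sizes. More lines means a bigger game tree, which is what gets traded against card abstraction:

```python
# Hypothetical sketch: count terminal betting lines on the river in a
# heads-up no-limit game with `num_sizes` allowed bet/raise sizes and a
# cap on the number of raises. Illustration only; real solvers also
# track stacks, all-ins, and position.

def count_lines(num_sizes: int, max_raises: int) -> int:
    """Number of distinct terminal betting sequences at one river node."""

    def facing_bet(raises_used: int) -> int:
        n = 2  # fold or call both end the betting
        if raises_used < max_raises:
            n += num_sizes * facing_bet(raises_used + 1)  # one branch per raise size
        return n

    # The first player to act may check or open-bet any of the sizes.
    check_branch = 1 + num_sizes * facing_bet(0)  # check-check, or check then a bet
    bet_branch = num_sizes * facing_bet(0)        # an open bet of each size
    return check_branch + bet_branch

if __name__ == "__main__":
    for s in (2, 3, 8):
        print(f"{s} sizes -> {count_lines(s, max_raises=2)} betting lines")
    # 2 sizes ->   57 betting lines
    # 3 sizes ->  157 betting lines
    # 8 sizes -> 2337 betting lines
```

Even with only two raises allowed, going from 2-3 sizes to 8 multiplies the tree size by more than an order of magnitude, which is why the extra sizes had to be paid for with abstraction elsewhere.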
There are three commercially available postflop solvers, and I am positive that at least two of them (and I believe all three) fully account for card removal and use no abstractions. They differ in the number of bet sizes they allow (I know at least one of them lets you construct any game tree you want) and in their performance at a given number of bet sizes, stack depth, and street. As far as I know there was never a commercial product for this that allowed only one bet size. I'm reasonably sure that for river-only work you could use 8 bet sizes with no problem, although I'm not sure exactly how fast you need it to be.
I'm happy to discuss this more over PM if you are interested.
I forgot about this thread, but now, thanks to swc123, I see you quoted some of my statements with some skeptical remarks. For example this one:
A few posters, such as punter11235, claimed that there is better software available on the market for solving endgames given ranges for both players (perhaps software that fully accounts for card removal). I once looked into this and my understanding was that the best tool assumed just one bet size for all situations. While this may work very well for post-mortem analysis of human poker play, it's pretty clear that an agent that assumed just one bet size was available for the opponent would get creamed playing against humans of this caliber.
I don't know; I've just checked, and solving rivers with a 7x-pot stack, 6 bet sizes, and 4 raise sizes takes 5 seconds on my solver (not with the default settings, which are geared towards flop cases, but the mechanism to change them is available). That is solved to an accuracy of about 0.2% of the pot.
We also don't do any multithreading on the river because it was never needed so those are results from one core.
Obviously, with 32 cores available and this being a vital part of bot performance, it should be trivial to get it below 1 second with much better accuracy.
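The "0.2% of the pot" figure above is an exploitability bound: how much per hand a best-responding opponent could win against the computed strategy. For a toy zero-sum matrix game that quantity is easy to compute exactly; a minimal sketch (NumPy assumed, not from the post):

```python
import numpy as np

def exploitability(A: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
    """Exploitability of the strategy profile (x, y) in the zero-sum matrix
    game with payoff matrix A for the row player. Zero iff (x, y) is a
    Nash equilibrium; a solver's accuracy is usually reported this way."""
    br_row = float(np.max(A @ y))   # row player's best-response value vs y
    br_col = float(np.min(x @ A))   # column player's best-response value vs x
    return (br_row - br_col) / 2.0  # average gain of a best responder

# Rock-paper-scissors: the uniform profile is an exact equilibrium,
# so its exploitability is 0; pure "rock" loses 0.5 per game on average
# to a best response.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
uniform = np.ones(3) / 3
rock = np.array([1.0, 0.0, 0.0])
print(exploitability(rps, uniform, uniform))  # 0.0
print(exploitability(rps, rock, uniform))     # 0.5
```

A river solver does the same kind of best-response calculation over the betting tree and card combos, and "0.2% of the pot" means that gap, measured in pot fractions.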
It goes without saying that there is no abstraction or lossy bucketing, and blockers are taken into account. I don't consider anything else "solving".
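For readers less familiar with the terminology: accounting for blockers (card removal) means weighting an opponent's range by which combos are actually still possible given the board and your own hole cards. A minimal combo-counting sketch, with hypothetical function names:

```python
from itertools import combinations

SUITS = "cdhs"

def live_combos(rank1, rank2, dead_cards):
    """All combos of a two-rank hand class that don't use any dead card
    (board cards plus our own hole cards)."""
    c1 = [rank1 + s for s in SUITS if rank1 + s not in dead_cards]
    c2 = [rank2 + s for s in SUITS if rank2 + s not in dead_cards]
    if rank1 == rank2:  # pocket pair: unordered pairs of distinct cards
        return list(combinations(c1, 2))
    return [(a, b) for a in c1 for b in c2]

# Without blockers AK has 16 combos; holding the Ad removes 4 of them,
# and AA drops from 6 combos to 3.
print(len(live_combos("A", "K", set())))    # 16
print(len(live_combos("A", "K", {"Ad"})))   # 12
print(len(live_combos("A", "A", {"Ad"})))   # 3
```

A lossless river solver carries these per-combo weights through the whole computation; lossy bucketing merges combos that blockers would otherwise distinguish, which is exactly the accuracy cost being debated in this thread.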
I will leave the rest of the claims for now, but all my statements in this thread stand and, to be honest, aren't a revelation for anyone involved in serious poker programming.
In the competition it was mentioned that the competitors "weren't allowed to go to the restroom by themselves." Was that to prevent strategy discussion during play, or other forms of cheating? Also, were people significantly wagering on whether the humans or the AI would win?
This was a good analysis of the event. It was A4s though, not A4o
PokerStars Hand #531022966809: Hold'em No Limit ($50/$100 USD) - 2015/05/05 18:21:14 ET
Table 'ACPC Claudico_Polk vs. DougPolk' 2-max Seat #2 is the button
Seat 1: Claudico_Polk ($20000 in chips)
Seat 2: DougPolk ($20000 in chips)
DougPolk: posts small blind $50
Claudico_Polk: posts big blind $100
*** HOLE CARDS ***
DougPolk: raises $100 to $366
Claudico_Polk: raises $366 to $1098
DougPolk: raises $1098 to $4018
Claudico_Polk: raises $4018 to $10045
DougPolk: raises $10045 to $20000 and is all-in
Uncalled bet ($9955) returned to DougPolk
DougPolk collected $20090 from pot
*** SUMMARY ***
Total pot $20090 | Rake $0
Seat 1: Claudico_Polk showed [Ad 4d] and lost with the hand
Seat 2: DougPolk showed [9d 9s] and won ($20090) with the hand
"We came to this conclusion ourselves as well during the competition, and for this reason decided to take out the large bets for ourselves from the endgame solver partway through the competition, since this issue is most problematic for those bet sizes"
I thought they were not able to adjust the bot throughout the competition. That seems like a pretty unfair thing to do if you want to calculate a win rate for a pre-programmed bot and then brag about the match being a statistical tie vs. 4 of the best players in the world, while adjusting and fixing it halfway through.
Liked the analysis very much, although sometimes it seemed like it was written for poker illiterates.
^^ Yeah, I can't imagine what the point would be of a competition where the bot creators couldn't tweak the bot.
Anyway, the "competition" wasn't for bragging rights; it was for research. I think the professor's dumb comment about a statistical tie is muddling people's perceptions of the intentions of the project, which, if you read Sam's paper in its entirety, are clearly to learn and improve AI for solving complex real-world problems (generally speaking).