New algorithm for approximating ICM equities

10-14-2011 , 05:42 AM
Hi,

I'm a recent PhD graduate from Cambridge University and have been working on a new algorithm for calculating ICM equities, as the current Malmuth-Harville model can be somewhat inaccurate. I've produced something I'm fairly proud of, and testing has consistently shown it to be significantly more accurate than the current standard.
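For reference, the current standard being compared against is the Malmuth-Harville calculation: the probability a player finishes first is his share of the chips, the probability he finishes second given the winner is his share of the remaining chips, and so on. A minimal sketch (the function name and example numbers are just for illustration):

```python
from itertools import permutations

def malmuth_harville_equities(stacks, payouts):
    """Exact Malmuth-Harville ICM equities.

    stacks  : list of chip counts
    payouts : prize for 1st, 2nd, ... (may be shorter than stacks)

    P(player i finishes 1st)            = s_i / sum(stacks)
    P(player i finishes 2nd | j is 1st) = s_i / (sum(stacks) - s_j), etc.
    Equity_i = sum over paid finish orders of P(order) * payout.
    Cost is O(n!/(n-k)!) for n players and k paid places, so this is only
    practical for small fields.
    """
    n = len(stacks)
    k = min(len(payouts), n)
    total = sum(stacks)
    equities = [0.0] * n

    # Enumerate every ordered choice of the k paid finishing positions.
    for order in permutations(range(n), k):
        prob, remaining = 1.0, total
        for player in order:
            prob *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order):
            equities[player] += prob * payouts[place]
    return equities

# Example: 3 players, 50/30/20 payout of a $100 prize pool.
print(malmuth_harville_equities([5000, 3000, 2000], [50, 30, 20]))
```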

Details / justifications / comparisons can be found here:
https://viewer.zoho.com/docs/fHdacf

As a short comparison I've taken an example from the website of ICM Cruncher - http://www.pokercruncher.com/ipICMCruncher.html - and compared my new algorithm to the Malmuth-Harville:



My algorithm can also be adapted to cope with large player pools, something that is difficult to do with the Malmuth-Harville model. As an example I've assumed there are 40 players left in a satellite, the top 20 of which win a $100 ticket:
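For a sense of scale: evaluating Malmuth-Harville exactly for "finish in the top 20 of 40" means summing over 40!/20! (roughly 3 x 10^29) partial finish orders. One straightforward workaround - not my new algorithm, just a naive Monte Carlo version of Malmuth-Harville itself - is to sample finish orders from the MH distribution. A sketch with made-up stacks, since the example table isn't reproduced here:

```python
import random

def mh_satellite_equity_mc(stacks, ticket_value, seats_paid, trials=100_000):
    """Monte Carlo estimate of Malmuth-Harville equities for a satellite
    where the top `seats_paid` finishers each win `ticket_value`.

    Each trial samples the top finishers from the MH distribution:
    repeatedly pick the next finisher with probability proportional to
    stack size among the players not yet placed.
    """
    n = len(stacks)
    top_finish_counts = [0] * n
    for _ in range(trials):
        remaining = list(range(n))
        weights = [stacks[i] for i in remaining]
        for place in range(seats_paid):
            # Draw the player who takes the next paid finishing position.
            idx = random.choices(range(len(remaining)), weights=weights)[0]
            top_finish_counts[remaining[idx]] += 1
            del remaining[idx], weights[idx]
    return [ticket_value * c / trials for c in top_finish_counts]

# Hypothetical 40-player satellite, top 20 win a $100 ticket.
stacks = [1000 * (i + 1) for i in range(40)]   # made-up stacks
print(mh_satellite_equity_mc(stacks, 100, 20, trials=20_000))
```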

10-14-2011 , 10:19 AM
You don't make any mention of this, but recently there was a different proposal for a new ICM algorithm that also offers much improved computation time:

http://forumserver.twoplustwo.com/15...ments-1098489/

Am I right in assuming that your approach is different and independently developed? (I didn't really have a chance to do more than glance at your paper.) Anyhow, I'm sure you'll be interested in the other thread if you haven't already seen it.
10-14-2011 , 10:38 AM
Very interesting! This actually offers a way to perform more accurate equity calculations, not just being able to do them for larger fields.
10-14-2011 , 12:32 PM
Quote:
Originally Posted by egj
You don't make any mention of this, but recently there was a different proposal for a new ICM algorithm that also offers much improved computation time:

http://forumserver.twoplustwo.com/15...ments-1098489/

Am I right in assuming that your approach is different and independently developed? (I didn't really have a chance to do more than glance at your paper.) Anyhow, I'm sure you'll be interested in the other thread if you haven't already seen it.
I had come across this, and yes my approach is different.

That thread describes a clever method to estimate the traditional Malmuth-Harville equities for large player fields. My algorithm is based on a completely different model. That is, the equities produced are different from (and generally more accurate than) those generated by Malmuth-Harville. It is an additional bonus that my algorithm can also be adapted to cope with large player fields.
10-14-2011 , 02:31 PM
In... digesting

better make a new app and include an NE heat map!

Last edited by MSchu18; 10-14-2011 at 02:38 PM.
10-14-2011 , 06:17 PM
Thanks for sharing your efforts. My question is: how do you know what the true equity is, so as to see which method gives the better estimate?

Also, how can it be possible to obtain a proper equity in a large field spread over many tables, when a particular table may contain stacks that alter the nature of the game there? E.g. one table has many above-average stacks and yours is small, while another table has many small stacks like yours and only a single big stack, yet both tables are part of the same overall distribution of stacks. I mean, how can the answer be unique?
10-15-2011 , 09:38 AM
Quote:
Originally Posted by masque de Z
Thanks for sharing your efforts. My question is: how do you know what the true equity is, so as to see which method gives the better estimate?
It's all explained here - https://viewer.zoho.com/docs/fHdacf

Basically we can run discrete simulations for cases in which the chip stacks have a large common factor, and use Monte Carlo estimation to produce very accurate equities.
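For concreteness, here is roughly what such a benchmark simulation can look like. The exact transition rule isn't spelled out in this thread, so this sketch assumes the simplest symmetric rule: divide the stacks by their common factor, then on each step pick two surviving players at random and move one chip unit between them with equal probability; busted players take the next payout from the bottom.

```python
import random
from math import gcd
from functools import reduce

def random_walk_equities(stacks, payouts, trials=100_000):
    """Monte Carlo benchmark equities under a symmetric random-walk model.

    Assumed dynamics (not necessarily the paper's exact rules): divide all
    stacks by their greatest common divisor, then on every step pick two
    surviving players at random and move one chip unit from one to the
    other with probability 1/2 each way.  A player who hits zero is
    eliminated and takes the next payout from the bottom.
    """
    unit = reduce(gcd, stacks)
    units = [s // unit for s in stacks]
    n = len(stacks)
    # Pad payouts so every finishing position has a (possibly zero) prize.
    prizes = list(payouts) + [0] * (n - len(payouts))
    equity = [0.0] * n

    for _ in range(trials):
        chips = units[:]
        alive = list(range(n))
        place = n - 1                      # next finishing position to assign
        while len(alive) > 1:
            a, b = random.sample(alive, 2)
            winner, loser = (a, b) if random.random() < 0.5 else (b, a)
            chips[winner] += 1
            chips[loser] -= 1
            if chips[loser] == 0:
                alive.remove(loser)
                equity[loser] += prizes[place]
                place -= 1
        equity[alive[0]] += prizes[0]

    return [e / trials for e in equity]

# Example: stacks with a large common factor, 50/30/20 payouts.
print(random_walk_equities([4000, 3000, 2000, 1000], [50, 30, 20], trials=20_000))
```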

Quote:
Originally Posted by masque de Z
Also, how can it be possible to obtain a proper equity in a large field spread over many tables, when a particular table may contain stacks that alter the nature of the game there? E.g. one table has many above-average stacks and yours is small, while another table has many small stacks like yours and only a single big stack, yet both tables are part of the same overall distribution of stacks. I mean, how can the answer be unique?
There are many things the ICM model doesn't take into account - skill, position, how stack depth affects players' playing styles, etc. Table draw structure is just another thing it ignores.

It's hard enough as it is to calculate the equities under very basic assumptions; it would be far too difficult to try to account for these other factors.
10-15-2011 , 11:40 AM
This is a clever change to the standard ICM that is faster to approximate and gives results closer to Markov absorption probabilities. You are right to be proud of that.

I did not care for some of the statements in the paper. "It is widely accepted that the theoretically correct values of the" poker probabilities are equal to the Markov absorption probabilities. That's not true. There are well known empirical deviations, too significant to be explained by random chance. I suppose you could stretch "theoretical" broadly enough to exclude all empirical evidence, but you then have to have a theory. What theory says the poker probabilities should be equal to the Markov absorption probabilities? It's a first-order approximation only. It is better than the linear assumption (all chips have equal value) but that's all you can say.

Another example is, "We know the probabilities. . .because the game is assumed to be fair." I distinguish between assuming and knowing. You then repeatedly claim your algorithm is more accurate when what you mean is it gives results closer to another model. You blew off masque de Z with the claim that an accurate solution is "far too difficult." Your paper doesn't say your method gives results closer to something easy to compute, it says it's more accurate.

None of this takes away from the value or cleverness of the result; I just don't like the way you describe it. I think you should note that the differences between your algorithm and the standard one are much smaller than the differences between either one and observation.

Consider, for example, a different model of a poker tournament. I assume that blinds are $100/$200 and double every round. Players get a draw from a uniform distribution and call if they get over 0.75, except for the small blind, who calls with greater than 0.5, and the big blind, who is always in. Highest number takes the pot. Obviously this is a very crude model of a poker tournament, but it's a lot closer to the game than random infinitesimal chip exchange. I compare the three results (a rough code sketch of this toy model, with guessed-in details, appears at the end of this post):

Stack   Malmuth-Harville   Tex_Perkins   Me
2,000   115.5              108.4         115.8
3,000   164.7              160.9         147.6
4,000   206.9              207.8         208.6
5,000   241.8              246.0         244.0
6,000   271.0              277.0         283.9

I am closer to MH for the short stack, think the fourth stack has much less value than either other algorithm, we agree pretty much on the next two, and I give the top stack even more extra value over MH than you do. These are all consistent with the empirical observation that top stacks are worth more than ICM suggests and low stacks are closer in value to each other.

Of course, small tweaks to the algorithm can make large differences to the simulation results. So I don't propose my quick model as better than either MH or yours. I just point out that small increases in accuracy relative to a theoretical standard have little practical value.

More important is that the decisions poker players face are not to flip coins for infinitesimal stakes. The two important decisions are whether to play marginal hands, giving up a lot of small losses for the chance of occasional big gains at roughly zero EV, and whether to continue in a close-call big hand, essentially to make a coin flip for all or a big part of your stack. The decisions far from zero EV are not relevant, and the decisions for small stakes don't matter much.

For these questions, simulations give much different results than theoretical computations. The results depend a lot on small factors like whether your stack is slightly more or less than the next round of blinds, or whether a bet is for 95% or 100% of your stack.
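For anyone who wants to play with this, here is a rough sketch of a toy tournament along the lines described above. Several details the description leaves open are filled in with guesses: callers put in the big blind (or their stack if shorter), side pots are ignored, blinds double each time the button returns to the first remaining seat, and payouts are assumed to be 500/300/200 (the equity columns above total roughly 1,000, but the actual structure isn't stated). So it will not reproduce the table exactly.

```python
import random

def crude_tournament(stacks, payouts=(500, 300, 200), sb0=100, bb0=200):
    """One run of the crude tournament model sketched above.

    Assumptions not stated in the post: each participant contributes at
    most the big blind (folders only their posted blind), side pots are
    ignored, and blinds double every orbit.  Returns each seat's payout.
    """
    n = len(stacks)
    chips = list(stacks)
    alive = list(range(n))
    finish = [0.0] * n
    prizes = list(payouts) + [0] * (n - len(payouts))
    button, orbit = 0, 0

    while len(alive) > 1:
        sb, bb = sb0 * 2 ** orbit, bb0 * 2 ** orbit
        sb_seat = alive[(button + 1) % len(alive)]
        bb_seat = alive[(button + 2) % len(alive)]
        draws = {p: random.random() for p in alive}

        # Decide who plays the hand.
        in_hand = []
        for p in alive:
            if p == bb_seat:
                in_hand.append(p)                       # BB is always in
            elif p == sb_seat and draws[p] > 0.5:
                in_hand.append(p)
            elif p not in (sb_seat, bb_seat) and draws[p] > 0.75:
                in_hand.append(p)

        # Build the pot: callers put in the big blind, folders only their blind.
        pot = 0
        for p in alive:
            blind = sb if p == sb_seat else bb if p == bb_seat else 0
            bet = min(chips[p], bb if p in in_hand else blind)
            chips[p] -= bet
            pot += bet

        winner = max(in_hand, key=lambda p: draws[p])   # highest draw wins
        chips[winner] += pot

        # Eliminate busted players, worst finishing positions first.
        for p in [q for q in alive if chips[q] == 0]:
            alive.remove(p)
            finish[p] = prizes[len(alive)]
        button = (button + 1) % max(len(alive), 1)
        if button == 0:
            orbit += 1

    finish[alive[0]] = prizes[0]
    return finish

# Average over many runs to estimate equities for the example stacks above.
runs = 10_000
totals = [0.0] * 5
for _ in range(runs):
    for i, v in enumerate(crude_tournament([2000, 3000, 4000, 5000, 6000])):
        totals[i] += v
print([round(t / runs, 1) for t in totals])
```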
10-18-2011 , 08:46 AM
Quote:
Originally Posted by AaronBrown
This is a clever change to the standard ICM that is faster to approximate and gives results closer to Markov absorption probabilities. You are right to be proud of that.

I did not care for some of the statements in the paper. "It is widely accepted that the theoretically correct values of the" poker probabilities are equal to the Markov absorption probabilities. That's not true. There are well known empirical deviations, too significant to be explained by random chance. I suppose you could stretch "theoretical" broadly enough to exclude all empirical evidence, but you then have to have a theory. What theory says the poker probabilities should be equal to the Markov absorption probabilities? It's a first-order approximation only. It is better than the linear assumption (all chips have equal value) but that's all you can say.

Another example is, "We know the probabilities. . .because the game is assumed to be fair." I distinguish between assuming and knowing. You then repeatedly claim your algorithm is more accurate when what you mean is it gives results closer to another model. You blew off masque de Z with the claim that an accurate solution is "far too difficult." Your paper doesn't say your method gives results closer to something easy to compute, it says it's more accurate.

...
Well, I suppose if you disagree that the continuous ("cts") random walk model is widely accepted, then by definition the statement must be wrong.

However, I would argue for the validity of the cts random walk model mainly because it is the only solution that is universally well-defined independently of the blinds/betting structure. Saying that it is not empirically accurate is somewhat of a moot point, because it is clear that other aspects (such as blind structure, skill, strategy effects due to ICM considerations, etc.) affect the equities in reality. As it's unclear exactly how these change the real equities, if we are to compare ICM approximations I'd argue that using the cts random walk model as our standard is the only viable option.

To be clear, I am not saying that my algorithm reflects reality with more accuracy than the MH algorithm, just that it approximates the cts random walk solution with more accuracy. I suppose you're right that the distinction should be made clearer in the paper - if I get around to rewriting it I'll address the issue.

I'd also like to point out that the MH algorithm has little to no theoretical or empirical justification, so while it's hard to say which algorithm reflects reality with more accuracy, mine is at least more accurate according to the only measure we have. As such, I think my algorithm presents a clear improvement over current practice.
03-12-2012 , 05:41 PM
There's been some recent talk of this algorithm scattered around the STT strategy forum, but I thought it made sense to consolidate some of my thoughts/findings about it here...

Having read Ben's paper and his recent article here:

http://www.pokericmcalculator.com/en...rts-icm-model/

I have to echo much of what's been said here... The algorithm seems very interesting and clever, but many of the assumptions and conclusions are fairly eyebrow-raising...

All the claims of accuracy and "truth" of the equity results are only based on a different, mostly arbitrary, algorithm that also lacks any kind of observable connection to reality. I know this is technically stated in the article, but then saying about the sample hands: "Therefore, these simulations suggest (according to my intuition anyway) that the BR algorithm is not only more accurate with respect to the ICM model but also more reflective of real equities." is pretty far out there. If your intuition can tell you that a calling range of 78.9% is "more reflective of the real equities" than a 69.5% calling range, then your intuition is much much better than mine....

Also, when you say:

Quote:
This ignorance of realistic factors is a common criticism of the ICM, however (as far as I know) no viable alternative model has been implemented that does account for such complications.
There is my "Predictive Simulation" model that powers SnG Solver... It accounts for blinds/antes/position/future rounds (full disclosure: SnG Solver is a commercial app, I wrote it, and therefore everything I say is biased).

Nevertheless, this algorithm is intriguing, and I've implemented it and run it through my Nash STT simulator. I set it to play 6-handed (3 players playing a Roberts-ICM based Nash push/fold strategy, and 3 playing a Malmuth-Harville-ICM based Nash push/fold strategy). After 200k simulated games, I got this:



So overall, this shows the Roberts-ICM with a -0.36% ROI vs the MH-ICM. So, at least as far as short-stacked, super-turbo-style STTs go, I can't say that the Roberts ICM is an improvement.

EDIT: BIG CAVEAT: plexiq is also running a sim like what I've done and has told me in PMs that his results differ quite a bit. In the past when we've compared sim data, we've been very, very close, so one of us may have some kind of systematic error. I will be auditing my results and will update if I find anything. I'm sure we'll see some more posts on this.
03-13-2012 , 09:41 PM
Does anyone know the computational complexity of actually computing the real ICM values? Is it known to be NP-hard? If not, then it is conceivable that we could find an efficient algorithm to solve ICM exactly. It "smells" NP-hard to me, but what do I know.
03-13-2012 , 11:20 PM
Quote:
Originally Posted by sng_jason
EDIT: BIG CAVEAT: plexiq is also running a sim like what I've done and has told me in PMs that his results differ quite a bit.
Turns out that was just variance over the first 100k games. I also have the Roberts model with a negative ROI now.

Sample size: 1m
ICM ROI in sample: +0.18%
ICM ROI is >0 at p=0.017



The test setup was discussed in this thread.

The simulation data is available here:
http://www.holdemresources.net/simul...ts-a-ib.csv.gz

Format is: id, equity type of player 1..6, payout of player 1..6
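If anyone wants to crunch the dump themselves, something along these lines works. The local filename, the presence of a header row, the meaning of the "equity type" labels, and the buy-in used for the ROI figure are all assumptions here, so adjust as needed:

```python
import csv, gzip
from collections import defaultdict

# Filename and buy-in are assumptions; the URL above is truncated and the
# buy-in isn't stated (payouts are 65/35, so a normalized prize pool of 100
# split over 6 buy-ins is assumed here).
FILENAME = "results-a-ib.csv.gz"     # hypothetical local copy of the dump
BUYIN = 100 / 6

totals = defaultdict(float)   # equity-model label -> total payout
counts = defaultdict(int)     # equity-model label -> number of player-games

with gzip.open(FILENAME, "rt") as f:
    for row in csv.reader(f):
        # Row layout per the post: id, equity type of players 1..6,
        # payout of players 1..6.
        try:
            payouts = [float(v) for v in row[7:13]]
        except (ValueError, IndexError):
            continue          # header line or malformed row
        for label, payout in zip(row[1:7], payouts):
            totals[label] += payout
            counts[label] += 1

for label in totals:
    roi = (totals[label] / counts[label] - BUYIN) / BUYIN
    print(f"{label}: {counts[label]} player-games, ROI {roi:+.2%}")
```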
03-22-2012 , 02:32 PM
Quote:
Originally Posted by sng_jason
All the claims of accuracy and "truth" of the equity results are only based on a different, mostly arbitrary, algorithm that also lacks any kind of observable connection to reality. I know this is technically stated in the article, but then saying about the sample hands: "Therefore, these simulations suggest (according to my intuition anyway) that the BR algorithm is not only more accurate with respect to the ICM model but also more reflective of real equities." is pretty far out there. If your intuition can tell you that a calling range of 78.9% is "more reflective of the real equities" than a 69.5% calling range, then your intuition is much much better than mine....
I don't mean that my intuition tells me precisely what % calling range is optimal. The point of that example is to show that, given two situations which are identical in chipEV, the MH algorithm advocates a significant tightening as a short stack as opposed to a big stack, whereas the BR algorithm does not. This is what I'm referring to as "consistent with my intuition", although I admit my intuition is not based on extensive SnG experience, so others are free to disagree...

I'm honestly surprised that the sims show the MH-algorithm-based players to have an edge over BR-algorithm-based ones. Can you guys give more details on how your sims work? I.e., how are you calculating the 'Nash strategies'?
03-22-2012 , 05:26 PM
Some more details on the simulation setup:
I am using Fictitious Play on the simplified push/fold game, with the additional restriction that no more than 3 players can enter the pot. (Once 3 players are all-in, all remaining players are forced to fold.) For the simulation, only linear ranges were allowed (i.e., only 169 options for every range).

The linear Nash ranges obviously differ quite a bit from the Nash ranges in the "unrestricted" game, but I do not think this would create a bias toward one of the models. (I can re-run with unrestricted ranges if needed, but that will take much longer.)

Play of a single hand:
1) Calculate one fictitious play instance for every equity model. (Within each fictitious play instance, all players are optimizing in the same payoff model.)
2) For actual play, select the action of a player based on the FP results of his assigned equity model.

Game setup:
6 players, 65/35 payouts; half the players using Harville, half using Roberts; seating is random.


Would be happy to re-run the simulation if we can work out a better test setup. (Preferably with linear ranges though.)
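To make the fictitious play step concrete, below is a heavily simplified, self-contained sketch of the FP loop on a toy heads-up push/fold game: hands are uniform(0,1) numbers instead of real hole cards, strategies are single thresholds (loosely standing in for the 169 linear ranges), and the payoff function U is pluggable - chip-linear here, where the real simulator would plug in an MH-ICM or Roberts-ICM equity function for the multi-player game. All names (GRID, ev_tables, fictitious_play) are mine; this illustrates the loop, not the actual simulator.

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 169)           # 169 threshold options

def win_prob(x, y):
    """P(pusher's hand beats caller's hand | pusher >= x, caller >= y)."""
    if x >= 1.0:
        return 0.5                           # empty pushing range, weight is 0 anyway
    if x >= y:
        return ((x - y) + (1.0 - x) / 2.0) / (1.0 - y)
    return (1.0 - y) / (2.0 * (1.0 - x))

def ev_tables(s_sb, s_bb, sb, bb, U):
    """EV of every (push threshold, call threshold) pair for both seats."""
    A = min(s_sb, s_bb)                      # effective all-in amount
    k = len(GRID)
    ev_sb, ev_bb = np.empty((k, k)), np.empty((k, k))
    for i, x in enumerate(GRID):             # SB pushes hands above x
        for j, y in enumerate(GRID):         # BB calls hands above y
            w = win_prob(x, y)
            ev_sb[i, j] = (x * U(s_sb - sb)
                           + (1 - x) * y * U(s_sb + bb)
                           + (1 - x) * (1 - y) * (w * U(s_sb + A) + (1 - w) * U(s_sb - A)))
            ev_bb[i, j] = (x * U(s_bb + sb)
                           + (1 - x) * y * U(s_bb - bb)
                           + (1 - x) * (1 - y) * ((1 - w) * U(s_bb + A) + w * U(s_bb - A)))
    return ev_sb, ev_bb

def fictitious_play(s_sb, s_bb, sb, bb, U, iters=2000):
    """Each iteration, every seat best-responds to the opponent's empirical
    mixture of past strategies; the averages drift toward equilibrium."""
    ev_sb, ev_bb = ev_tables(s_sb, s_bb, sb, bb, U)
    sb_counts = np.ones(len(GRID))           # empirical strategy frequencies
    bb_counts = np.ones(len(GRID))
    for _ in range(iters):
        bb_mix = bb_counts / bb_counts.sum()
        sb_counts[np.argmax(ev_sb @ bb_mix)] += 1
        sb_mix = sb_counts / sb_counts.sum()
        bb_counts[np.argmax(sb_mix @ ev_bb)] += 1
    # Report the mixture-average threshold for each seat.
    return (float(GRID @ (sb_counts / sb_counts.sum())),
            float(GRID @ (bb_counts / bb_counts.sum())))

# Chip-EV payoff; swap in an ICM equity function for tournament payoffs.
push, call = fictitious_play(1500, 1500, 50, 100, U=lambda c: c)
print(f"SB pushes hands above {push:.2f}, BB calls hands above {call:.2f}")
```

(Heads-up the choice of equity model barely matters, since two-payout ICM is close to linear in chips; the point here is only the structure of the fictitious play loop.)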