Quote:
You are right, I have my doubts on this, but I also understand when you say "there is no way around it other than solving from preflop starting ranges 100%", which might not be feasible
It's feasible for HU situations, and in fact our edge version does that, although it solves on a subset of flops, as trees with all 1755 flops are just too big (even using 40-100 flop subsets you are looking at at least 64GB of RAM for practical cases and 128GB or more for big trees with many options).
Quote:
I still have some doubts on this, sorry to be pedantic, but are you sure about it?
It's ok to be pedantic about definitions
Quote:
I thought equilibrium just ensures minimizing the opponent's expectancy rather than maximizing hero's
We are maximizing our expectancy assuming the opponent is going to exploit us, and the opponent is maximizing their expectancy assuming we are going to exploit them. In a perfect equilibrium our strategy is also the MES vs their strategy and their strategy is the MES vs ours. It has to be the case, because if there were some other strategy for us to choose with higher EV against their strategy, it wouldn't be an equilibrium by definition (the definition being that no side can improve by changing their strategy alone).
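As a toy illustration (matching pennies rather than poker; the payoff matrix is just an assumption for the example, since real poker trees are far too big to show here), you can check the "no side can improve alone" property directly:

```python
import numpy as np

# Toy 2x2 zero-sum game (matching pennies); row player's payoffs.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
p_eq = q_eq = np.array([0.5, 0.5])  # the known equilibrium mix

eq_ev = p_eq @ A @ q_eq  # our EV at equilibrium (0 in this game)

# No unilateral deviation helps: against their equilibrium strategy,
# every alternative mix we could switch to does no better.
for x in np.linspace(0.0, 1.0, 11):
    p_dev = np.array([x, 1.0 - x])
    assert p_dev @ A @ q_eq <= eq_ev + 1e-12

print("no profitable deviation found; equilibrium EV =", eq_ev)
```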
Quote:
your statement is only true if the opponent plays Nash himself; otherwise isn't it rather common to have strategies with higher expectancy outside equilibrium?
My statement reads: "assuming we don't know what the opponent is going to do". This means they are free to play max exploit vs us. Consider this process:
We try every possible strategy s1, s2, ..., sn (there are bazillions of them, but don't worry, just assume we can try every single one). We let the opponent play max exploit against each one and see what our EV is. We then choose the one which guarantees us the highest EV. Our opponent does the same, and the resulting pair of strategies is an equilibrium.
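That process can be sketched in a few lines. This is not poker, just an assumed toy zero-sum game (matching pennies) with the strategy space discretized to a grid, but it shows the maximin idea: for every candidate strategy, let the opponent max exploit it, then keep the candidate with the best guaranteed EV:

```python
import numpy as np

# Toy 2x2 zero-sum game (matching pennies); row player's payoffs.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def worst_case_ev(p):
    # The opponent sees our mixed strategy p and plays max exploit:
    # they pick the column that minimizes our EV.
    return (p @ A).min()

# "Try every possible strategy": here, a fine grid over mixes of two actions.
candidates = [np.array([x, 1.0 - x]) for x in np.linspace(0.0, 1.0, 1001)]
best_p = max(candidates, key=worst_case_ev)
best_ev = worst_case_ev(best_p)

print(best_p, best_ev)  # the 50/50 mix, guaranteeing EV 0 (the game value)
```

The 50/50 mix that comes out is exactly the equilibrium strategy of this game: nothing else guarantees as much against a max-exploiting opponent.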
It's easy to prove as well:
1) If we had chosen a different strategy, that would mean that either the opponent could improve (by choosing max exploit vs us) or we could improve by choosing a different strategy (to guarantee a higher payoff).
2) The same reasoning goes for the opponent.
Again, it's important to remember what the definition of equilibrium is. It then becomes obvious that both sides must play max exploit vs each other in that state.
Quote:
If you agree with that, shouldn't your "maximally exploitive strategy" (MES) rather be called a MINIMALLY exploitable strategy?
A max exploitive strategy wins the most against a specific fixed strategy, so the name is just fine. It's true that in equilibrium both strategies are MES's vs each other, although each is just one of many possible MES's.
It would make sense to call an equilibrium strategy a minimally exploitable one, so to speak, but there is already a name for it (equilibrium, or optimal strategy). The term "minimum exploitive strategy" was also recently claimed by one of our competitors to mean something else.
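To make the distinction concrete, here is the same assumed toy game (matching pennies again): an MES is simply a best response to a fixed strategy, and vs the equilibrium mix every strategy is an MES because nothing gains:

```python
import numpy as np

# Toy 2x2 zero-sum game (matching pennies); row player's payoffs.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def max_exploit(q):
    # MES vs a FIXED opponent strategy q: simply take the highest-EV action.
    evs = A @ q
    return int(np.argmax(evs)), float(evs.max())

# Vs an unbalanced opponent (70/30) the MES wins ~0.4 per game...
print(max_exploit(np.array([0.7, 0.3])))
# ...while vs the equilibrium 50/50 mix no action gains: all EVs are 0,
# so every strategy (including the equilibrium one itself) is an MES.
print(max_exploit(np.array([0.5, 0.5])))
```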
Quote:
As an approximation, do you think we are better off giving the opponent a wider range? (i.e. can we calculate whether excluding a hand that might actually be in the range, or including one that isn't, generates a greater distance from Nash?)
It's hard to say whether it's better to err on the tighter or looser side; I guess it depends on how actual games play out. People seem to be way too passive compared to the solver, so my wild guess is that we can play more hands than equilibrium suggests.
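The "distance from Nash" in the question can at least be measured for any candidate strategy: exploitability, i.e. how far below the game value a max-exploiting opponent can push us. A toy sketch in the same assumed matching-pennies game (a real solver would compute this over full ranges and trees, not a 2x2 matrix):

```python
import numpy as np

# Toy 2x2 zero-sum game (matching pennies); row player's payoffs.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
GAME_VALUE = 0.0  # our EV when both sides play equilibrium

def exploitability(p):
    # How far below the game value a max-exploiting opponent can push
    # our EV: 0 at equilibrium, growing as we drift away from it.
    return GAME_VALUE - (p @ A).min()

# Compare candidate approximations on either side of the 50/50 equilibrium:
print(exploitability(np.array([0.6, 0.4])))  # ~0.2
print(exploitability(np.array([0.4, 0.6])))  # ~0.2
print(exploitability(np.array([0.5, 0.5])))  # 0.0 at equilibrium
```

In principle the same measurement answers the range question: solve with each candidate range assumption and compare the exploitability of the resulting strategies.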