COTM: A Crash Course in Game Theory

11-15-2014 , 03:59 AM
These last few posts have been great contributions, even the ones I don't agree with. I'm going to try to get to everything eventually, but for now let me address this:

Quote:
Originally Posted by Bluegrassplayer
I don't get why you don't think [GTO] exists multiway. It is certainly harder because there are more variables, but that doesn't make it impossible. No one has solved gto for hu yet because it's really really really hard, but you seem to think it exists.
Answered basically perfectly right here:

Quote:
Originally Posted by AsianNit
CallMeVernon said "probably doesn't exist", not "definitely doesn't exist".

I think it's been mathematically proven that there are certain types of two-player games that are guaranteed to have a solution and that poker fits the general requirements of those games. I think it's also been proven that certain multi-player games aren't guaranteed to have a solution. Chen/Ankenman seem to think that the presence of implicit alliances make it unlikely to be able to figure out any optimal strategy for multi-way poker.
The bolded is the answer. At least one Nash equilibrium is mathematically guaranteed to exist heads-up, even though we haven't explicitly solved for it yet. On the other hand, multiplayer games do not necessarily have Nash equilibria. (There is no theorem that they can't, but they don't have to.) There are at least 2 main reasons for this:

1) The presence of multiple players means some players can cooperate with each other at the expense of others. (So AsianNit, this is not just what "Chen/Ankenman seem to think"--it's actually a totally standard, well-known fact in game theory.)

2) This ties into why I wanted to show an example in my OP of how to compute a Nash equilibrium for a 2-player game by applying the indifference principle to the other player's strategies. There is a theorem that that method always produces a Nash equilibrium in a 2-player game. However, when you have more than 2 players, you have a massive problem--more than just "there are more variables", and also far more fundamental than "all strategies are iteratively dominated" (which thetruewheel alluded to, and which doesn't actually make sense). The problem is that you can only apply the indifference principle to one player at a time. And it's very easy to imagine a 3-way game where applying the indifference principle to different players yields different results, and that kills any chance of a Nash equilibrium existing, because SOMEONE always has an incentive to change strategies.

I can't 100% prove that this is the case, but my opinion is that #2 should stop GTO poker from existing multiway, even if we are not considering the possibility of softplaying (as in #1).

Last edited by CallMeVernon; 11-15-2014 at 04:09 AM.
11-15-2014 , 03:54 PM
Quote:
Originally Posted by pokerodox
Learning the NLHE GTO plays (a continuous and complex range of actions) will teach us what is exploitable, and probably teach us a lot about how to do that exploiting. The bluff/value bet ratio is the perfect example. Let's say we observe V1 betting pot otr with a ratio of value to bluffs of 6:1. He isn't bluffing enough, so we can always fold with our bluff catchers. Then we see V2 with a ratio of 3:1. He is bluffing too much, so we can always call with our bluff catchers. Right? No, wrong. The correct ratio where we are indifferent is 2:1. At that ratio, we can do either. If the ratio is 1.5:1 (or any ratio less than 2:1), now we can always call with our bluff catchers. But notice, for those of you that already knew the correct ratio, how you reacted when I said we can always call if V has a 3:1 ratio. I was very wrong, and it would be a costly mistake. Knowing GTO in that case was helpful.
To quote my favorite poker book out of context: "It sounds right, but actually it's not."

Knowing "GTO" in this case (I'll explain below why "GTO" is in quotes) is actually not helpful at all. We could have come up with the correct answer without knowing any of that stuff. Do you see how? All we have to do is look at our pot odds.

If we are facing a pot-sized bet, we're getting 2:1 on the river. That means we need to be good one-third of the time. That is already enough information to know that if Villain's range is one-third bluffs, we break even on a call, and if it is any more, we should call, but if it is any less, we should fold. No "advanced" analysis is necessary. If Villain bluffs only 25% of the time, we do not care that he is deviating from "GTO"--we only care that our odds of winning the hand are longer than our pot odds, so we fold.
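That pot-odds arithmetic is easy to sanity-check with a few lines of Python (a toy sketch; the function name and numbers are just illustrative):

```python
# Toy bluff-catching EV: pot = 1, Villain bets pot (1), so we call 1 to win 2.
def call_ev(bluff_freq, pot=1.0, bet=1.0):
    """EV of calling: win pot + bet vs a bluff, lose bet vs a value hand."""
    return bluff_freq * (pot + bet) - (1 - bluff_freq) * bet

print(call_ev(1 / 3))   # ~0: break-even when one-third of the range is bluffs
print(call_ev(0.25))    # negative (Villain at 3:1 value:bluff), so we fold
print(call_ev(0.40))    # positive, so we call
```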

Quote:
Next I want to know how to apply GTO on multiple streets. I have read (I think parallelflux said it in duke's PG&C) that the proper ratio otf is 1:2, then 1:1 ott, then 2:1 otr, for the purpose of leading up to 2:1 otr. I don't know if that is right. But I would like to know, for two reasons: (1) the reason I just gave, about better understanding what is exploitable, and how to exploit it, and (2) the reason Garick gave above, being able to revert to GTO against a V you suspect is capable of and is exploiting H.
This is actually pretty easy to check! So easy, in fact, that I'll show you how to do it right now.

We're trying to check if 1:2, 1:1, 2:1 makes an opponent indifferent to calling down or folding, right? So let's imagine for simplicity that we have a flop pot of 1, and we're up against an opponent whose range consists of 4 hands we beat and 2 hands that beat us. He will triple barrel the 2 value combos and 1 of the bluffs; he will double barrel and give up on 1 of the bluffs; and he will pot the flop and give up on the last 2 bluffs. So on the flop he's betting 2 value/4 bluff; on the turn, 2 value/2 bluff; and on the river, 2 value/1 bluff.

Let's call the EV of a flop fold 0 (we win or lose nothing). If we call down, there are 4 possibilities:

1) 2 times out of 6, we pay off the value hand for 1+3+9 = 13, so our utility in this case is -13.

2) 1 time out of 6, we catch the bluff for 13 plus the original flop pot, so our utility is +14.

3) 1 time out of 6, we catch the bluff for 1+3 plus the original flop pot, and we win a check down on the river, so our utility is +5.

4) 2 times out of 6, we catch the bluff for 1 on the flop, plus the flop pot, and we win a check down, so our utility is +2.

The total EV is (-26/6) + (14/6) + (5/6) + (4/6) = -3/6 = -1/2.

So if our opponent utilizes ratios of 1:2, 1:1, 2:1, with pot-sized bets, we are better off folding on the flop than we are calling down. We are not indifferent.

If you want to check the EV of only calling some of the barrels and not all of them, you can do it a similar way. For example, I think the EV of calling the flop and folding the turn to a second barrel actually is 0, which is the same as folding flop.
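The calldown EVs above can be checked with a short script (a rough sketch of the same toy game; names like `calldown_ev` are just for illustration). It reproduces the -1/2 for calling all three barrels and the 0 for calling the flop only:

```python
# Toy game from above: flop pot = 1, pot-sized bets of 1, 3, 9.
FLOP_POT = 1
BETS = [1, 3, 9]

# Villain's 6 combos as (is_bluff, how many barrels he fires):
RANGE = [(False, 3), (False, 3), (True, 3), (True, 2), (True, 1), (True, 1)]

def calldown_ev(calls):
    """EV of calling the first `calls` barrels, then folding to any further bet."""
    total = 0
    for is_bluff, barrels in RANGE:
        if calls < barrels:
            total += -sum(BETS[:calls])              # we fold before showdown
        elif is_bluff:
            total += FLOP_POT + sum(BETS[:barrels])  # bluff gives up, we win
        else:
            total += -sum(BETS[:barrels])            # we pay off the value hand
    return total / len(RANGE)

print(calldown_ev(3))  # -0.5: calling down loses half the flop pot
print(calldown_ev(1))  # 0.0: call flop, fold turn -- same EV as folding the flop
print(calldown_ev(2))  # -0.5: call two barrels, fold river
```

It also confirms the guess later in this post that calling two barrels is -EV (it comes out to -1/2 as well).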

Quote:
Next I want to know GTO with multiple players
As I said earlier, there's no reason to think this even exists. And it probably doesn't.

For that matter, by the way, I tend to think that these "GTO" exercises in employing the indifference principle don't really help too much. Or rather, they don't really show how to stop people from exploiting you. The reason is that they are all based on a host of assumptions that are basically never valid, either for GTO play or in real life.

Every time I see one of these examples, they are preceded by assumptions like:

"We will bet pot on every street."
"Our opponent will only call or fold every time we bet."

This last assumption is ridiculous because if you think about it, it contradicts itself. Here's why. Let's look at the example I computed above, where we have a flop pot of 1. Against an opponent utilizing the 1:2, 1:1, 2:1 ratios, there is no calling strategy that is +EV. (You'd have to check the EV of calling 2 barrels to confirm this, but I'm sure it's negative.) We might as well fold the flop, for an EV of 0. But if we're folding the flop, our opponent always wins the pot, for an EV of +1. To emphasize: our opponent is +EV in this scenario. So it seems to me like we should not even be assuming that GTO play gets us into this spot to begin with!

Another reason I say it contradicts itself is that these spots always seem to favor the aggressor. In none of these spots is the person doing the calling ever +EV. But the aggressor is always +EV. Well, if the aggressor is always +EV, doesn't that seem to imply that we should question the assumption that two players in a Nash equilibrium would ever call any bet?

So, here are a few questions about GTO (heads-up) poker that as far as I know, we don't have the answer to. They are so far off the scenarios that we usually consider that this is why I think the whole thing is an exercise in futility.

1. If two players are playing heads-up NL and are in a Nash equilibrium, does the button/SB ever do anything other than jam preflop?

2. If two players are playing heads-up NL and are in a Nash equilibrium, does anyone ever call preflop with money behind, or are all flops seen with both players already all-in?

I find it very weird that people are so keen to analyze flop/turn/river from a "GTO" perspective when we cannot even say whether true GTO play ever leads to seeing a flop with money behind, or even whether there is such a thing as GTO in multiway games (this last one is the one that bothers me the most actually).
11-17-2014 , 06:07 AM
Quote:
Originally Posted by CallMeVernon
At least one Nash equilibrium is mathematically guaranteed to exist heads-up, even though we haven't explicitly solved for it yet. On the other hand, multiplayer games do not necessarily have Nash equilibria. (There is no theorem that they can't, but they don't have to.)
Could you elaborate on this please? Iirc the Nash existence proofs work for any finite number of players, so this does not seem right.
11-17-2014 , 02:01 PM
Quote:
Originally Posted by plexiq
Could you elaborate on this please? Iirc the Nash existence proofs work for any finite number of players, so this does not seem right.
I may have misspoken here because of the difference between a Nash equilibrium and a GTO strategy. (In 2-player games they correspond, but I think in multiplayer games they don't.) But things do get messed up with more than 2 players. There is a classic game called Divide the Dollar which provides an example of what I'm talking about.

First let's look at the 2-player rules. You and I are being given a dollar, but we have to agree how it will be split between us. So we each write down an ordered pair where the first coordinate is my payout, and the second coordinate is your payout. If they match, we get what we wrote down; but if they don't match, we each get 0.

For example, if I write (.75, .25), and you write (.75, .25), I'll get 75 cents and you'll get 25 cents.

If you write (.50, .50) and I write (.50, .50), we both get 50 cents.

If you write (.25, .75) and I write (.75, .25), we both get 0.

This game obviously has a Nash equilibrium. But at first glance it's not obvious what it is. You might think that (.50, .50) is the obvious Nash equilibrium. And you'd be partially right--that is a Nash equilibrium. In fact, any strategy where we match is a Nash equilibrium. If I write (.99, .01), your optimal strategy is to take your penny, because if you don't you get nothing. Similarly if you write (.01, .99), my optimal strategy is to take my penny, because if I don't I get nothing.

In other words, if theoretically there was one time where we both wrote (.99, .01), that is a Nash equilibrium because neither of us has an incentive to deviate from that. Now, by the way, if we played this game repeatedly, there's basically no chance we'd arrive at that equilibrium--and also, there's no guarantee that we'd ever arrive at any equilibrium since both of us might greedily stick with different equilibrium strategies hoping the other one conforms to the one we want. Are you reading this, luvinurmoney?

Now let's make it even worse by expanding to 3 players. A, B, and C are playing Divide Three Dollars (just so the amount is divisible by 3). Each one writes down an ordered triple: (A's payout, B's payout, C's payout). This time, though, let's change the rules so that instead of everyone having to match, only two players' entries have to match. And when they do, all 3 players get the payouts of the majority vote.

Now let's suppose that the first time they played this game, all 3 players wrote (1, 1, 1). So they're getting paid out 1 dollar each. Does any player have an incentive to change strategies?

The answer is actually yes! And here's why. Suppose we are playing this game repeatedly, and let's say Player A unilaterally decides to start writing (1.50, 1.50, 0). If the other two players don't change, Player A still gets paid his dollar. So basically player A is freerolling and giving B an incentive to freeze C out and make more for the two of them. Now Player B has an incentive to copy A's strategy because it makes him an extra 50 cents.

So is (1.50, 1.50, 0) an equilibrium? Obviously it isn't, because now C has an incentive to change. He can start writing, for example, (1.75, 0, 1.25), giving A an incentive to change again and freeze B out. (And he's freerolling because he's already getting nothing.) So now A has an incentive to change.

But now B, getting frozen out, has an incentive to change as well--if he writes (0, 1.50, 1.50), he gives C an incentive to change strategies and freeze A out.

This goes on forever. At no point does the game settle into a state where no player has an incentive to change strategies.

This is one of the examples I had in mind when saying that multi-way Nash equilibria are not guaranteed to exist.

Now this is where I have to qualify that my expertise as a mathematician is not in game theory (which I warned everyone about in the OP). I have read that there actually is a Nash equilibrium for this game. For example, I have seen it claimed that (1, 1, 1) is one (and by extension any payout that gives everyone more than 0 should also be one). I think the reason is probably that if all 3 players ever agreed on a payout, no one player could unilaterally change and be guaranteed that he's the only one changing, so by changing he risks a 0 payout. Maybe. But this doesn't totally make sense to me for a couple of reasons. One is that I thought the definition of a Nash equilibrium was supposed to be that if one player knew what everyone else was doing, he'd have no incentive not to play his own equilibrium strategy--and that's not the case in this game. Another is that there is no guarantee that we start out with all players agreeing. I just assumed that to be true, but really there seems to be no reason it should be.
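For what it's worth, the claim that (1, 1, 1) is a Nash equilibrium of Divide Three Dollars can be checked by brute force--under the majority-match rules above, the other two ballots still match, so a unilateral deviator keeps getting the majority's split and never beats $1 (a toy sketch; the grid of 50-cent splits is just illustrative):

```python
from itertools import product

def payouts(a, b, c):
    """Majority rule: if any two ballots match, everyone gets that split; else all get 0."""
    for x, y in ((a, b), (a, c), (b, c)):
        if x == y:
            return x
    return (0, 0, 0)

base = (1, 1, 1)
# All splits of $3 in 50-cent increments, as candidate deviations.
grid = [(x / 2, y / 2, z / 2)
        for x, y, z in product(range(7), repeat=3) if x + y + z == 6]

for player in range(3):
    for dev in grid:
        ballots = [base, base, base]
        ballots[player] = dev
        # The other two ballots still match, so the deviator gets the majority split.
        assert payouts(*ballots)[player] <= 1
print("No unilateral deviation from (1, 1, 1) beats $1")
```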
11-17-2014 , 02:17 PM
Quote:
Originally Posted by SABR42
I don't see how this OP relates to poker in any way.
IF this is nothing more than a GTO containment thread that eliminates the term from 10 HH's a day, then i'd say it's a success.

If you're right and it has nothing to do with poker, then hashing that fact out has a lot of value for those who are misled about GTO. Namely, that it is a golden ticket to wealth.

You guys lost me about 30 posts ago, but it's still pretty fun to think about.

Fun? We're all pretty hopeless.
11-17-2014 , 02:27 PM
Quote:
Originally Posted by spikeraw22
You guys lost me about 30 posts ago, but it's still pretty fun to think about.
+1

Quote:
Fun? We're all pretty hopeless.
+111
11-17-2014 , 02:47 PM
Quote:
Originally Posted by spikeraw22
IF this is nothing more than a GTO containment thread that eliminates the term from 10 HH's a day, then i'd say it's a success.
That was actually what I had in mind when I offered to make this thread. The term is misused on this forum, often ridiculously so, and sometimes to the point that it detracts from other threads.
11-17-2014 , 02:55 PM
Sorry, it was not clear to me that you were strictly talking about the repeated version of the game. (That seems like a wildly non-standard thing to do in the context of GTO/NE discussions.)

So you do agree that a Nash Equilibrium exists for the non-repeated version of 3+ player poker?
11-17-2014 , 03:04 PM
Quote:
Originally Posted by plexiq
Sorry, it was not clear to me that you were strictly talking about the repeated version of the game. (That seems like a wildly non-standard thing to do in the context of GTO/NE discussions.)
Why is this a non-standard thing to do? It seems to me like a natural extension of talking about games played once. If a game has a Nash equilibrium, it should not change when the game is played multiple times. Otherwise it cannot reasonably be called an equilibrium.

EDIT: Also, the non-repeated version of poker isn't worth discussing because it's actually not a fair game. Imagine you were playing heads-up and you were only going to play 1 hand where your opponent had the button that hand. Would you play? (I suppose it remains fair if you deal for the button after agreeing to play, but I think you see my point.)

Last edited by CallMeVernon; 11-17-2014 at 03:10 PM.
11-17-2014 , 03:22 PM
"Non-standard" in the sense that the vast majority of poker related research and GTO/NE discussions on here deal with the non-repeated game. We are adding huge complexity to a game that we can't even solve in the non-repeated version.

The definition of a Nash equilibrium only requires that if all but one player locks in their part of the NE strategy, the remaining player cannot unilaterally improve his EV by playing anything different.

This is clearly the case for the "Divide Three Dollars" game at e.g. (1,1,1). As far as I can tell, one player being able to create incentives for others to leave the NE in the repeated game does NOT violate the NE requirements.
11-17-2014 , 03:23 PM
Quote:
Originally Posted by CallMeVernon
EDIT: Also, the non-repeated version of poker isn't worth discussing because it's actually not a fair game. Imagine you were playing heads-up and you were only going to play 1 hand where your opponent had the button that hand.
This scenario sounds a bit like Bovada's Zone Poker (their version of Zoom, with anonymous tables). And I've heard people say that without any prior or consequential dynamics/image, you want to focus on making sure your default lines are as strong as possible for these games since our knowledge of villain's tendencies will usually be a blank slate.
11-17-2014 , 03:39 PM
Quote:
Originally Posted by plexiq
"Non-standard" in the sense that the vast majority of poker related research and GTO/NE discussions on here deal with the non-repeated game. We are adding huge complexity to a game that we can't even solve in the non-repeated version.
Part of the reason why I felt this thread was necessary is that the discussions on here are often not reasonable from a game theory perspective. Considering the repeated vs. non-repeated game shouldn't actually add complexity to the solution.

Quote:
Definition of the Nash Equilibrium only requires that if all but one players lock in their part of the NE strategy, the remaining player can not unilaterally improve his EV by playing anything different.

This is clearly the case for "Divide Three Dollars" game at e.g. (1,1,1) [This is true for any outcome the way you've defined a Nash equilibrium]. As far as i can tell, one player being able to create incentives for others to leave the NE in the repeated game does NOT violate the NE requirements.
This is a good point and is probably why the multi-player DTD game is considered to have a Nash equilibrium. However, the presence of the implicit negotiation I described is still a huge wrinkle when compared to a 2-player game.

For example, there is a widely held perception that if you are in a Nash equilibrium, all you have to do is continue playing your Nash equilibrium strategy and your EV can't go down regardless of what the other players do. This game is a clear counterexample--you could play (1, 1, 1) repeatedly and if you did that your EV would eventually drop to 0 if the other players were playing optimally. In a 2-player game nothing like this ever happens.

Last edited by CallMeVernon; 11-17-2014 at 03:57 PM.
11-17-2014 , 04:24 PM
Quote:
Originally Posted by CallMeVernon
Part of the reason why I felt this thread was necessary is that the discussions on here are often not reasonable from a game theory perspective. Considering the repeated vs. non-repeated game shouldn't actually add complexity to the solution.
Well, the following is just my naive understanding of repeated games:
Conceptually, I think the repeated version means that the history of all previously played rounds becomes part of the current round's game state/information.

Obviously, allowing players to make their actions dependent on the history of previous rounds drastically increases the complexity / strategy space. (Compare the number of distinct pure strategies in the Prisoner's Dilemma in the repeated vs. non-repeated version.)
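That blowup is easy to illustrate with a back-of-the-envelope count, assuming perfect monitoring (both players see the full history, and each round has 4 possible outcomes):

```python
# Rough count of pure strategies in the n-round Prisoner's Dilemma,
# assuming both players observe the full history of past rounds.
def pure_strategies(rounds):
    # Before round t there are 4**t possible histories (4 outcomes per round);
    # a pure strategy picks one of 2 actions at every decision point.
    decision_points = sum(4 ** t for t in range(rounds))
    return 2 ** decision_points

print(pure_strategies(1))  # 2 pure strategies in the one-shot game
print(pure_strategies(2))  # 32
print(pure_strategies(3))  # 2097152
```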

Quote:
For example, there is a widely held perception that if you are in a Nash equilibrium, all you have to do is continue playing your Nash equilibrium strategy and your EV can't go down regardless of what the other players do. This game is a clear counterexample--you could play (1, 1, 1) repeatedly and if you did that your EV would eventually drop to 0 if the other players were playing optimally. In a 2-player game nothing like this ever happens.
Idk how common this perception is. At least in the tournament case, I believe it is widely understood that even a single player deviating can cause some other players following NE strategies to lose EV.

I completely agree that the repeated version can be interesting. Just thought that the distinction of repeated vs non-repeated was not very clear itt and that this may actually confuse some users even more.

Fwiw, I actually posted a somewhat related toy game a while back where "good" repeated play would likely differ from the NE:
http://forumserver.twoplustwo.com/15...-game-1394693/
11-17-2014 , 04:46 PM
Quote:
Originally Posted by plexiq
Idk how common this perception is.
In *this* forum, it often appears to be very common. This thread isn't in Poker Theory for a reason.
11-17-2014 , 05:09 PM
Hehe, fair enough. To be perfectly honest, I didn't realize what sub-forum this was posted in until just now.
11-17-2014 , 09:40 PM
Someone just sent you a link, eh?

Nice to have someone with some Game Theory background ITT, even if it does take this thread farther away from using the theory in LLSNL. It does a lot of good explaining to folks why it's not Super-poker after all.
11-17-2014 , 10:45 PM
Probably just looking at "New Posts".
11-17-2014 , 11:39 PM
Actually just stumbled upon the thread because #68/#71 link to my site.

I have skimmed most of the thread now, and I think the following wasn't really mentioned yet:
The rock/paper/scissors game is commonly used as an example, but it is somewhat misleading. The Nash solution in RPS not only guarantees at least 0EV, it actually has exactly 0EV against every possible strategy. This is NOT a general property of a NE.
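The RPS claim is straightforward to verify numerically (a toy sketch; the payoff matrix is for the row player, and the opponent mixes are arbitrary examples):

```python
# Row player's RPS payoffs: rows and columns are (rock, paper, scissors).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def ev(row_mix, col_mix):
    """Expected payoff of row_mix against col_mix."""
    return sum(row_mix[i] * col_mix[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

gto = [1 / 3, 1 / 3, 1 / 3]
for opp in ([1, 0, 0], [0, 1, 0], [0.2, 0.5, 0.3], [0.6, 0.1, 0.3]):
    assert abs(ev(gto, opp)) < 1e-12   # exactly 0 EV, whatever Villain does
print("Uniform RPS earns 0 against every strategy")
```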

In the poker-related games we can actually solve, there seems to be very little room for deviations from Nash without giving up EV.

For example, at most stack sizes of the heads-up push/fold game, there is only a single hand per range that is indifferent and played with a mixed strategy in the NE. This means that in order to not lose to a Nash player in that game, you can only freely choose the strategy of a single hand, and need to play the remaining 168/169 hands the same as Nash.
11-18-2014 , 03:38 AM
Quote:
Originally Posted by plexiq
The Nash solution in RPS not only guarantees at least 0EV, it actually has exactly 0EV against every possible strategy. This is NOT a general property of a NE.
That is why I started the thread by giving examples of one game that has that property, and others where you cannot deviate from the NE without losing EV (like the Prisoner's Dilemma). The point I'm trying to make in this thread, to other readers of this forum, is exactly what I think you're also saying: that very often, people's uninformed intuition about what the general properties of a Nash equilibrium should be is wrong.

EDIT: Also, if you only skimmed the thread, you may have missed post #27 where I pre-emptively addressed what you're talking about.

Last edited by CallMeVernon; 11-18-2014 at 03:46 AM.
11-18-2014 , 05:45 AM
https://www.youtube.com/watch?v=iMlc...ature=youtu.be

This might help some of you guys. The guy who made the vid and created GTORB is Alex Sutherland, who has quite the background in GTO.
11-18-2014 , 11:00 AM
Quote:
Originally Posted by CallMeVernon
That is why I started the thread by giving examples of one game that has that property, and others where you cannot deviate from the NE without losing EV (like the Prisoner's Dilemma). The point I'm trying to make in this thread, to other readers of this forum, is exactly what I think you're also saying: that very often, people's uninformed intuition about the what the general properties of a Nash equilibrium should be are wrong.

EDIT: Also, if you only skimmed the thread, you may have missed post #27 where I pre-emptively addressed what you're talking about.
#27 was part of the reason I posted this. I thought that series of posts gave the impression that it would be somewhat feasible to break even against GTO by playing a solid "regular" game. For the poker-related games we can actually solve, it looks like this is very far from the truth.

Quote:
Any strategy that is included in a GTO strategy is not dominated, and any strategy that is not included in a GTO strategy is iteratively dominated.
Quote:
It ought to be clear from the previous section that if both players play RPS using a GTO strategy, their EV will be 0. However, one extremely common misperception about GTO strategy is that if you play GTO, and your opponent doesn’t, you will now start to show a positive EV. The calculations in the previous section disprove that notion—if you stick to a GTO strategy, your EV will stay at 0 no matter what your opponent does**. Your opponent could switch to a strategy that is 100% R—about as different from GTO as you could get—and if you continued playing GTO, your EV against that strategy would still be 0, just the same as if he were playing GTO.

(**Note that this is not always true for every game and every strategy. However, it is always true for every zero-sum fair game if your opponent sticks to strategies that are not dominated. In RPS, no strategy is dominated. But imagine if we added a 4th option to the game; call it Pebble. Pebble beats scissors and loses to paper, but it also loses to rock. Pebble is dominated by rock. Now a GTO strategy would be 1/3 rock, 1/3 paper, 1/3 scissors, and no Pebble. Throwing rock, paper, or scissors would be 0EV against the GTO strategy, but if you played GTO and your opponent played any strategy that included Pebble, you would now be +EV.)
Bolded parts are not correct, see game below.

Code:
  R/C	  D	  E	  F
A	 2,-2	-1, 1	-2, 2
B	 1,-1	 0, 0	 1,-1
C	-2, 2	-1, 1	 2,-2
The NE is at (B,E) and any deviation will result in a loss for the deviating player. None of the strategies is dominated.
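A quick script confirms that at (B, E) every unilateral pure deviation strictly loses for the deviator (a sketch; the `ROW` dictionary just encodes the row player's payoffs from the table above, with the column player getting the negative):

```python
# Row player's payoffs in the zero-sum game above (column player gets the negative).
ROW = {("A", "D"): 2, ("A", "E"): -1, ("A", "F"): -2,
       ("B", "D"): 1, ("B", "E"): 0, ("B", "F"): 1,
       ("C", "D"): -2, ("C", "E"): -1, ("C", "F"): 2}

# At (B, E), every unilateral pure deviation strictly loses for the deviator:
for r in ("A", "C"):
    assert ROW[(r, "E")] < ROW[("B", "E")]      # row deviations lose
for c in ("D", "F"):
    assert -ROW[("B", c)] < -ROW[("B", "E")]    # column deviations lose
print("(B, E) is a strict Nash equilibrium")
```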
11-18-2014 , 02:17 PM
In the game you just posted, A/C/D/F function as iteratively dominated (though not strictly dominated) strategies, in the sense that if you try to create a mixed strategy of A/B/C that makes your opponent indifferent to playing D, E, or F, you get not "play 100% B" but a contradiction in the equations. (What is interesting, though, is that you can't use a reduction of the game to prove that they are iteratively dominated; you have to use the fact that the indifference principle creates a contradiction! I had never seen that before.)

You are right, though, that I have been a little loose about my use of "dominated" versus "iteratively dominated" in this thread, and that technically the line in the OP is not 100% correct.
11-18-2014 , 02:37 PM
Could you please define what exactly you mean by "iteratively dominated"? (Sorry if you already did somewhere earlier in this thread; I could not find it.)

I assumed you mean strategies that can be removed by iterated elimination of dominated strategies, but strategies A/C/D/F do not fit that definition.
11-18-2014 , 03:25 PM
You know, to be honest, until a couple of posts ago, I would have agreed with your definition, but now I wonder if that's the "real" definition or if there's a better one. I certainly want A/C/D/F to be classified as iteratively dominated strategies in that game, since that's sort of how they function. But clearly we agree that they are not removed by any game reduction.

Maybe this will clarify: how do you prove that (B,E) is the unique Nash equilibrium of that game? (I am asking to see if it is different from my proof.)
11-18-2014 , 03:55 PM
I'm honestly not exactly sure what you want to express with the term. Do you basically just mean every strategy that is not played with non-zero frequency in any NE? In that case I'd leave the term "dominance" out of this completely.

Re: Iterated Dominance, check the link at 3.1/3.2:
http://web.stanford.edu/~jdlevin/Eco...20Concepts.pdf

Quote:
Maybe this will clarify: how do you prove that (B,E) is the unique Nash equilibrium of that game? (I am asking to see if it is different from my proof.)
We know one NE at (B, E) has payoffs (0, 0), and the game is zero-sum. It follows directly that any GTO strategy of the column player needs to have a payoff of 0 against B, and any GTO strategy of the row player needs to have a payoff of 0 against E.

Since A/C have a negative payoff against E and D/F have a negative payoff against B and no strategy has a positive payoff in these cases, we can't achieve the required 0-payoff if they are played with a non-zero frequency.