Extracting irl morality solutions from solving games...

05-29-2013 , 05:21 PM
I think the idea OP is aiming at is discussed in this Wiki article:

Superrationality - Wiki
From Link:
Quote:
The concept of superrationality (or renormalized rationality) was coined by Douglas Hofstadter, in his article series and book Metamagical Themas.[1] Superrationality represents an alternative type of rational decision making different from the widely-accepted game-theoretic one, since a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a game-theoretically rational player will defect. Superrationality is not a mainstream model within game theory.

I tried to promote this idea a while back in SMP with a situation I believe I called "The Parasite Problem". It's really a souped up version of the PD, but with a community of players, which I thought gave it a different feel.

The setup is, you are 1 of 100 players. The most important fact here is that you are all equally optimally most rational as possible And you all know that fact And you all know that you all know. You must all choose independently and in secret from each other whether you want to be a Contributor or a Parasite. There will be a community treasury with funds provided by an outside source according to the rule: $1000 added for each Contributor and $2000 subtracted for each Parasite. After everyone has chosen, the treasury is divided among the 100 players according to 1 share to each Contributor and 2 shares to each Parasite. The treasury can go negative, in which case players must pay the outside source according to their number of shares as well.

The question then is, what is your optimal most rational choice knowing that all the other players will also be coming to the optimal most rational choice?

The Nash Equilibrium choice for an individual is probabilistic, where the individual flips a weighted coin so as to Contribute with 2/3 chance and Parasite with 1/3 chance. If everyone does this then the game is 0 EV. The expected treasury will be 0.

But people can also recognize that if everyone Contributes then everyone makes $1000. Surely that's a more rational choice under the most important fact of the setup: that everyone is equally as rational as possible and everyone knows that everyone knows that fact. Those who recognize this and choose to Contribute - which should be everyone under the assumption - are being superrational as described in the Wiki article. In a group of 100 who are free to be superrational knowing the important fact, they all make $1000. In another group where everyone is "Nash" rational, the EV is that they all go home with nothing.
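For anyone who wants to check the numbers, here's a quick simulation sketch (Python; the names are just mine for illustration, assuming I've stated the rules right). It plays the game under the 2/3-1/3 mix and under all-Contribute:

Code:
import random

N_PLAYERS = 100
TRIALS = 100_000

def per_share_payout(p_contribute):
    # Each player independently Contributes with probability p_contribute,
    # otherwise plays Parasite.
    contributors = sum(random.random() < p_contribute for _ in range(N_PLAYERS))
    parasites = N_PLAYERS - contributors
    treasury = 1000 * contributors - 2000 * parasites
    shares = contributors + 2 * parasites  # 1 share per Contributor, 2 per Parasite
    return treasury / shares               # negative means players pay out

mixed = sum(per_share_payout(2 / 3) for _ in range(TRIALS)) / TRIALS
print(f"avg per-share payout under the 2/3-1/3 mix: ${mixed:,.2f}")  # hovers near $0
print(f"per-share payout if all Contribute: ${per_share_payout(1.0):,.2f}")  # $1,000.00

The mixed-strategy average comes out near $0 per share, while all-Contribute pays exactly $1000 each, matching the two cases above.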

You can argue that in the superrational case, someone might go a step further and think that everyone else will be superrational so he might as well decide to be a Parasite and go home with close to $2000. That thought would likely cross his mind, but then being logical he would realize that if he did so it would contradict the most important fact about the setup. So he doesn't do it.

Like I said, this is really a souped up version of the PD and the same argument would hold in the PD for cooperating under the "most important fact" assumption. However, I like the feel that the community gives it. Plus I thought it up myself.


PairTheBoard

Last edited by PairTheBoard; 05-29-2013 at 05:29 PM.
05-29-2013 , 05:28 PM
Quote:
Originally Posted by nek777

I would say that more than just making the math more complicated, imperfect information really makes the math somewhat irrelevant.

That is, we would seem to need perfect information, some perfect theory to derive all the information needed for the perfect mathematics.

Without some way of deriving the information we need, it would seem that we are just making a calculated intuitive guess.

We can definitely extract solutions to irl problems with incomplete information. It's incomplete, but if, for example, both parties have the same limited access to information, then the correct answer isn't necessarily achievable; the best possible answer given the info becomes the edge.

But when we use this to model big things we have to remember the assumptions that come with it, namely survival of the fittest, which may not be correct for this plane.
05-29-2013 , 05:29 PM
Quote:
Originally Posted by newguy1234
What you have to show to disprove this is that there can be violence, torture, suppression, war etc. when all parties act for the group (therefore no defectors).
No, you would just need to show that some members profit (whether or not at the expense of others) from deviating from peace, which is much more obvious than your claim to the contrary.
05-29-2013 , 05:43 PM
Quote:
Originally Posted by PairTheBoard
The treasury can go negative, in which case players must pay the outside source according to their number of shares as well.
It's really funny; I swear I am smart in some way, but when we introduce 'outside' intervention my brain shuts down and I can never make sense of it. It doesn't matter if it's hypothetical, government-stimulated, or some business merger or whatever.

Quote:
The question then is, what is your optimal most rational choice knowing that all the other players will also be coming to the optimal most rational choice?
This is close and worth discussing in this same thread, but there are things to clarify about this question. My main argument is that we cannot compare it to real life, because in real life the people could change things so that they share the treasury (I'm sure you agree it's not a 100% correlation).

Quote:
The Nash Equilibrium choice for an individual is probabilistic, where the individual flips a weighted coin so as to Contribute with 2/3 chance and Parasite with 1/3 chance. If everyone does this then the game is 0 EV. The expected treasury will be 0.
Yes, we expect no gain, but irl, since the treasury keeps its money, we might consider that +EV compared to the war and suppression created by greed. I'm not sure I'm suggesting communism either, though.

Quote:
But people can also recognize that if everyone Contributes then everyone makes $1000. Surely that's a more rational choice under the most important fact of the setup: that everyone is equally as rational as possible and everyone knows that everyone knows that fact. Those who recognize this and choose to Contribute - which should be everyone under the assumption - are being superrational as described in the Wiki article. In a group of 100 who are free to be superrational knowing the important fact, they all make $1000. In another group where everyone is "Nash" rational, the EV is that they all go home with nothing.
I also think superrationality should include mind reading, much like everyone colluding to bring about the best result, but I think it should also include everyone sharing the EV.
Quote:
You can argue that in the superrational case, someone might go a step further and think that everyone else will be superrational so he might as well decide to be a Parasite and go home with close to $2000. That thought would likely cross his mind, but then being logical he would realize that if he did so it would contradict the most important fact about the setup. So he doesn't do it.
So this is slightly different, because under my 'strategy' one sees everyone as the whole, and that would never cross their mind.

Quote:
Like I said, this is really a souped up version of the PD and the same argument would hold in the PD for cooperating under the "most important fact" assumption. However, I like the feel that the community gives it. Plus I thought it up myself.
I think it's a great thing to add to all this. Link to the thread?
05-29-2013 , 05:45 PM
Quote:
Originally Posted by RollWave
no, would just need to show that some members profit (whether or not at the expense of others) from deviation of peace, which is much more obvious than your claim to the contrary.
Yes, and I'm claiming it's impossible to prove without first assuming the players are not on the same team.
05-29-2013 , 05:59 PM
Quote:
Originally Posted by newguy1234
We can definitely extract solutions to irl problems with incomplete information. It's incomplete, but if, for example, both parties have the same limited access to information, then the correct answer isn't necessarily achievable; the best possible answer given the info becomes the edge.

But when we use this to model big things we have to remember the assumptions that come with it, namely survival of the fittest, which may not be correct for this plane.
Not just incomplete information, but imperfect information.

As you say, we should be aware of the assumptions - by imperfect information I mean that the underlying assumptions are possibly wrong or at least not the best.
05-29-2013 , 06:04 PM
Quote:
Originally Posted by nek777
Not just incomplete information, but imperfect information.

As you say, we should be aware of the assumptions - by imperfect information I mean that the underlying assumptions are possibly wrong or at least not the best.
Ah right, yes agreed, I wondered why I felt weird explaining incomplete info.
05-29-2013 , 06:55 PM
Quote:
Originally Posted by newguy1234
<snip>
Living in a peaceful world is far greater than maximizing one's own poker winnings. I'm not telling you what to do, but I'm suggesting that it's an obvious fact.
You're confusing two different games here. Maximizing your poker winnings doesn't affect world peace.
05-29-2013 , 06:57 PM
Quote:
Originally Posted by PairTheBoard
~
Are you a one-boxer in Newcomb's problem?
05-29-2013 , 07:16 PM
Quote:
Originally Posted by nek777
Well, more than likely, I am already overextended in my knowledge and abilities, but I am going to soldier on....

In physics, would it be necessary to at least have the "part we are interested in" have some predictive ability without idealizing away the extraneous?

I do see some utility in these "games", but it seems there has to be some aspect that can function effectively "irl".


I would say that more than just making the math more complicated, imperfect information really makes the math somewhat irrelevant.

That is, we would seem to need perfect information, some perfect theory to derive all the information needed for the perfect mathematics.

Without some way of deriving the information we need, it would seem that we are just making a calculated intuitive guess.


Perhaps I am reading too much into people's statements, but I get a feeling that some may have an absolutist stance that these boil down to some math problem that is solvable - not that I completely disavow the approach. I think it's weighted too heavily at times.
I don't really know what you are trying to say anymore. Why is it necessary to have perfect information to use game-theoretical reasoning in our decision-making? Incidentally, game theory has real-life applications that go well beyond just rational decision-making as well--such as modelling the strategies of evolving populations in biology.
05-29-2013 , 07:29 PM
Quote:
Originally Posted by Original Position
You're confusing two different games here. Maximizing your poker winnings doesn't affect world peace.
I'm not confused.

In a world in which dog eat dog is the correct way to go, maximizing your poker winnings doesn't affect world peace.

What people here like to suggest is that since this world is dog eat dog, we can use optimal poker strategy to make optimal decisions irl.

But dog eat dog is an assumption.

If we don't start with that assumption and make room for 'everyone is on the same team', then playing poker to maximize one's own profits is not optimal.

Optimal would be to collude together in order to bring poker theory to a level in which no one could gain.

The reason this is optimal is that the knowledge (or strategy) that would be created is a living embodiment of how to bring the world to peace.

Where peace is defined as everyone serving the group with no one defecting.
05-29-2013 , 07:40 PM
Quote:
Originally Posted by PairTheBoard
The most important fact here is that you are all equally optimally most rational as possible And you all know that fact And you all know that you all know.
The bolded seems to be implying something like a "Level infinity" reasoning, but it functions like cooperation. So doesn't that contradict the assumption of independent choosing?
05-29-2013 , 07:45 PM
Quote:
Originally Posted by PairTheBoard
I think the idea OP is aiming at is discussed in this Wiki article:

Superrationality - Wiki
From Link:

I tried to promote this idea a while back in SMP with a situation I believe I called "The Parasite Problem". It's really a souped up version of the PD, but with a community of players, which I thought gave it a different feel.
<snip>
Eh, this seems to avoid the really difficult part of the prisoner's dilemma. The advantage of mutual cooperation in the Prisoner's Dilemma is obvious to everyone. The problem is developing a plausible model of rationality where doing so is correct. Here you try to work backwards--you say that cooperating is an instance of "superrationality." Fine. But what is "superrationality"? What principles of rationality are you adding (or subtracting) from the standard account in game theory so that cooperation in a single instance of the Parasite Problem (or Prisoner's Dilemma) is correct?
05-29-2013 , 07:46 PM
Quote:
Originally Posted by newguy1234
I'm not confused.

In a world in which dog eat dog is the correct way to go, maximizing your poker winnings doesn't affect world peace.

What people here like to suggest is that since this world is dog eat dog, we can use optimal poker strategy to make optimal decisions irl.

But dog eat dog is an assumption.

If we don't start with that assumption and make room for 'everyone is on the same team', then playing poker to maximize one's own profits is not optimal.

Optimal would be to collude together in order to bring poker theory to a level in which no one could gain.

The reason this is optimal is that the knowledge (or strategy) that would be created is a living embodiment of how to bring the world to peace.

Where peace is defined as everyone serving the group with no one defecting.
Okay.
05-29-2013 , 07:59 PM
Quote:
Originally Posted by Vael
are you a one boxer in newcomb's problem?
Yes. Yes I am.


PairTheBoard
05-29-2013 , 08:52 PM
Quote:
Originally Posted by PairTheBoard
The most important fact here is that you are all equally optimally most rational as possible And you all know that fact And you all know that you all know.

Quote:
Originally Posted by Aaron W.
The bolded seems to be implying something like a "Level infinity" reasoning, but it functions like cooperation. So doesn't that contradict the assumption of independent choosing?
You can probably get a clearer picture of the idea of superrationality from reading the Wiki I linked to. My presentation may not be so tight. I thought about the "Level infinity" for a moment before writing the above. My thinking was that as one of the 100 players, it's important to my decision to know that we are all equally capable of coming to the most rational decision and will act accordingly. And it's also important for me to know that we are all coming to that rational decision based on the same information. So not only do I need to know that they know we're all equally rational; I need to know that, like I know that fact, they know that everybody else knows that same fact - that we're all equally rational. As far as I can see, that puts us all on an equal information footing. I don't see the need to go to further levels.

Maybe an easier way to set it up would be if we are all in the room together where it's announced that according to perfectly reliable pretesting it has been determined that we are all perfectly and equally rational. We all hear the announcement and all see that everyone has heard it. With the stipulation that it's true and we all believe it and somehow know that we all believe it. Anyway, that was the idea of the way I described the setup.

I also wondered for a brief moment whether the word "independent" was precisely the best one. My thinking was to briefly describe the fact that there would be no communication between players and that nobody would see what other players chose.


PairTheBoard
05-29-2013 , 08:55 PM
Quote:
Originally Posted by Original Position
Okay.
Do you disagree, then, with the suggestion that if the entire world worked together as a team towards gain, each individual would be in a better position than the top individual in a world where everyone views each other as opponents?

I am suggesting every person in the former would have a better life than the best life in the latter.
05-29-2013 , 09:04 PM
Quote:
Originally Posted by PairTheBoard
My thinking was that as one of the 100 players, it's important to my decision to know that we are all equally capable of coming to the most rational decision and will act accordingly. And it's also important for me to know that we are all coming to that rational decision based on the same information. So not only do I need to know that they know we're all equally rational; I need to know that, like I know that fact, they know that everybody else knows that same fact - that we're all equally rational. As far as I can see, that puts us all on an equal information footing. I don't see the need to go to further levels.
All of this same-ness of rationality fits in the usual accounting of game theory. That is, in normal game theory, you're ALSO assuming that everyone is coming to their conclusions in the same manner on the basis of the same information.

The wiki page isn't any clearer on this point.

Quote:
Originally Posted by wiki
Since the superrational player knows that the other superrational player will do the same thing, whatever that might be, there are only two choices for two superrational players.
Again, this really looks like their strategies are not independent.

Furthermore, it's unclear what it would mean for a non-symmetric situation. That is, in the case that the players have different payouts, what does it mean to say that the other superrational player will do "the same thing" if their games are different in some sense?

I think it still looks like you're assuming that everyone will cooperate, but you're just hiding that assumption behind some other words that don't mean anything.
05-29-2013 , 09:16 PM
Quote:
Originally Posted by Original Position
Eh, this seems to avoid the really difficult part of the prisoner's dilemma. The advantage of mutual cooperation in the Prisoner's Dilemma is obvious to everyone. The problem is developing a plausible model of rationality where doing so is correct. Here you try to work backwards--you say that cooperating is an instance of "superrationality." Fine. But what is "superrationality"? What principles of rationality are you adding (or subtracting) from the standard account in game theory so that cooperation in a single instance of the Parasite Problem (or Prisoner's Dilemma) is correct?
As I suggested to Aaron, you can probably get a clearer picture of superrationality by reading the Wiki I linked to. My understanding of it goes like this: Superrationality is the reasoning that provides for the greatest gain for everyone when knowing that everyone will employ that same reasoning.

If you find fault with that, again I suggest looking at how it's done in the Wiki link.

I don't think this notion of superrationality is all that unrealistic irl. Values like Duty, Honor, and Doing the Right Thing oftentimes allow us to function, to a certain extent, on a level of superrationality. For example, if everyone voted based on Sklansky's calculation of the EV of voting, then nobody would vote. But people do vote. Maybe it makes them feel good. But maybe the reason it makes them feel good is because they believe it's their duty as a citizen, or it's just the right thing to do. And the reason they think it's the right thing to do is because they recognize that people voting is something that the country needs done for the benefit of the country.
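As an aside, the usual EV-of-voting argument can be caricatured in a couple of lines (all three numbers below are assumptions I'm making up for illustration - I'm not claiming they're Sklansky's actual figures):

Code:
# Toy EV-of-voting calculation with made-up numbers.
p_pivotal = 1e-7         # assumed chance your single vote decides the election
value_if_decisive = 1e6  # assumed dollar value to you of your side winning
cost_of_voting = 20      # assumed time/travel cost in dollars

ev = p_pivotal * value_if_decisive - cost_of_voting
print(f"EV of voting: ${ev:,.2f}")  # $-19.90, so a pure EV-maximizer stays home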



PairTheBoard
05-29-2013 , 09:43 PM
Quote:
Originally Posted by Aaron W.
I think it still looks like you're assuming that everyone will cooperate, but you're just hiding that assumption behind some other words that don't mean anything.
The majority view probably agrees with you. I'm not so sure though. I'm seeing this more as a criticism of the notion of best reasoning whereby two equally rational people cannot escape the PD choice that screws them both. There just seems to be something lacking in such a notion of "rational" which deems it irrational for both to come to a decision that profits them both instead. If the better outcome results when both players equally employ this alternative reasoning, then maybe the alternative reasoning ought to be given consideration as somehow better than the so-called "best reasoning" that screws them both.



PairTheBoard
05-30-2013 , 12:36 AM
Quote:
Originally Posted by Original Position
I don't really know what you are trying to say anymore. Why is it necessary to have perfect information to use game-theoretical reasoning in our decision-making? Incidentally, game theory has real-life applications that go well beyond just rational decision-making as well--such as modelling the strategies of evolving populations in biology.
Well, the idea of some perfect information is a given in these games, and then we try to extrapolate back into real life.

The problem is there is little perfect information "irl" - especially in terms of morality. I can see how it can be helpful in some situations, but with the world being so imperfect, I think we should be suspicious of any universal or absolute normative claim derived from such games.

Also, there seems to be a rather one-dimensional player of the game. In limiting the "extraneous" factors, perhaps there is an elimination of elements of the player's character that could be relevant to moral decision making. It seems having a player with only a concern for outcomes eliminates the morally relevant "character" of people. For example, one person doesn't cheat because the payout is not high enough; a second person doesn't cheat because he is honest. I think we can judge the morality of these two people differently, but I am not sure how game theory would account for it.
05-30-2013 , 04:19 AM
Quote:
Originally Posted by PairTheBoard
The majority view probably agrees with you. I'm not so sure though. I'm seeing this more as a criticism of the notion of best reasoning whereby two equally rational people cannot escape the PD choice that screws them both. There just seems to be something lacking in such a notion of "rational" which deems it irrational for both to come to a decision that profits them both instead. If the better outcome results when both players equally employ this alternative reasoning, then maybe the alternative reasoning ought to be given consideration as somehow better than the so-called "best reasoning" that screws them both.
This tension is exactly why game theorists are interested in PD: the Nash equilibrium is not Pareto optimal. How is "superrational" different from "Pareto optimal"?
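To make the tension concrete, here's a minimal sketch with a standard illustrative PD payoff table (the numbers are mine, not from the thread), checking which outcomes are Nash equilibria and which are Pareto-dominated:

Code:
from itertools import product

# payoffs[(my_move, their_move)] = my payoff; usual PD ordering T > R > P > S
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def is_nash(a, b):
    # Neither player can gain by unilaterally switching their own move.
    return (payoffs[(a, b)] == max(payoffs[(x, b)] for x in "CD") and
            payoffs[(b, a)] == max(payoffs[(y, a)] for y in "CD"))

def pareto_dominated(a, b):
    # Some other outcome leaves neither player worse off and differs,
    # i.e. at least one player is strictly better off.
    mine, theirs = payoffs[(a, b)], payoffs[(b, a)]
    return any(payoffs[(x, y)] >= mine and payoffs[(y, x)] >= theirs and
               (payoffs[(x, y)], payoffs[(y, x)]) != (mine, theirs)
               for x, y in product("CD", repeat=2))

for a, b in product("CD", repeat=2):
    print(a, b, "Nash" if is_nash(a, b) else "-",
          "Pareto-dominated" if pareto_dominated(a, b) else "Pareto optimal")

Only (D, D) comes out as a Nash equilibrium, and it is the one outcome that is Pareto-dominated (by (C, C)).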
05-30-2013 , 10:41 AM
Quote:
Originally Posted by zumby
This tension is exactly why game theorists are interested in PD: the Nash equilibrium is not Pareto optimal. How is "superrational" different from "Pareto optimal"?
Superrationality involves both players making the same decision. I.e., given choice A or B, and assuming that the other player makes the exact same choice as me, in which am I better off?

Pareto optimality may or may not coincide. It's certainly easy to construct games or scenarios where the Pareto optimal solutions involve different players making different decisions, which would not occur using superrationality.
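In code form, that decision rule is just a comparison down the diagonal of the payoff table (same illustrative payoffs as the sketch above):

Code:
def superrational_choice(payoffs, moves="CD"):
    # Assume every player mirrors my choice, so compare only the
    # "diagonal" outcomes: everyone plays C vs. everyone plays D.
    return max(moves, key=lambda m: payoffs[(m, m)])

payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
print(superrational_choice(payoffs))  # C: mutual cooperation beats mutual defection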
05-30-2013 , 11:23 AM
Quote:
Originally Posted by PairTheBoard
The majority view probably agrees with you. I'm not so sure though. I'm seeing this more as a criticism of the notion of best reasoning whereby two equally rational people cannot escape the PD choice that screws them both. There just seems to be something lacking in such a notion of "rational" which deems it irrational for both to come to a decision that profits them both instead. If the better outcome results when both players equally employ this alternative reasoning, then maybe the alternative reasoning ought to be given consideration as somehow better than the so-called "best reasoning" that screws them both.

Quote:
Originally Posted by zumby
This tension is exactly why game theorists are interested in PD: the Nash equilibrium is not Pareto optimal. How is "superrational" different from "Pareto optimal"?
I don't know. This is the first I've heard of Pareto optimality. It sounds like pretty much the same idea as superrationality. I'm not an expert on either concept though. It sounds like "superrational" could be defined as the reasoning that seeks to produce the Pareto optimal solution. However, it's probably better to ask the guy who came up with the "superrational" idea. He might have tweaks to that definition in mind. Maybe fairness issues.


Pareto Efficiency - Wiki
Quote:
Originally Posted by Link
Pareto efficiency, or Pareto optimality, is a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off. The term is named after Vilfredo Pareto (1848–1923), an Italian economist who used the concept in his studies of economic efficiency and income distribution. The concept has applications in academic fields such as engineering.

Given an initial allocation of goods among a set of individuals, a change to a different allocation that makes at least one individual better off without making any other individual worse off is called a Pareto improvement. An allocation is defined as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements can be made.

PairTheBoard
05-30-2013 , 11:49 AM
Quote:
Originally Posted by zumby
How is "superrational" different from "Pareto optimal"?
Pareto optimality seems to be about distributing a finite amount of goods. I'm not sure it can be applied to the PD, since that game has unbounded resources.