durkadurka, you only believe in free will because....(LC)
Why is modeling a chess move (which is an action) different from modeling turning on a light (which is an action)?
Not according to the definitions I've presented.
Furthermore, it is not evaluating the actions. My definition is clear on this - the actions are what must be modeled, and these modeled actions are what must be evaluated. Your switch doesn't do that.
I'm not using that sense of the word "evaluate."
The actions must be modeled, then the actions must be enumerated and evaluated. The actions. Moving a knight to c3 is an action. Eating toast is an action. The state of a light bulb is not an action.
These word games are fun:
The state of a knight being on c3 isn't an action.
The state of having toast in your stomach isn't an action.
See? I can throw "the state" in the sentence and change the wording around and NOTHING is an action. Awesome!
What the programmer does is irrelevant. The logic of the chess game is contained within the chess program (and within the memory of the actively running program).
In particular, the program runs through each legal move, models each one in succession, and evaluates them in order to determine the move to actually make. Your circuit doesn't do these things.
I don't know the terminology. I do know that the computer typically has a tactics "engine" that calculates the board permutations following from a position. It also has a "library" of recognized board positions that have been analyzed prior to play, and it can derive conclusions on the basis of those. Naturally, it has the logic of chess (winning conditions, legal moves, etc) programmed right inside it.
The program then evaluates moves - typically if it can do so, it will determine whether a particular move leads to a necessary win, a necessary draw, a necessary loss, or permutations resulting in multiple outcomes. Wins are preferred to draws are preferred to losses, and heuristics for determining the best move in a mixed scenario are written into the system. Among differing moves in a particular category, "ties" are won either by length of play (for example, the shortest possible win or the longest possible loss) or by a PRNG if there is no other difference - possibly even some kind of psychological algorithm, I don't know how advanced these things get.
Often, many permutations will lead to uncertain outcomes (i.e. the game will continue beyond the computer's ability to calculate further). In these cases the computer can use a heuristic to rank the different moves (probably taking into account the number of pieces remaining, general positional advantages, and so on).
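The enumerate/model/evaluate loop just described can be illustrated with a toy sketch. To be clear, this is not any real chess engine: the "game" is a tiny hand-made tree, and all the names here are illustrative. Leaf values are scored from the perspective of the side to move at that leaf (+1 won, 0 drawn, -1 lost), and wins are preferred to draws are preferred to losses, as described above.

```python
# Toy game tree: each node lists the positions reachable by one move.
TREE = {
    "start": ["safe", "risky"],      # our two legal moves
    "safe":  ["draw"],               # opponent's only reply ends in a draw
    "risky": ["punish", "blunder"],  # opponent may punish us, or blunder
}
# Outcomes at terminal positions, for the side to move there.
LEAF_VALUE = {"draw": 0, "punish": -1, "blunder": +1}

def negamax(node):
    """Value of `node` for the side to move (win > draw > loss)."""
    children = TREE.get(node, [])
    if not children:                 # terminal position: read off the outcome
        return LEAF_VALUE[node]
    # Enumerate every legal move, model the resulting position, and
    # evaluate it recursively; the opponent does the same to us.
    return max(-negamax(child) for child in children)

def choose_move(node):
    """Pick the move whose modeled outcome is best for the side to move."""
    return max(TREE[node], key=lambda child: -negamax(child))
```

Here `choose_move("start")` picks "safe" (a forced draw) over "risky" (which the opponent can punish), which is the ranking described above, minus the tie-breaking and heuristic layers a real engine would add.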
Your circuit, on the other hand, contains nothing resembling chess - it's a switch external to the chess board. And... that's it. It doesn't do the same things.
It sure sounds like you're basically saying the chess computer is choosing because the algorithms it runs (and thus the circuitry necessary to run those algorithms) are more complex. They're both based on fundamentally the same principles, though.
Uh, no. The fact that they are both composed of circuits doesn't imply that they do the same things.
Sorites talk is inane. It doesn't matter where the exact line between "red" and "orange" is. Some shades are clearly red, others clearly orange, and others in-between. If you want to start a thread on "whoa dude, like, where do you draw the line?" then go right ahead. But don't insult me with a thousand questions in the vein of "what about this color, is it red?" "How about this other one here? Red, or orange?" "Oh, trick question, this one was actually purple." That's bull**** designed to drag down the debate so you can keep it on a rhetorical footing.
How about this. Here is a simple game:
You start with 4 beans in a pile. There are two players. The first player can take 1 or 2 beans. Then the second player does the same. The game continues until there are no beans left. The player who takes the last bean wins.
Yes or no, can you work with this example?
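The bean game above happens to yield to exactly the enumerate-and-evaluate procedure under discussion. A minimal Python sketch (the names `wins` and `best_take` are mine, purely illustrative) that models every available action and evaluates the resulting positions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    """True if the player to move can force a win with n beans left."""
    if n == 0:
        return False  # the previous player took the last bean and won
    # Model each legal action (take 1 or 2 beans) and evaluate the
    # position it leaves the opponent in.
    return any(not wins(n - take) for take in (1, 2) if take <= n)

def best_take(n):
    """A winning take if one exists, else None."""
    for take in (1, 2):
        if take <= n and not wins(n - take):
            return take
    return None
```

With 4 beans, the first player can force a win by taking 1 (leaving 3, a losing position for the opponent).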
Perhaps you can help me understand the metaphilosophy of the free will debate because I'd rather be wrong than naive about what's going on (also if you recall, I have hangups about the lack of progress in philosophy)
<snip>
<snip>
I was being serious when I said I am not very familiar with the literature on this topic, so I don't know how helpful I can be. I've read up through Frankfurt and Watson, but don't know Fischer or Kane's work, or that of contemporary incompatibilists very well at all. That being said, your summary seems like a fair statement of the problems facing each view except, I suppose, your own. The problem is that people do very strongly believe in moral responsibility. Furthermore, as pointed out by P. Strawson, it becomes difficult to make sense of much more of our emotional life if we don't believe in some form of free will. Seems to me that these things need as much explanation as anything else and the pessimist fails to do so.
I suppose what the pessimist (by the way, where does this term come from?) can do is accept something like an error theory regarding morality. But that is only a promissory note, not a theory...
And I'm reiterating the point that it's not epistemological. That is, there's nothing to "figure out" regarding madnak's* concept of "emergence."
Knowing or not knowing how many parts we can remove from the brain before it ceases to function isn't particularly important when talking about what madnak means by "emergence."
It seems as if you're looking for an empirical answer to a non-empirical question.
I put in the *s because I am tempted to try to prove lack of freedom in Durka and Aaron W (specifically) purely through an inability to "choose" to cease carrying on a conversation, despite stated desire to do so.
Back to what I think is a meatier discussion: Do you think that either emergence or dualism is necessary for free will? Would either one suffice in and of itself? In other words, would dualism be sufficient? Would emergence be sufficient? Or is emergence (or dualism) + <insert something here> sufficient?
Just to make sure I understand you and Durka, it seems that you are not against the idea of emergence, just madnak's depiction of it, right?
Nuh uh. Emergence (as far as consciousness goes) is not a non-empirical question. To the point, there is a body of research (still in the baby stage - maybe 15 years of research) trying to address the nature of consciousness in terms of brain activity.
Madnak's* view might be non-empirical, but his writing is so disorganized that I can interpret it as meaning quite a few different, contradictory things.
Dualism? No. It's conceivable that there is a physical choice mechanism. I haven't the slightest idea what it would look like, but it's not outside the realm of possibility.
The origin or popularization of the term might actually be from Freedom and Resentment, "Some philosophers say they do not know what the thesis of determinism is. Others say, or imply, that they do know what it is. Of these, some — the pessimists perhaps — hold that if the thesis is true, then the concepts of moral obligation and responsibility really have no application, and the practices of punishing and blaming, of expressing moral condemnation and approval, are really unjustified."
Although this might be read as grouping libertarians with the pessimists. I'd say pessimistic incompatibilism is just the view that free will is incoherent regardless of the truth of determinism.
No, "pessimist" only applies to incompatibilists IF the deterministic thesis is true. Basically, it's a rhetorical word for the incompatibilist. Calling them pessimists is unfair; they'd call themselves realists.
edit: actually, no, what you said is not fully correct. Pessimistic incompatibilism is the view that compatibilism does not give us the kind of moral responsibility we want or think we have (essentially because people do not have ultimate control over what they do) and libertarianism doesn't give us a coherent account of how indeterminism gives us the right kind of ultimate control (the point about randomness that we've touched on already). Or, put more directly, pessimism is that free will/moral responsibility are incoherent regardless of the truth of determinism. So it's not just pessimism about the possibility of free will given determinism, it's pessimism about the possibility of free will given indeterminism.
I was just saying that Strawson's phrasing in the quoted remark might lead one to be confused about what pessimism is.
What you really want to point to is more like a "pessimist" position in general, separate from incompatibilism/compatibilism. If someone holds that free will is incoherent whether or not determinism is true, then their position is not coextensive with incompatibilism.
A "pessimistic" incompatibilist is silly (superfluous). This suggests that there are pessimistic compatibilists (which clearly there are not). ducy?
What I said was correct. There are two incompatibilist positions: libertarianism and hard determinism. Both hold that if determinism is true, then responsibility is impossible. That's not "pessimistic." Using that word is just rhetorical. There's nothing to distinguish between an incompatibilist and a "pessimistic incompatibilist": they're the same thing! Adding "pessimist" or "pessimistic" is merely rhetorical and doesn't distinguish a separate position.
edit: fixed a sentence in first paragraph
I know you didn't make it up; that doesn't mean it's not rhetorical.
Look, the pessimistic meta-induction in science is a sensible use of "pessimistic." But it doesn't fit here. There's nothing "pessimistic" in any of these positions. I think that you have a very hard time separating descriptive from prescriptive statements.
"Pessimistic" incompatibilist still doesn't pick out a unique position. What you're trying to pick out is someone who thinks that whether determinism or indeterminism is the case, neither are sufficient for free will. That's not pessimistic in any sense. That's a claim about the incoherence of free will period.
http://www.bu.edu/law/central/jd/org...ments/KANE.pdf
last paragraph
"In conclusion, I am not what Dworkin calls a “pessimistic incompatibilist” –
one who believes free will is incompatible with determinism, but who thinks
that incompatibilist free will is impossible, so that no one is ultimately
responsible for doing what they do."
That's Robert Kane and Dworkin; I imagine you will man up and accept that you're acutely mistaken about what the term means.
On the other hand, many utilitarians will claim that punishment can only be justified because it leads to better consequences, not just for past sins. On this account, I don't see how punishment in hell can be justified (unless you are an escapist about hell I suppose).
But on the retributive view of punishment, then yes, I would say that punishment in hell (more properly purgatory) is justified on compatibilist grounds.
That was a good paper but at the risk of making a facile analogy, it becomes difficult to make sense of much more of our spiritual life if we don't believe in some form of God, but that's not an argument for the existence of God.
On the other hand, I'm not sure it is possible for us to act as if we or those around us are completely controlled by prior, purely physical or psychological causes. I'm not sure it is impossible, but I don't see how it would be done.
I think a better analogy is consciousness. Many people are very hesitant to give up a belief in consciousness because it seems so central to our experience of the world, so how are we to make sense of theories that seem to deny the real existence of consciousness?
Pessimism about free will or pessimistic incompatibilism; P. Strawson's son Galen has his "Basic Argument" against moral responsibility, here's a summary http://www.rep.routledge.com/article/V014SECT3 I don't think this is the optimal argument but it's close. Some Nietzsche there that you've probably heard before.
On the contrary. It is really not that difficult to make sense of spiritual life without believing in God. We can see this because even many people who have highly developed spiritual lives don't believe in God.
Concerning your second point, I agree that it becomes very difficult to make sense of our prephilosophical emotional life if one thinks moral responsibility is incoherent (what would then 'explain' emotional life would just be cognitive science), but that doesn't make the argument against the possibility of moral responsibility theoretically weaker.
PS If you are going to answer the first question, no Frankfurt on the rejection of the PAP unless you really think it's a good argument
Needed clarification: Durka's and (I think) Aaron W.'s "responsibility" means specifically "the ultimate cause of an event." (Correct me if I am misinterpreting you, Durka.)
In other words, "I am a slut because my step-daddy done touched me in a wrongful way," is not meaningful, given free will.*
Free will negates any "because" statement for all free beings.
*I made the worst possible strawman available for you to attack.
Honestly, I derive quite a bit of entertainment from arguing with madnak. In that intellectually frustrating sort of way.
Emergence? It depends on what you mean. I suspect it's not necessary because it's not at all clear that complex structures are necessary for "will" to exist.
I don't really know because I don't think I have a clear sense of what "emergence" really is (if it's supposed to be a non-trivial concept).
I've very specifically said that the agent is never the 'ultimate' cause of acts for which they're responsible (if you mean it in the Strawson sense).
The agent is like a kayaker in rapids: they have only some control over their direction but not 'ultimate' control.
OK, what the heck do you mean by "modeling an action" then? Turning on a light is an action. If-then statements are used ALL THE TIME in programming and computer modeling.
OK -- how do you model an action then? Turning on a light is an action. Everyone knows this. If-then statements are used all the time in computer modeling. So if I put the action of turning on a light IN a simple model, how am I NOT MODELING THE ACTION?
No, of course you aren't. And then when I use "weigh and rank" to analyze your arguments, you didn't mean THAT sense of the words "weigh and rank".
LOL. I love how you switch the wording around. The state of a light bulb is not an action. Check this out: if you change the wording to "turning on or turning off a light bulb" THAT is an action.
The logic of my simplified chess game is contained within my circuit. It just happens to be hooked up to a couple of light bulbs rather than some computer monitor pixels.
If the chess game is simple enough, yes it does. Just take the game my chess circuit models to be an endgame where there are only 2 legal moves. If you want it to model more complicated chess scenarios, just add circuit elements until you're done. It's choosing the moves in every instance according to your definition of choice.
So if we restrict our chess game to a very simple instance where it can only model and evaluate and pick between two possible legal moves (and both result in the game ending within say, 2 moves), is it no longer choosing because it's not utilizing all these complex board permutations? Because my simple chess circuit + light bulb can do that; if not in its current state then definitely with some simple variation of it.
If yes, then okay, by that definition "complexity" is the difference between your circuit and my computer. I wouldn't call it complexity, but if that's what we're doing, then your circuit is not complex enough to model an action such as turning on a light bulb.
No, it's not.
Solipsism: we can't know that other minds exist.
Monism: all there is is 'mental' stuff (which is basically Berkeley-ish).
Your position (?): All there is is YOUR mind.
Totally different to go from "all I can know is what is present to my consciousness" to "all there is is my consciousness."
What happens if we include the premise "if I can't know of it, even in theory, then it does not exist?"
No, but it seems like you get the picture. Maybe not.
If this was the point, then I don't like using the spin of a particle as a model because it's not very detailed. Can the spin of a particle be considered a model of the outcome of a coin flip? Probably. But that's stretching the term awfully far; I can see no reason to use "model" in that context most of the time. Maybe if particle spin is used to represent a "bit" in memory. In terms of whether it's "really" a model, in philosophical terms, yes, I'd say that technically it is.
I'm not sure where you're going with the example, but yes, it seems fine to me.
I just need a basic example of a model. Something concrete. Something where you can't wave your magic "emergence" wand and pretend like the details you're ignoring aren't there.
This all depends on whether the concept of a "model" is arbitrary. That is, if there's nothing that makes it a model (or prevents it from being a model) other than your declaration that it is or is not a model.
If the model concept is arbitrary, then I "get the picture" because there's nothing to get. There's nothing of value to say against fiat. If it's not arbitrary, then somewhere I should be able to find what the essential features of a "model" are, and then take it back to your definition and see whether I think it says anything meaningful.
Consider the following pseudo-programming code:
If there are four beans, then take one bean.
If there are three beans, then take one bean.
If there are two beans, then take two beans.
If there is one bean, then take one bean.
If a computer is programmed in this manner, does it count as having modeled the game?
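For concreteness, that pseudocode can be transcribed literally into runnable form as a plain lookup table (a hypothetical sketch; the names `MOVE_TABLE` and `take` are illustrative). It plays the bean game, but each response was precomputed by the programmer; nothing in it models the opponent's replies.

```python
# The if-chain above, transcribed literally: pile size -> beans to take.
MOVE_TABLE = {4: 1, 3: 1, 2: 2, 1: 1}

def take(beans):
    """Return the precomputed number of beans to take from a pile."""
    return MOVE_TABLE[beans]
```

Whether a precomputed table like this counts as having "modeled" the game is exactly the question being posed.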
As a model of the actions themselves, no.