durkadurka, you only believe in free will because....(LC)
However, I will also admit to only a surface familiarity with the literature on the subject and a general aversion to metaphysics. At some point I will have to get much clearer on this subject--which is part of why I've been paying attention to this thread!
Compatibilism is almost fine because there's little overhead in what determinism is taken to mean. And I don't object in general to compatibilist-style accounts of agency. But, as a general principle, I do not see how any compatibilist account of making choices is compatible with holding agents morally responsible for what they do. For example, I found Frankfurt's paper on moral responsibility exceptionally irritating.
On the other hand, what exactly is philosophically wrong/insufficient about pessimism? Where is the overreach in that argument? Yes, it's quite annoying to be forced to say things like "Hitler was not morally responsible for what he did" and even more annoying when you have no philosophical defense for judging others when they aggrieve you, but if pessimists are right then there's just no way around that.
Anyway, my point is not just that pessimism is right, but that it's an example of an easy and decisive philosophical result, which stands out because philosophy has few decisive results.
That is just repeating your claim that we can't figure it all out, right?
Knowing or not knowing how many parts we can remove from the brain before it ceases to function isn't particularly important when talking about what madnak means by "emergence."
It seems as if you're looking for an empirical answer to a non-empirical question.
Madnak: your definition of emergence is still extremely controversial.
There are multiple ontological interpretations of what you're describing. What sort of 'reality' do the 'emergent' properties have? How do they come about from their substrate causes? How do they interact with those causes?
It's a thin line between that and dualism.
A "gas" consists of "atoms" but a particular atom is not a gas.
"Water" consists of molecules of H2O, but a single H2O molecule would not be "water" in the same sense of the word. It is a "water molecule."
But this isn't even important. What's important is that "water" has properties different from "a molecule of H2O." In a similar sense, "the brain" has properties different from "a neuron."
The relationship "consists of" is extremely nebulous with respect to the properties of the object. A "pair" "consists of" two objects. But it is not the properties of the objects which define "pairness." In fact, the entire *CONCEPT* of pairness exists in the absence of the objects of which it may be composed.
So what reason would there be to expect that the properties of what an item "consists of" should be that which defines the properties of the item itself?
The only thing I need to do in order to cut you off at the pass is to show that a brain doesn't necessarily have the same properties as a neuron.
In other words, just because a neuron is incapable of choice doesn't imply that the brain is incapable of choice.
No, choice was defined by you as: ""to take a course of action as a result of an internal process that enumerates and evaluates modeled actions."
I've shown that both a computer built to simulate chess and a simple circuit have these properties. You haven't shown there's anything "special" about the computer other than it is modeling chess moves rather than the on/off state of a light bulb. So what is special about chess?
In my example, the "action" is to turn the light bulb on or off. This is modeled by the circuitry. In your example the "action" is a chess move. This is modeled by the circuitry. Why is the chess move any more special with respect to the circuitry choosing or not choosing?
No, I didn't say the computer is the same as a circuit. I said it is built entirely on circuitry, and the way it models phenomena is through utilization of this complex circuitry. The same as my light bulb circuit. The light bulb circuit can't model chess moves simply because it isn't complicated enough. Add enough circuit components and it indeed can model chess moves.
If you're so hung up on chess, then hook my circuit up to a simple program that receives inputs from a chess board. The program states, "if the game is in state X1, then close the switch and turn on the light bulb. And if the light bulb is on make move Y1. If the game is in state X2, then open the switch and turn off the light bulb. If the light bulb is off make move Y2."
What EXACTLY is different with my light bulb circuit modeling chess moves vs. your computer (that is made up of more complicated circuitry) which simply takes a greater range of inputs and gives an output, but rather than turning on/off a light bulb it turns on/off specific pixels on a computer screen?
Here's a hint: I used to have a chess board that had internal circuitry (a computer) which modeled chess. Guess how it would tell me to make the next move for it? It would light 2 bulbs up corresponding to locations on the chess board.
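The lookup-style "program" described above can be sketched in a few lines. This is only an illustration of the argument, with the hypothetical state labels X1/X2 and move labels Y1/Y2 taken from the post, not a real chess interface.

```python
# A minimal sketch of the lookup-style chess "program" described above.
# State labels "X1"/"X2" and moves "Y1"/"Y2" are the post's hypotheticals.
def lightbulb_chess(game_state):
    if game_state == "X1":
        light_on = True   # close the switch, light bulb on
        move = "Y1"       # if the light is on, make move Y1
    else:
        light_on = False  # open the switch, light bulb off
        move = "Y2"       # if the light is off, make move Y2
    return light_on, move
```

The point of the sketch is that nothing in the conditional itself distinguishes "chess move" from "light bulb state"; the labels are interchangeable.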
I've shown that both a computer built to simulate chess and a simple circuit have these properties. You haven't shown there's anything "special" about the computer other than it is modeling chess moves rather than the on/off state of a light bulb. So what is special about chess?
It's the actions that need to be modeled. Not the state of the light bulb.
You said that a computer is the same as a circuit, and has the same properties. That if a computer can choose, then so can a circuit.
That's false. Just because a computer is made up of circuits doesn't mean it's just a more complicated circuit - it has properties that circuits don't have. A chess computer, for example, has the properties sufficient for choice.
In this case it's different; we can (and do) calculate the properties of "a gas" and the properties of "water" based on the properties of the individual molecules.
But this isn't even important. What's important is that "water" has properties different from "a molecule of H2O." In a similar sense, "the brain" has properties different from "a neuron."
I am not denying that there are characterizations of objects that have new properties. I'm just saying that your understanding is so broad and vague that it doesn't actually mean anything.
Edit: You're treating "emergence" like a magic wand that you can wave and assert whatever you want at whatever level you want to assert it.
In other words, just because a neuron is incapable of choice doesn't imply that the brain is incapable of choice.
This isn't where I was going.
In my example, the "action" is to turn the light bulb on or off. This is modeled by the circuitry. In your example the "action" is a chess move. This is modeled by the circuitry. Why is the chess move any more special with respect to the circuitry choosing or not choosing?
With Aaron I've already defined a "model" as a "representation or simulation," and "evaluate" as "weigh and rank." This isn't happening. There is no representation of the actions nor simulation of the actions, and it's a stretch to claim the actions are being evaluated.
No, I didn't say the computer is the same as a circuit. I said it is built entirely on circuitry, and the way it models phenomena is through utilization of this complex circuitry. The same as my light bulb circuit. The light bulb circuit can't model chess moves simply because it isn't complicated enough. Add enough circuit components and it indeed can model chess moves.
If you're so hung up on chess, then hook my circuit up to a simple program that receives inputs from a chess board. The program states, "if the game is in state X1, then close the switch and turn on the light bulb. And if the light bulb is on make move Y1. If the game is in state X2, then open the switch and turn off the light bulb. If the light bulb is off make move Y2."
While I might be willing to play on "evaluate," what's being evaluated is not actions (or models thereof) but the specific switch condition. And no, I don't buy that a switch in the "on" position is a model of "knight to c3." That's like saying a blank piece of paper is a map of New York City.
What EXACTLY is different with my light bulb circuit modeling chess moves vs. your computer (that is made up of more complicated circuitry) which simply takes a greater range of inputs and gives an output, but rather than turning on/off a light bulb it turns on/off specific pixels on a computer screen?
Does Madnak understand multiple realizability?
And meaningless. You gain nothing by bringing it up.
When you brought up emergence, I had a feeling you were going to go with some nebulous statement that doesn't really say anything at all. I was right.
You are trying to get to a point where you have the "emergence" of a "model." You have that "automatically and robotically following instructions" is not a decision. You can say this of and/or gates and consequently of an if-then statement.
Does the existence of an if-then statement give you a model?
The reason I brought up emergence is that this is where I thought you were going.
Mad: the very topic of models and modelling is highly controversial and is a big contemporary topic in the philosophy of science (though it's a topic that has been around in metaphysics for quite some time).
With Aaron I've already defined a "model" as a "representation or simulation," and "evaluate" as "weigh and rank."
I can evaluate f(x) = x^2 given any value of x without "weighing and ranking" it. I can evaluate if-then statements without "weighing and ranking" them.
And if you really really insist on this silly definition of evaluate, then fine, my circuit is "weighing and ranking" the state of the light bulb and switch:
1) If switch = closed, then light on is weighed higher/ranked higher than light off.
2) If switch = open, then light off is weighed higher/ranked higher than light on.
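The two rules above can be written out as a "weighing and ranking" procedure, purely to illustrate the point being made: under the definition on offer, even this trivial mapping counts as assigning weights and ranking outcomes. The function name and state labels are illustrative, not from the thread.

```python
# A sketch of the "weighing and ranking" reading of the light bulb circuit:
# each switch state assigns a higher weight to one of the two light states.
def rank_light_states(switch_closed):
    if switch_closed:
        weights = {"light on": 1, "light off": 0}  # rule 1
    else:
        weights = {"light on": 0, "light off": 1}  # rule 2
    # rank the two light states from highest to lowest weight
    return sorted(weights, key=weights.get, reverse=True)
```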
This isn't happening. There is no representation of the actions nor simulation of the actions, and it's a stretch to claim the actions are being evaluated.
The light bulb circuit doesn't model anything.
A conditional statement is not a model of a chess move. The chess moves aren't being evaluated at all.
If you are going to show that this is wrong, it is pretty clear you are going to have to explicitly show what is different about the circuitry in each case, and why in one situation the circuitry is "modeling" something and in another case it is not.
You could argue that the condition of the switch is being evaluated in the loosest sense, but the moves are not being evaluated.
The connection between the circuit and the chess moves is wholly arbitrary.
The circuit does not contain any internal representations of the chess moves - if it doesn't represent the chess moves then there is no modeling going on.
While I might be willing to play on "evaluate," what's being evaluated is not actions (or models thereof) but the specific switch condition.
And no, I don't buy that a switch in the "on" position is a model of "knight to c3." That's like saying a blank piece of paper is a map of New York City.
The difference isn't in the inputs and outputs, but in the process (evaluating modeled actions).
If you are going to show this is false, you need to show how one configuration of circuitry components "evaluates modeled actions" while another one cannot.
Matt, you're giving a good line of argument pushing on Madnak's position.
The worry is that his use of 'model' and what it is for something to model something is probably not very precise and may lead to a number of problems for his position.
And meaningless. You gain nothing by bringing it up.
When you brought up emergence, I had a feeling you were going to go with some nebulous statement that doesn't really say anything at all. I was right.
You are trying to get to a point where you have the "emergence" of a "model." You have that "automatically and robotically following instructions" is not a decision. You can say this of and/or gates and consequently of an if-then statement.
Does the existence of an if-then statement give you a model?
Or more important to the conversation, what does it take to get a "model"?
Still, I don't think it has much bearing on this debate. I think we're headed for Sorites territory, if we're not already there. Since I don't believe in the "reality" of boundaries between things, any boundary I accept can be pointed out as arbitrary. "Why did you draw the line here, instead of a little bit to the left or right?"
I don't need to invoke those beliefs here, but it would be silly of me to define my terms in a manner inconsistent with those beliefs. So I'm going to be inundated with "why does this count as a heap, but not that?"
There is a correspondence between the model and the thing it models. Typically this is the structure (spatial, physical, logical, informational) of the thing being modeled; some element of that structure will be preserved in the model itself.
A chess knight (the physical piece) can be described as a model horse; it has a flowing mane and a snout in the shape of a horse. A marble cannot.
I think I've made my point clear now and I'm not going to spend any more time discussing why a marble resembles a horse in some way and is therefore a model of a horse.
He's a monist...time to move on.
Everything is information. Why couldn't Leibniz think of that?
Information is able to be 'stuff'...that's amazing. How does it do that?
How does something being a sign (semiotics) lead to the production of matter from mere 'information'?
I never said that turning the light bulb on and turning the light bulb off were modeled by the circuitry. I said that the light bulb turning on/off is the ACTION being taken per the definition of choice that you gave. What is being MODELED by the circuit is the if-then statement "if switch = closed, light = on", "if switch = open, light = off".
The circuit is REPRESENTING the if-then statement. It is EVALUATING whether the switch is open or closed. The definition you gave for "evaluate" as "weigh and rank" is ridiculous. You can evaluate something without "weighing and ranking" it.
I can evaluate f(x) = x^2 given any value of x without "weighing and ranking" it. I can evaluate if-then statements without "weighing and ranking" them.
And if you really really insist on this silly definition of evaluate, then fine, my circuit is "weighing and ranking" the state of the light bulb and switch:
"to take a course of action as a result of an internal process that enumerates and evaluates modeled actions."
The actions must be modeled, then the actions must be enumerated and evaluated. The actions. Moving a knight to c3 is an action. Eating toast is an action. The state of a light bulb is not an action.
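The quoted definition can be sketched as a procedure, to make the disputed structure explicit: the inputs are modeled actions, which get enumerated and evaluated before one is taken. All names here are illustrative, not from the thread.

```python
# A sketch of the quoted definition of choice: take a course of action
# as a result of an internal process that enumerates and evaluates
# modeled actions. The evaluation function is supplied by the chooser.
def choose(modeled_actions, evaluate):
    candidates = list(modeled_actions)               # enumerate the modeled actions
    scored = [(evaluate(a), a) for a in candidates]  # evaluate each one
    return max(scored)[1]                            # take the top-ranked action
```

On this reading, the debate is over what counts as a "modeled action" input to such a process, not over the enumerate-and-evaluate steps themselves.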
Yes they are. The moves are being ranked based on the initial state of the chess board.
If you are going to show that this is wrong, it is pretty clear you are going to have to explicitly show what is different about the circuitry in each case, and why in one situation the circuitry is "modeling" something and in another case it is not.
Yes they are.
As it is in a CHESS SIMULATION. The programmer is arbitrarily making a connection between the circuitry and the program to model the chess game because that's his goal. It's not like there is some magical set of circuits that are "chess simulation" circuits that possess special madnak free choice electrons.
Maybe this isn't the only possible way to program a chess computer - but a computer programmed in another way would not be making choices.
OK -- show me as explicitly as you can possibly muster how the circuitry in YOUR chess simulation computer contains an internal representation of the chess moves, while MY circuit that is explicitly linked to a chess board does not.
The program then evaluates moves - typically if it can do so, it will determine whether a particular move leads to a necessary win, a necessary draw, a necessary loss, or permutations resulting in multiple outcomes. Wins are preferred to draws are preferred to losses, and heuristics for determining the best move in a mixed scenario are written into the system. Among differing moves in a particular category, "ties" are won either by length of play (for example, the shortest possible win or the longest possible loss) or by a PRNG if there is no other difference - possibly even some kind of psychological algorithm, I don't know how advanced these things get.
Often, many permutations will lead to uncertain outcomes (i.e. the game will continue on beyond the computer's ability to calculate further). In these cases the computer can use a heuristic to rank the different moves (probably taking into account number of pieces remaining, general positional advantages, and so on).
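The ranking process described in the two paragraphs above can be sketched as a toy: wins are preferred to draws are preferred to losses, and among uncertain outcomes a heuristic breaks ties. This is a deliberately simplified illustration with hypothetical outcome labels; a real engine derives the outcomes by searching a game tree.

```python
# Toy sketch of the move-ranking described above. Outcome labels and
# heuristic values are hypothetical; a real engine computes them by search.
OUTCOME_SCORE = {"win": 3, "draw": 2, "mixed": 1, "loss": 0}

def rank_moves(moves):
    """moves: list of (move, outcome, heuristic) tuples.
    Rank by outcome category first, then by heuristic score."""
    return sorted(moves,
                  key=lambda m: (OUTCOME_SCORE[m[1]], m[2]),
                  reverse=True)

candidates = [("Nc3", "mixed", 0.4), ("e4", "win", 0.0), ("a3", "loss", 0.9)]
best_move = rank_moves(candidates)[0][0]  # "e4"
```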
Your circuit, on the other hand, contains nothing resembling chess - it's a switch external to the chess board. And... that's it. It doesn't do the same things.
Either both circuits are "evaluating modeled actions" or neither of them is. Why? Because they are both circuits which utilize the same principles of electronics.
He's a monist...time to move on.
Everything is information. Why couldn't Leibniz think of that?
Information is able to be 'stuff'...that's amazing. How does it do that?
How does something being a sign (semiotics) lead to the production of matter from mere 'information'?
But the fundamental premise implies that matter is information, so that question is nonsensical.
Again, as I've repeated, I'm a solipsist. Everything isn't information, everything is my mind. But that conversation is far away from this one and not something to get sidetracked on.
There is a correspondence between the model and the thing it models. Typically this is the structure (spatial, physical, logical, informational) of the thing being modeled; some element of that structure will be preserved in the model itself.
Your notion of correspondence is once again vague enough to be basically anything. But sometimes you say it is a model, and sometimes it's not.
A chess knight (the physical piece) can be described as a model horse; it has a flowing mane and a snout in the shape of a horse. A marble cannot.
I think I've made my point clear now and I'm not going to spend any more time discussing why a marble resembles a horse in some way and is therefore a model of a horse.
Solipsism isn't monism, ducy?
There is a correspondence between up and down spin and the two sides of a coin. Does this mean that spin is a "model" of a coin? Does it "model" the states of "on" and "off"? Does it "model" left/right?
Your notion of correspondence is once again vague enough to be basically anything. But sometimes you say it is a model, and sometimes it's not.
Sorites talk is inane. It doesn't matter where the exact line between "red" and "orange" is. Some shades are clearly red, others clearly orange, and others in-between. If you want to start a thread on "whoa dude, like, where do you draw the line?" then go right ahead. But don't insult me with a thousand questions in the vein of "what about this color, is it red?" "How about this other one here? Red, or orange?" "Oh, trick question, this one was actually purple." That's bull**** designed to drag down the debate so you can keep it on a rhetorical footing.
If you want to clarify, then ask questions that are conceptually relevant. I already told you I'm not going to bother answering questions in the form "is x level of detail 'enough' detail?"
No, it's not.
Solipsism: we can't know that other minds exist.
Monism: all there is is 'mental' stuff (which is basically Berkeley-ish).
Your position (?): All there is is YOUR mind.
Totally different to go from "all I can know is what is present to my consciousness" to "all there is is my consciousness."