durkadurka, you only believe in free will because....(LC)

06-15-2010 , 10:56 PM
Quote:
Originally Posted by Aaron W.
Again, time is seemingly plentiful.



I can accept that you're working off of theories and see where it takes us. Unless you throw out something that's completely weird, I doubt this will be an issue. I'm not sure how those theories (presumably, theories about the functioning of the human brain) will play into a discussion about how to make sense of these concepts under determinism, but there's no harm in it.

Can you briefly summarize where you are thinking of taking the conversation?
Summarizing briefly does not come naturally to me, but I will try.

I expect that we are suffering under a difference of definition and a difference of consequence. I think that exploring these differences may be fun and is critical to both of our beliefs. I sincerely hope that we come to an agreement, but the process should be fun either way.

Quote:
This is the best way to go.
HEAR YE ALL. IGNORING YOUR POSTS IS NEITHER AGREEMENT NOR DISAGREEMENT

Quote:
That's okay. I haven't taken a single course in psychology. But as long as the concepts are not too convoluted, I think I'll be able to keep up.
Just let me know if my arguments are convoluted. It is the job of the claimer to prove the claim.

Some of it might need further explanation. I will do my best to be the explainer, when necessary.

Quote:
I hope I haven't missed this from elsewhere, but how extensive is your psychological background?

ABD Clinical Psychology. My focus was on ethnic differences on personality tests. I changed (partially) the way in which ethnic differences in test results are viewed. Prior to my work, the objective was to show that different groups scored the same in order to show ethnic non-bias. After my work, in which I showed that ethnic correlation is unimportant compared to other, more important variables, I got kind of cranky and dropped out of research.
06-15-2010 , 11:15 PM
(Cool, this thread is the current overall SMP forum leader in number of replies across ~190 pages of current and past threads)
06-15-2010 , 11:47 PM
Indeed...it definitely wasn't (LC)
06-16-2010 , 03:50 AM
Quote:
Originally Posted by durkadurka33
I think that this is a good question. I'm not sure. In the case of people, I think that we can make irrational decisions. So, when we choose the utility maximizing option, we are being rational and we choose that option for those reasons, but we had the ability to do otherwise (ie, be irrational). So, just because something is utility maximizing doesn't rule-out that it was free.
The utility maximization wasn't really important with respect to my point.
What I'm getting at: Deep Blue is a deterministic machine that is confronted with a set of options (all the valid moves) and chooses the one it prefers based on an algorithm that is unique to that one machine. Yes, the Laplacian demon could have told you its next move in 1907, but that's only because it knows Deep Blue so well. Deep Blue would choose differently if it were programmed differently, ceteris paribus. A different person would choose differently if they were (exactly!) in Deep Blue's position. There is something about the inner workings of this machine that is necessary for a specific event to play out. How is it not responsible, then?
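
A minimal sketch of that point, in Python and with entirely made-up evaluation functions: two deterministic "choosers" face the same set of valid moves but select differently because their internal algorithms differ, while each chooser always returns the same move in the same position.

Code:
def deep_blue_eval(move):
    # Hypothetical stand-in for Deep Blue's utility function: material only.
    return move["material"]

def human_eval(move):
    # A different agent weighs the same options differently.
    return move["material"] + 2 * move["king_safety"]

def choose(valid_moves, evaluate):
    # Deterministic selection: the same evaluator in the same position
    # always returns the same move.
    return max(valid_moves, key=evaluate)

valid_moves = [
    {"name": "Nf3", "material": 0, "king_safety": 1},
    {"name": "Qxd5", "material": 3, "king_safety": -1},
]

print(choose(valid_moves, deep_blue_eval)["name"])  # Qxd5
print(choose(valid_moves, human_eval)["name"])      # Nf3

Same position, same set of valid moves, different internal algorithm, different output; the Laplacian demon can predict both, but only by knowing each chooser's algorithm.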
06-16-2010 , 07:25 AM
Actually, the utility maximization IS completely relevant. You're begging the question that a person wouldn't do the same thing that Deep Blue did for every move. If a person could compute the utility maximizing move at each instance to the same degree as Deep Blue, we would want to say that the person is acting freely but Deep Blue isn't; therefore, we need to distinguish between them, and the difference clearly can't be that Deep Blue is maximizing utility when it acts as it does.

Deep Blue isn't responsible because it's determined. How have I not answered this? It isn't "choosing" since it can't select from disjuncts of 'options': it will only ever select one of them (the output of the algorithm). It's not 'free' to 'choose' alternatives in the actual world.
06-16-2010 , 07:49 AM
Quote:
Originally Posted by durkadurka33
Actually, the utility maximization IS completely relevant. You're begging the question that a person wouldn't do the same thing that Deep Blue did for every move. If a person could compute the utility maximizing move at each instance to the same degree as Deep Blue, we would want to say that the person is acting freely but Deep Blue isn't; therefore, we need to distinguish between them, and the difference clearly can't be that Deep Blue is maximizing utility when it acts as it does.

Deep Blue isn't responsible because it's determined. How have I not answered this? It isn't "choosing" since it can't select from disjuncts of 'options': it will only ever select one of them (the output of the algorithm). It's not 'free' to 'choose' alternatives in the actual world.
According to MY definitions it does have options and it does choose. You claim that my definitions are inconsistent... first you challenged my definition of choosing by arguing that Deep Blue cannot "select from a set of options" because there is only one option, the one it actually chose. I responded by saying that according to my definition its options were all the valid moves (as it is programmed to consider those). While it always ends up selecting the same option, another person would actually choose differently, so we can say that there are different options and that different choosers choose differently depending on who they are (i.e. depending on the algorithm running in their brain). Now, instead of explaining why MY definition of "option" is inconsistent, you merely used YOUR definition to reject my claim that Deep Blue was choosing, i.e. we are back to square one.

Please explain why my definition of option is inconsistent.
06-16-2010 , 08:03 AM
You can't demonstrate that the COMPUTER is choosing just because a person could choose.

That's like saying that a rock can kill a person, so it's responsible for murder because a person can be responsible for murder.

I agree that a person in the same spot as Blue could choose differently than the computer. However, the key point is that the person in Blue's spot could really choose more than one of the 'valid' moves (= disjuncts). This is choice: selection from a range of options. To Blue, there isn't actually a 'range': even though you define a 'range of valid moves,' each of the valid moves is not an 'option' to Blue, since its algorithm will forever only output the same move...so the other valid moves aren't live 'options' to Blue. But, they're live options to the person (probably because the person isn't perfectly rational).
06-16-2010 , 08:23 AM
Quote:
Originally Posted by lagdonk
(Cool, this thread is the current overall SMP forum leader in number of replies across ~190 pages of current and past threads)
Yes, but in number of views, no one can compete with the homework and video threads. (Well, almost no one.)
06-16-2010 , 11:04 AM
Quote:
Originally Posted by durkadurka33
I agree that a person in the same spot as Blue could choose differently than the computer. However, the key point is that the person in Blue's spot could really choose more than one of the 'valid' moves (= disjuncts). This is choice: selection from a range of options. To Blue, there isn't actually a 'range': even though you define a 'range of valid moves,' each of the valid moves is not an 'option' to Blue, since its algorithm will forever only output the same move...so the other valid moves aren't live 'options' to Blue. But, they're live options to the person (probably because the person isn't perfectly rational).
Could we -
a) add a quantum effect to the computer, or
b) acknowledge the hormonal causes in the person,

to get the situations closer?
06-16-2010 , 11:11 AM
That is where things get really difficult for the libertarian. IMO, it's the biggest problem.

Quantum effects (i.e., randomness) can't be sufficient for 'choice.' But hormonal (or whatever chance) effects on actions shouldn't rule out choice. So, how to lay this out explicitly in a libertarian theory is difficult (but I suspect possible).

Kane has given an attempt at the latter (though I think ultimately unsuccessful). There's a good case in Caro's Book of Tells and a similar case in Greenstein's Ace on the River where a given hand could play out in radically different ways if you were to replay the hand given the same initial conditions and throw in some small changes due to randomness (i.e., lots of our decisions involve chaotic influences). But we want to say that the actions in both cases are 'choices' and freely chosen even though there ARE random influences.

Personally, I think of the proper libertarian position as a kayaker in rapids. The kayaker has some but not complete control over their direction. And, if the kayaker were to run the course again, some chaotic or random changes in the water's movement may have drastic impacts on the kayaker's decision (their use of some stimuli rather than others in making a decision), and yet their choices are 'free' even though they are not the 'ultimate' cause of their choices.
06-16-2010 , 11:16 AM
Quote:
Originally Posted by durkadurka33
You can't demonstrate that the COMPUTER is choosing just because a person could choose.
I can, because I am defining "choosing" (and "option") in a way that both computers and people choose.

Quote:
That's like saying that a rock can kill a person, so it's responsible for murder because a person can be responsible for murder.
No, that's the crucial point!
A rock does not choose in the sense Deep Blue chooses or a person chooses.
A rock does not run complicated choosing-algorithms in order to decide where to fall.

Quote:
I agree that a person in the same spot as Blue could choose differently than the computer. However, the key point is that the person in Blue's spot could really choose more than one of the 'valid' moves (= disjuncts).
I am assuming that both the person and Deep Blue act according to deterministic laws.

The person chooses differently because she doesn't run the same algorithm as Deep Blue. That's why there are options and responsibility: different people (or machines) choose different actions, i.e. the outcome depends on the internal decision mechanism of the person/machine.

Quote:
This is choice: selection from a range of options. To Blue, there isn't actually a 'range': even though you define a 'range of valid moves,' each of the valid moves is not an 'option' to Blue, since its algorithm will forever only output the same move...
But so what? Why is it only an option if it might change its mind in exactly the same spot?

I define an option as an action that could be chosen by a different person/machine in exactly the same situation (i.e. all the external states of the world are the same but the internal decision mechanism is different).

What's the problem with this definition and how does it preclude responsibility?

Quote:
so the other valid moves aren't live 'options' to Blue. But, they're live options to the person (probably because the person isn't perfectly rational).
Again, I'm giving you the definition of "option" and "choosing" that I apply in my deterministic model.

I know that you disagree with my definitions, but you have yet to give me a good reason why yours are better than mine.
06-16-2010 , 11:27 AM
Quote:
Originally Posted by durkadurka33
Personally, I think of the proper libertarian position as a kayaker in rapids. The kayaker has some but not complete control over their direction. And, if the kayaker were to run the course again, some chaotic or random changes in the water's movement may have drastic impacts on the kayaker's decision (their use of some stimuli rather than others in making a decision), and yet their choices are 'free' even though they are not the 'ultimate' cause of their choices.
Which part of a person's mental makeup would the kayaker represent? Isn't that part, call it D, also another set of rapids running in their D-node?
06-16-2010 , 01:03 PM
Quote:
Originally Posted by MrBlah
A rock does not choose in the sense Deep Blue chooses or a person chooses.
A rock does not run complicated choosing-algorithms in order to decide where to fall.
Consider the following rolling ball sculpture.

http://www.youtube.com/watch?v=5c777kLwG4U

Even though there is a single entry point for the marbles, there are actually gates which allow the marbles to take one of three paths. You can imagine a much more complex machine of this type involving thousands of gates and millions of paths.

Would you say that this hypothetical gigantic machine is choosing the paths of the marbles?

Would you say that the machine with only three paths is choosing?

What if there were only two paths?

What if there were two paths, but the machine is set up in such a way that it never takes one of those paths?

If these machines are not making decisions, what is the essential difference between Deep Blue's decision-making algorithms and this machine's decision-making algorithms?

(If you believe this machine is making decisions, then I think there is probably a fundamental disagreement on the actual nature of "making a decision.")
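
A minimal sketch of the gate idea, in Python with made-up gate settings: each gate deterministically routes the marble, so the full path is fixed by the machine's configuration, and a "stuck" gate means one path exists in principle but is never taken.

Code:
def run_marble(gates):
    # 'gates' is a list of booleans; True routes the marble left, False right.
    # The same gate settings always produce the same path.
    return ["left" if gate else "right" for gate in gates]

print(run_marble([True, False, True]))  # ['left', 'right', 'left']

# The stuck-gate case: the 'right' branch exists in the hardware but is never taken.
print(run_marble([True, True, True]))   # ['left', 'left', 'left']

Scaling this up to thousands of gates changes the complexity, not the kind of mapping: the configuration plus the input fully determine the output.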
06-16-2010 , 01:21 PM
Quote:
Originally Posted by luckyme
Which part of a person's mental makeup would the kayaker represent? Isn't that part, call it D, also another set of rapids running in their D-node?
Only if you think that mental states are completely physically determined brain states.

I think that mental states are emergent properties that are causally efficacious.
06-16-2010 , 03:58 PM
So you think progress in neuroscience may eventually shed some light on the validity of this conception of yours that mental states are emergent and/or not completely physically determined, or is this not an empirical matter?
06-16-2010 , 04:16 PM
Quote:
Originally Posted by Aaron W.
Consider the following rolling ball sculpture.

http://www.youtube.com/watch?v=5c777kLwG4U

Even though there is a single entry point for the marbles, there are actually gates which allow the marbles to take one of three paths. You can imagine a much more complex machine of this type involving thousands of gates and millions of paths.

Would you say that this hypothetical gigantic machine is choosing the paths of the marbles?

Would you say that the machine with only three paths is choosing?

What if there were only two paths?

What if there were two paths, but the machine is set up in such a way that it never takes one of those paths?

If these machines are not making decisions, what is the essential difference between Deep Blue's decision-making algorithms and this machine's decision-making algorithms?

(If you believe this machine is making decisions, then I think there is probably a fundamental disagreement on the actual nature of "making a decision.")
Of course it is choosing.
The algorithm is simple and crude and hardly has anything to do with the complex kind of choices that humans are capable of making, but a choice is made.
06-16-2010 , 04:31 PM
Quote:
Originally Posted by lagdonk
So you think progress in neuroscience may eventually shed some light on the validity of this conception of yours that mental states are emergent and/or not completely physically determined, or is this not an empirical matter?
Not an empirical matter.
06-16-2010 , 04:57 PM
Quote:
Originally Posted by MrBlah
Of course it is choosing.
Is it choosing all the way down to a single gate?

Is it choosing if there is a single gate that is stuck in one position?

Quote:
The algorithm is simple and crude and hardly has anything to do with the complex kind of choices that humans are capable of making, but a choice is made.
I'm still trying to figure out the difference between the "choice" the machine is making and the "choice" that a falling rock is NOT making. I'm trying to take away the "complexity" issue and reduce the problem to its most basic form. What makes it a "choice"?
06-16-2010 , 08:35 PM
Quote:
Originally Posted by durkadurka33
Not an empirical matter.
In this case, the research is promising but limited at this point. Searle's great contribution to psychology is pointing out that it was ignoring consciousness, and irritating a generation of psychologists enough that they started working on it.

The problem will always hold that scientific theories are models of reality, not reality itself. That sounds nitty, but it is very important.
06-16-2010 , 09:00 PM
Quote:
Originally Posted by durkadurka33
Deep Blue isn't responsible because it's determined.
This got glossed over. It is the main (only?) point that favors libertarianism over determinism that has been established here through argument. We have an intuitive sense of responsibility.* Free will is necessary for this concept (responsibility) to have teeth.

Quote:
How have I not answered this? It isn't "choosing" since it can't select from disjuncts of 'options': it will only ever select one of them (the output of the algorithm). It's not 'free' to 'choose' alternatives in the actual world.
Stop arguing as if everyone were a compatibilist! Good rule of thumb, but at least give the benefit of the doubt.

A decision tree is a method of making a certain decision. The decision tree is determined by definition. Notice that the word decision still appears there, in a fully deterministic context. You are arguing definitions, I think (unless he is a compatibilist).
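
A minimal sketch of that point, with hypothetical utilities: the little Python tree below "decides," yet the same input always yields the same output, so the decision is fixed by the tree plus its input.

Code:
def decide(utility_a, utility_b):
    # A tiny deterministic decision tree.
    if utility_a > utility_b:
        return "take option A"
    if utility_b > utility_a:
        return "take option B"
    return "tie-break: take option A"

print(decide(5, 3))  # always 'take option A' for these inputs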

* I would argue that this sense of responsibility is like the color blue. Blue itself doesn't technically exist. Just trust me on this if you are weak on perception theory.**

**I just used myself in an appeal to authority fallacy.
06-16-2010 , 09:20 PM
Quote:
Originally Posted by Aaron W.
Again, time is seemingly plentiful.
Just as an example of what I would like to bring to the table on consciousness:

http://www.theassc.org/files/assc/ps...DITpage1-8.pdf

Don't read it all (unless you are really bored), just peruse for enjoyment's sake. It (that particular article) doesn't specifically play into my argument as anything more than background.

And that is just on perception, which is simple. The stuff on consciousness is way more dense.

This is fun for me, but a lot of work. It is kind of like trying to explain matrix calculus to a class of 2nd graders who just learned addition.

The, "this is fun" part holds as the important part of the previous paragraph.

The best I am probably capable of is general argument with further reading as links.
06-17-2010 , 12:38 PM
Quote:
Originally Posted by durkadurka33
However, the Deep Blue case seems importantly different. The computer could only select the utility maximizing option (and if there are equal options it will use a pseudorandomizing function). I'd like to reserve 'decision' for where there are genuine options...so let's not use that word.
This is just a straight admission that you are using your special definitions simply because you would like to reserve these words for special cases.

Quote:
Act? Deep Blue 'acts' when it makes one move instead of another...but it could only have picked the one it was going to pick because it was determined to do so.
It picks out one course of action from a range of options. It chooses.

Quote:
Deep Blue is the reason why Kasparov loses, but it's not morally responsible. So, it would only be 'responsible' in the loose sense...but not in the specific sense that we're using wrt free will and responsibility.
Again you basically admit your guilt here. You are not using the ordinary sense of "responsible," instead you are using your own special specific sense of the term.

But the compatibilists have never claimed that determinism allows responsibility in that sense. Only that determinism allows responsibility (in the "loose sense").
06-17-2010 , 12:41 PM
Quote:
Originally Posted by durkadurka33
Ignorance and illusion are not sufficient for freedom and responsibility.
Picking one move from a range of moves is sufficient for choice and responsibility.

You yourself even (explicitly) acknowledged the latter in your above post. You admitted that Deep Blue is responsible for Kasparov's loss.
06-17-2010 , 12:55 PM
I agree that picking one from a range of options is sufficient for choice...but there's no 'range' in a deterministic system. That's been my argument.
06-17-2010 , 12:56 PM
Quote:
Originally Posted by madnak
This is just a straight admission that you are using your special definitions simply because you would like to reserve these words for special cases.
Actually, no: you're confusing effect for cause.
