Quote:
Originally Posted by David Sklansky
A utilitarian would not just consider the number of fatalities except in the almost impossible eventuality that there was no other information.
Meanwhile the trolley problem isn't even about that. It is about whether an individual is being unethical if he switches things so that the one group of deaths are less of a tragedy GOING UNDER THE ASSUMPTION THAT EVERYONE AGREES THAT ONE GROUP OF DEATHS IS INDEED MORE OF A TRAGEDY. Regardless of what philosophy you are using. That could even mean that the decision involves switching to a group with a higher number of people.
Given the above, I maintain that not switching to the less tragic result MUST be the less ethical choice for the non religious. To show that, imagine a psychopath who you somehow know is not lying. He tells you he will use the gun he is holding to kill either everyone in group A or everyone in group B, depending on who you choose. If you don't choose, everyone dies. Reluctantly you choose the group whose deaths are unquestionably less tragic (e.g., 90-year-old terminal cancer patients versus a bunch of children).
But now change the problem slightly. Suppose he tells you that he had ALREADY DECIDED to kill the children but that he will SWITCH if and only if you tell him to. There are non religious posters on this thread who are making a distinction between the scenarios and arguing that in the second case (but presumably not the first) you should say nothing merely because the outcome had been decided without your input. (Actually I'm sure that even most religious people would say such a stance is wrong.)
I agree with your first point. A utilitarian might argue that Einstein's life is worth more than the lives of 100 welfare recipients. I was simply focusing on the number of fatalities to avoid adding another layer of complexity.
As for your example, I do think there is a subtle difference between the two scenarios. I won't go so far as to draw any absolute conclusions, but the first scenario is basically an attempt to mitigate a tragedy that cannot be avoided. People are going to die, and you're just trying to minimize the loss.
In the second scenario, the issue lagtight brought up on the first page could be relevant.
To illustrate the point, suppose there are two groups of three people, and that the deaths of either group would be almost equally tragic, except that the deaths of group A would be .00000001% more tragic.
Now suppose group A is hang-gliding, while group B is enjoying a picnic in a park. Group A is about to crash, and we are presented with the choice of using group B to break the fall, on the assumption that this will certainly save group A while killing group B.
Now I realize this is sort of a dumb example, but isn't it relevant that group A has knowingly chosen to engage in a risky activity? If the deaths of either group are almost equally tragic, wouldn't you agree that group A should suffer the natural consequences of their own actions?
I think it would be wrong in this scenario for us, as a third party, to intervene to save group A at the expense of group B, even after accepting that the deaths of group A would be ever-so-slightly more tragic.