Cliff's Notes: argue about whether Moore's law is a valid extrapolation and whether strong AI is possible, not the eschatological stuff
Long version:
The criticisms of Kurzweil in this thread are of a completely inappropriate magnitude.
Here's an example:
Quote:
Originally Posted by durkadurka33
Refute the flying spaghetti monster.
Same sort of thing.
There are several comments in this thread that imply that Kurzweil's predictions are as bad as Christian / New Age eschatology.
These criticisms are terrible.
Here is an example:
Quote:
Originally Posted by durkadurka33
Except that it's not falsifiable.
It will come at 2015.
2015 comes and goes...it'll happen at 2025.
2025 comes and goes...it'll happen at 2050...
etc.
He'd never stop making the prediction that it's "just around the corner" or whatever.
Hack.
This is a hypothetical ad hominem attack built on the guilt-by-association fallacy.
You're saying "Kurzweil looks like all the other 'doom sayers,' and false prophets readjust their dates to the future when they're proven wrong, so Kurzweil will do that too."
There are at least a dozen similar posts in this thread, including those which suggest Kurzweil is a "delusional liar," or that his claims belong at the "bottom of the 2012 slagheap."
Very little of this thread actually addresses the weak and obviously contestable points in his argument.
His argument is:
Step 1) Exponential growth will continue, and reach a point x in the future
Step 2) At point x, we can simulate brains
Step 3) Prophet
Durkadurka is one of the only critics to have addressed step 1 - he says that this is "a fallacy of extrapolation." Unfortunately, his argument ends there. (Hardball47 also makes this point; Princeofpokerstars says something similar.)
He does not provide an explanation for why Moore's law has been a good predictor for decades after its formulation, nor does he provide any theoretical basis for why Moore's law will end (some would argue it already has).
There is no explanation for why Kurzweil can't extrapolate here.
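To be clear about what kind of extrapolation step 1 actually is: a quantity doubling at a fixed interval, projected forward. A minimal sketch (the starting figure and doubling period below are assumptions chosen for illustration, not Kurzweil's own data):

```python
def moore_projection(start_year, start_count, target_year, doubling_years=2.0):
    """Project a transistor count forward assuming a fixed doubling period,
    as Moore's observation (roughly: density doubles every ~2 years) implies."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Projecting from a hypothetical 1 billion transistors in 2010:
# ten years = five doublings = a 32x increase.
print(moore_projection(2010, 1e9, 2020))  # 3.2e10
```

Whether this projection keeps holding is exactly the contested assumption; the point is that "fallacy of extrapolation" is an assertion, not an argument, until someone says why the doubling period must break down before point x.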
Step 2 has been argued a few times in this thread, but again it is a minority of the criticisms.
Durkadurka brings this up:
Quote:
Originally Posted by durkadurka33
I think that it's possible that the human mind can do things that turing machines can't...you also have to recognize that his predictions depend on computational functionalism being true, which it may not be.
This is a very brave claim (that the mind can do things machines can't).
I am pretty sure it's a minority belief in neuroscience right now. Dualism is dead.
One poster brought up Searle, but there isn't an argument in this thread for why strong AI isn't possible. In fact, from what I can tell, only Durkadurka seems to have explicitly stated that it might be impossible.
Step 3 is where the majority of criticism is directed. The most interesting criticism related to this step is the "I Have No Mouth, and I Must Scream" style of argument - namely, that the outcomes of the singularity are gonna suck. It is, however, a completely invalid argument (it's a strawman). Kurzweil's usage of the word singularity is meant to capture the unpredictable nature of the event (light doesn't escape a black hole, so you can't find out what's inside). His very argument is that we will not be able to predict the changes, just that the changes will be near-inconceivable. He does not predict a utopian transhumanist future - he predicts a very unpredictable future.
The distribution of argument doesn't make any sense from a logic point of view, and it is certainly inconsistent with how we examine other predictions of similar magnitude.
Predictions on such a macroscopic scope are not always cult-inspired garbage.
Global climate change alarmists, for instance, extrapolate from data and propose a mechanism to arrive at massive claims, ranging from "the arable land on Earth will change location, causing global warfare as populations move to follow the reallocated resources" to "the planet will be flooded and engulfed in an endless series of super-disasters, along with global warfare and terrifying outbreaks of disease, famine, etc."
Although climate change is obviously a topic for another thread, I'd like to point out that the debate tends toward far more civil discourse about the assumed mechanisms of climate change and the extrapolations of the relevant variables (CO2 emissions, etc.).
Climate change is far from being the only monster-sized prediction. In political science, we see a spectrum of plausible scenarios ranging from Fukuyama's "The End of History" to Friedman's "The World is Flat." You won't see any legitimate intellectual dismiss Fukuyama's theory on its parallels to silly predictions, despite the book's shocking title.
I suspect this disconnect is due to Mitchell Kapor's quote on the singularity:
"It's intelligent design for the IQ 140 people. This proposition that we're heading to this point at which everything is going to be just unimaginably different - it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me, no matter what numbers he marshals in favor of it. He's very good at having a lot of curves that point up to the right."
This is an elegant and poignant piece of rhetoric, and I think it's responsible for a great deal of the anti-Kurzweil sentiment in the (armchair?) intellectual community. Kapor's criticism amounts to a position taken by a smart and successful man with no exceptional scientific or philosophical ability. His qualifications are roughly equivalent to Kurzweil's (elite undergraduate degree, success in industry).
Who can argue with someone who is willing to criticize the intelligence of people with an IQ over 140? Gee, he must have an IQ waaaay higher than that!