"The Singularity Is Near" by Ray Kurzweil, How Close??

08-21-2010 , 06:26 AM
Quote:
Originally Posted by RigMeARiver
Sorry, there's an opportunity cost of holding something inside one's brain?
IMO, lack of truth is an opportunity cost. Also, while the possibility of some form of nihilism may be unpalatable, many of the greatest minds (and many lesser ones) have been able to deal with it--one might even call doing so an advanced form of intellectual maturity.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-23-2010 , 04:20 AM
Quote:
Originally Posted by Plancer
Cliffs: There are very few valid arguments about the singularity in this thread. The invalid arguments tend to be ad hominem attacks. Among the valid arguments, the futurists have powerful ones - mainly, modern AI doesn't have the same weaknesses as the old Strong AI. Our claim is that Searle's argument inadequately deals with the brain simulator reply. Also, we predict brain research will allow brain simulation without an understanding of the brain. Lastly, the critiques of transhumanism by MM / Rollos are both powerful and relevant.

The thread so far (in chronological order)
  1. Extrapolation makes this concept invalid. (hardball47, durkadruka, PrinceOfPokerstars)
  2. Kurzweil resembles an eschatological cult, and will be immune to evidence. (durkadurka)
  3. Kurzweil's nontech predictions are bad (econophile, Tom Crowley)
  4. Kurzweil is beyond absurd (jb9)
  5. Kurzweil is a product of survivorship bias (dd33)
  6. Kurzweil is a delusional liar with arguments worse than those given by creationists (TomCrowley)
  7. Kurzweil underestimates political, economic, and social developments. (one sentence itt. Seriously, one sentence)
  8. Kurzweil's ideas are as refutable as the FSM (dd33)
  9. There is a difference between computers and machines (dd33)
  10. Strong AI might be impossible. (dd33, jb9(?), river_tilt, hardball_47, vixticator)
  11. Understanding of the brain will delay the singularity (or make it impossible). (ctyri)
  12. Kurzweil's predictions had a different probability of occurring than his prediction of the singularity (Max Raker, MM).
  13. Futurists and science fiction writers tend to extrapolate existing technological trends too aggressively while missing unforeseen developments (MM)
  14. Cybernetic totalism has become too powerful an ideology among the world's technocrats, and it has terrifying consequences. (Jaytee, who linked an excellent essay)
  15. Kurzweil is friends with a homeopath and believes in whack-job medicine.
  16. The singularity is not desirable (Ironlaw, Rollos)
  17. The singularity is just Rapture for atheists (A_c_slater)
  18. Kurzweil is intellectually dishonest, as demonstrated by his charts (Jaytee)
  19. Futurists provide an intellectually lazy way of evading the human condition. (MM, Rollos, endorsed by me (Plancer))
  20. No one is qualified to predict technology, but people who lick Kurzweil's testicles make an exception for him. (dd33)
  21. Computational complexity will be a relevant hurdle to arriving at the singularity (Max Raker)
  22. Kurzweil's tech track record is terrible (Tom Cowley, Jaytee)
  23. The world's governments will prevent the singularity (Skeletori)
  24. Gains in AI aren't AI, technologism is faith based, and technologism will worsen the alienation the third world feels when faced with modernization. (econophile, brandx)
  25. PZ Myers doesn't like Kurzweil (dd33, jb9)

Obviously valid arguments against the singularity: 1, 7, 10, 11, 21? (I disagree), 23?
Valid arguments against futurist philosophy: 12, 13, 14, 16, 19

The Standard Reply to Rational Arguments:
1) (Exponentials) We have evidence that Moore's law has a few more doublings left (prototypes exist for transistors of appropriate size), allowing supercomputers to breach the singularity threshold with great ease. There are many avenues of research for moving to a new substrate once we exhaust silicon. Very little of the discussion in this thread has been about the likelihood that Moore's law will continue, but it's a valid criticism. (A back-of-the-envelope sketch of the doubling arithmetic follows this list.)
7) (Political / Economic / Social changes) This argument is undeveloped in both camps.
10) (Strong AI) Searle's argument doesn't answer the brain simulator reply. Modern AI advocates are radically different from the Strong AI crowd Searle was attacking - Searle was showing that the plan to bypass studying the brain and skip straight to a conscious machine was suspect. Modern AI advocates (Kurzweil among them) want to reverse engineer the brain. In short, a great many modern AI thinkers are looking to gains in neuroscience to advance their field.
11) (Understanding the brain is a major hurdle) Good point, but we disagree on how solvable this problem is. We expect gains in brain research to generate a connectome in our lifetimes. We plan to brute-force brain simulation using data acquired via imaging, without needing an intuitive understanding of how the brain works. Although I don't know if we've explicitly stated it, I think it is safe to say that futurists do not anticipate an understanding of the brain in our lifetimes, or in any lifetime. We don't need to understand it, in the same sense that we don't need to understand the mathematics of network complexity in order to evolve networks.
12) (the probability of Kurzweil being correct about the singularity is very different from him being correct about a technical development) Good point, but Kurzweil is the cheerleader, not the football team. He coined the term and mobilized interest, but the possibility of a singularity is independent of him.
13) (Futurists exaggerate existing trends and are blind to unexpected changes) This is true, but the basis of modern futurism is a prediction of exponential growth in IT, whereas past futurism expected the technologies of the elite to become mainstream (space travel) and aggressively expected automation to replace all forms of labor (an extrapolation of "profession x was automated, so all professions will be automated"). Modern futurism's Achilles' heel is Moore's law, not the social acceptance or economic viability of technology.
14) (Critiques of Cybernetics) This is an excellent point, but aside from a few posts by Rollos and MM, has largely not been discussed in this thread.
16) (The Singularity is not desirable) This may be true, but the word "singularity" captures the ambiguous nature of the event. Information doesn't escape a black hole, and hence we can't form judgements. The fact that some people are predicting a transformative future with consequences we can't predict is adequate grounds for a critique, but the critique might be as powerless as a critique of the internet (a powerful, disruptive technology which is fundamentally remolding human civilization and the balance of power between the first and third worlds, but is beyond control).
19) (Futurism is a form of escapism, allowing you to evade the human condition) Good point.
21) (Complexity) Singularity types don't predict a solution to computational complexity, and we don't have any predicted developments contingent on one.
23) (The singularity will be prevented) Interesting, but this area hasn't been discussed.
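
Regarding (1), here's the back-of-the-envelope sketch. Both numbers are assumptions, not established facts: ~10^16 calculations per second is Kurzweil's own estimate for functional brain simulation, and the clean two-year doubling is an idealized Moore's law.

Code:
import math

# Back-of-the-envelope: doublings between ~2010 supercomputers and an
# assumed brain-scale compute threshold. Both figures are assumptions:
# ~10^16 calc/sec is Kurzweil's estimate for functional brain
# simulation; 2e15 FLOPS is roughly the 2010 supercomputing peak.
BRAIN_SCALE = 1e16
CURRENT_PEAK = 2e15
DOUBLING_PERIOD_YEARS = 2.0   # idealized, assumed Moore's-law doubling

doublings = math.log2(BRAIN_SCALE / CURRENT_PEAK)
print(f"{doublings:.1f} doublings, ~{doublings * DOUBLING_PERIOD_YEARS:.0f} "
      f"years if the trend holds")
# ~2.3 doublings, ~5 years. The contentious part is whether the trend
# holds and whether the threshold is real, not the arithmetic.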


Responses to less reasonable arguments
2) (Eschatological comparisons) ad hominem, predicated on something he hasn't done yet
3) (Nontech predictions) irrelevant
4) (Kurzweil sucks, yo) ad hominem
5) (Survivorship bias) irrelevant (attacks Kurzweil, not the idea). Furthermore, I (Plancer), Karganeth, and a few others claim to have shown that his predictions are distinctly different from those which could be a product of survivorship bias. The crux of the argument is that survivorship bias is an artifact of the binomial distribution when the probability of a Bernoulli trial's failure is high enough to matter. Karganeth, myself, and a few others claim that randomly generated Bernoulli trials should produce a strong track record with vanishingly small probability, due to combinatorics (the monkeys-typing-Shakespeare argument). (A Monte Carlo sketch of this point follows this list.)
6) (Delusional liar) ad hominem, beyond the pale
8) (FSM) irrelevant groundless vitriol
9) (Difference between computers and machines) Brain simulator reply
15) (Kurzweil is bad at medicine) irrelevant (though true) ad hominem
17) (Rapture) Nu uh.
18) (Kurzweil's charts) You said that the charts could be constructed at any point in human history. Kurzweil would agree, because d/dx e^x = e^x. ZING.
20) (Testicle licking) We need the salt. I mean, uh, that's an irrelevant ad hominem that's beyond the pale.
22) (Tech predictions) irrelevant to singularity. That being said, this is VERY controversial.
24) (Gains in AI aren't gains in AI; technologism is faith-based) The first is a true but irrelevant statement, and it does not apply to futurists. It applies to science journalists, whom I'm pretty sure we all dislike. I think I refuted the second point.
25) (PZ Myers) I think I obliterated this argument.
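
Regarding (5), here's a minimal Monte Carlo sketch of the combinatorial point. The per-prediction success probability, the number of predictions, and the number of competing "futurists" are made-up parameters for illustration.

Code:
import random

# N random guessers each make K yes/no predictions, each correct with
# probability p (all made-up numbers). If survivorship bias alone could
# mint a "visionary", many guessers would post an impressive record.
random.seed(0)
N, K, p = 10_000, 20, 0.3
THRESHOLD = 15   # an "impressive" record: at least 15 of 20 correct

lucky = sum(1 for _ in range(N)
            if sum(random.random() < p for _ in range(K)) >= THRESHOLD)
print(f"{lucky} of {N:,} random guessers look impressive")
# P(>= 15/20 at p = 0.3) is on the order of 4e-5, so even 10,000
# guessers produce an impressive record less than once on average.
# Whether Kurzweil's actual hit rate clears such a bar is the
# contested empirical question.
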
Definitely the best post in the whole thread. The point is not to find out what is good or bad, but to try and figure out the WHY. Contrary to popular opinion, philosophy is not about finding the ultimate "TRUTH", but about WHY there is or isn't a truth or "untruth".
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-23-2010 , 04:56 AM
Likewise, I think the singularity problem poses a question of "WHY" rather than "WHEN". If you think the singularity is going to come no matter what, I pose to you the question of WHAT it would improve and WHY. This is where futurists fail, because they automatically assume that technological advancement = good. Yet they also seem to assume they know what intelligence is, when philosophy (and CS too, miserably) can't even get a firm grasp on it.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-23-2010 , 05:13 AM
http://forumserver.twoplustwo.com/47...-world-201925/ This is the kind of "reporting" that "philosophers" should be repulsed by.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-23-2010 , 11:37 AM
Quote:
Originally Posted by Rollos
Likewise, I think the singularity problem poses a question of "WHY" rather than "WHEN". If you think the singularity is going to come no matter what, I pose to you the question of WHAT it would improve and WHY. This is where futurists fail, because they automatically assume that technological advancement = good. Yet they also seem to assume they know what intelligence is, when philosophy (and CS too, miserably) can't even get a firm grasp on it.
why? because we really want this stuff - economic demand is ultra-massive.

what for? same thing, really - to satisfy the above.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-23-2010 , 12:16 PM
Quote:
Originally Posted by durkadurka33
Cliff:
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 03:32 AM
Nice summary Rollos, thanks. Let me add this possibility to the discussion:

26. The Singularity is irrelevant, because technological civilization will collapse long before we get there.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 12:25 PM
Quote:
We have evidence that Moore's law has a few more doublings left (prototypes exist for transistors of appropriate size), allowing supercomputers to breach the singularity threshold with great ease.
This is the heart of one of Kurzweil's and futurists' errors.

You do not know what the singularity threshold is. Kurzweil rather erroneously assumes that there will be ample computing power to simulate millions of brains simultaneously, so we'll have bundles and oodles of power and it'll be jazz. But this thinking is so sloppy.

Brains are not just simply connecting, firing neurons. They are vast assemblies of interconnecting neurons that change their own architecture in response to learning, in hundreds of ways that we simply don't understand. Intelligence on unchanging hardware may well be an NP-type problem - impossible no matter how much computational power is available. What we will likely be left with in creating an intelligence is not a problem of raw power, but an extremely complex and messy engineering problem that will take many decades if not centuries. And we will run into many of the same grind-to-a-halt, exponentially increasing complexity problems that we run into in software.

It's worth going back to the absurdly optimistic predictions of the 50s and 60s. The very intelligent and technically capable people of those days look like fools now. It is quite obvious that we won't have a "singularity" in Kurzweil's lifetime, and probably not in ours.

Also, I submit that Moore's law (and many other exponentials) has been driven in part by exponentially increasing population, which has driven exponentially increasing economies, helped greatly by an abundance of low-hanging fruit. That will come to an end shortly. The importance of individual breakthroughs and the extent of paradigm shifts in many fields have declined each decade since the 50s.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 01:57 PM
Quote:
Originally Posted by ManaLoco
Brains are not just simply connecting, firing neurons. They are vast assemblies of interconnecting neurons that change their own architecture in response to learning, in hundreds of ways that we simply don't understand. Intelligence on unchanging hardware may well be an NP-type problem - impossible no matter how much computational power is available. What we will likely be left with in creating an intelligence is not a problem of raw power, but an extremely complex and messy engineering problem that will take many decades if not centuries. And we will run into many of the same grind-to-a-halt, exponentially increasing complexity problems that we run into in software.
pshaw... all you futurists are alike.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 05:09 PM
Quote:
Originally Posted by luckyme
pshaw... all you futurists are alike.
Not sure if you are kidding. I think his statement is entirely accurate.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 06:11 PM
Quote:
Originally Posted by ManaLoco
Intelligence on unchanging hardware may well be an NP-type problem - impossible no matter how much computational power is available.
I take serious issue with the quoted sentence. The language of computational complexity doesn't have a place here (we are not seeking any solutions, hence P / NP / etc. don't apply). If you were using it rhetorically to illustrate that some problems can't be solved by exponential growth, then I will point out that there is no biological evidence that brain simulation is one of those problems. This may be due to our lack of understanding of the brain.

Here is a biological scenario which would be impossible to model:
1) Every neuron has an internal state which, from one time step to the next, is determined by the status of every other neuron (and by its own previous internal state).

Again, there is no evidence that this is the case. It appears that a neuron's internal state is usually determined by the inputs of roughly 1 to 10,000+ other neurons, which is a small enough fan-in that there will be no prohibitive scaling problems (see the sketch below).

There is a fundamental difference between simulating and solving.
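
Here's a minimal sketch of that scaling point. The neuron model (a bare leaky integrator with random external drive) and the sizes (N = 1,000 units, fan-in = 100) are illustrative stand-ins, not claims about real cortical dynamics.

Code:
import random

# Time-stepped simulation cost scales with the number of synapses
# (N * fan-in), not with N^2, when connectivity is sparse. The "neuron"
# here is a bare leaky integrator -- an illustrative stand-in only.
random.seed(1)
N, FAN_IN = 1_000, 100
THRESHOLD, LEAK, WEIGHT = 1.0, 0.9, 0.02

# Each unit listens to a fixed random subset of the others.
inputs = [random.sample(range(N), FAN_IN) for _ in range(N)]
state = [random.random() for _ in range(N)]

def step(state):
    """One synchronous update: cost is O(N * FAN_IN), not O(N^2)."""
    fired = [s > THRESHOLD for s in state]
    return [
        LEAK * state[i]
        + WEIGHT * sum(fired[j] for j in inputs[i])
        + random.uniform(0.0, 0.15)   # illustrative external drive
        for i in range(N)
    ]

for _ in range(10):
    state = step(state)
print(f"{sum(s > THRESHOLD for s in state)} of {N} units above threshold")
# Doubling N doubles the work; it doesn't square it. The all-to-all
# scenario in (1) above is what would break this scaling.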

Quote:
Originally Posted by ManaLoco
...
Also, I submit that Moore's law (and many other exponentials) have been driven in part by exponentially increasing population, which has driven exponentially increasing economies, helped greatly by many low hanging fruit. That will come to an end shortly. The importance of individual breakthroughs and the extent of paradigm shifts in many fields has declined each decade since the 50s.
I agree, and I think that this might be one of the greatest barriers to the singularity.

It is plausible to me that humanity will reach a technological peak in a few decades, simply because Earth's population will shrink to a point where it is unable to have a large enough pool of human capital. Earth's population has had a negative second derivative for about a decade now, and the demographics of the first world point to a serious talent decline. We may be in serious danger of having a small enough population of engineers / scientists / etc. that we will be unable to afford enough specialists to continue advancing fields where we've already harvested the low-hanging fruit.

A serious issue is simply aging - many engineering industries (chemical, electrical) are having talent shortages simply because the boomers are getting too old to work, or to want to work.

One obvious problem a lot of futurists overlook is that medical advancement doesn't provide some sort of mysterious boon to humanity's production possibility curve, nor does it make a long-term change in the death rate - it just unlocks the next disease (which will be an expensive degenerative disease) and turns it into the primary killer (think of it as Kirchhoff's law for death - cohort in must equal cohort out if we're in equilibrium). So when we cure cancer, stroke, and heart disease, we will see hundreds of millions of 80-year-olds suffering from neurodegenerative diseases. Ironically, cancer could cost us more when cured than while we are researching it. And for those of you offended that I'd be willing to describe a cure for one of the most horrible diseases so callously, let me rephrase it as "after cancer is cured, we may never explore space."
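
Here's a toy cohort model of that cohort-in-equals-cohort-out point; the hazard rates and the per-death care cost are made-up numbers purely for illustration.

Code:
# Toy competing-risks model of "curing disease A just unlocks disease B".
# All numbers (hazard rates, costs) are made up for illustration.
COHORT = 100_000.0
HAZARD_A = 0.04             # annual death rate from disease A (e.g. cancer)
HAZARD_B = 0.02             # annual death rate from disease B (degenerative)
COST_PER_B_DEATH = 500_000  # assumed care cost of dying from B

def deaths_from_b(cure_a: bool) -> float:
    """Follow the cohort for 80 years; count deaths attributed to B."""
    alive, deaths_b = COHORT, 0.0
    hazard_a = 0.0 if cure_a else HAZARD_A
    for _ in range(80):
        deaths_b += alive * HAZARD_B
        alive *= 1.0 - hazard_a - HAZARD_B
    return deaths_b

for cure in (False, True):
    d = deaths_from_b(cure)
    label = "A cured " if cure else "no cure "
    print(f"{label}: {d:9,.0f} die of B, care cost ${d * COST_PER_B_DEATH:,.0f}")
# Cohort in = cohort out either way; curing A shifts its share of
# deaths (and spending) onto B, and B is the expensive way to die.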
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 06:21 PM
Quote:
Originally Posted by simplicitus
Not sure if you are kidding. I think his statement is entirely accurate.
That's allowed.
I'm sure there is some technical reason why a computer will never beat a world chess master.
But I wasn't kidding. All futurists attempt to predict the future, dealing in unknown unknowns while claiming that they know. If "may well be" plus "likely be left" adds up to "we will run into many...", then you're on solid ground.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 06:26 PM
Quote:
Originally Posted by Plancer
It is plausible to me that humanity will reach a technological peak in a few decades, simply because Earth's population will shrink to a point where it is unable to have a large enough pool of human capital. Earth's population has had a negative second derivative for about a decade now, and the demographics of the first world point to a serious talent decline. We may be in serious danger of having a small enough population of engineers / scientists / etc. that we will be unable to afford enough specialists to continue advancing fields where we've already harvested the low-hanging fruit.

A serious issue is simply aging - many engineering industries (chemical, electrical) are having talent shortages simply because the boomers are getting too old to work, or to want to work.
I am not worried about lack of talent or technological progress at all. I am particularly not worried about the population of the earth shrinking.

Quote:
Originally Posted by Plancer
One obvious problem a lot of futurists overlook is that medical advancement doesn't provide some sort of mysterious boon to humanity's production possibility curve, nor does it make a long-term change in the death rate - it just unlocks the next disease (which will be an expensive degenerative disease) and turns it into the primary killer (think of it as Kirchhoff's law for death - cohort in must equal cohort out if we're in equilibrium). So when we cure cancer, stroke, and heart disease, we will see hundreds of millions of 80-year-olds suffering from neurodegenerative diseases. Ironically, cancer could cost us more when cured than while we are researching it. And for those of you offended that I'd be willing to describe a cure for one of the most horrible diseases so callously, let me rephrase it as "after cancer is cured, we may never explore space."
However, I think this is very perceptive and you are basically correct here.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 06:56 PM
Quote:
Originally Posted by luckyme
That's allowed.
I'm sure there is some technical reason why a computer will never beat a world chess master.
what?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-26-2010 , 06:58 PM
What some of you are talking about is the diminishing returns on complexity problem, which Joseph Tainter considers the primary cause of civilizational failure. Basically you have to expend more and more resources just to maintain your increasingly complex civilization, until finally the whole thing breaks.
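
Here's a toy version of Tainter's curve; the square-root benefit function and the linear cost constant are made up purely for illustration.

Code:
# Toy Tainter model: benefits of complexity grow sublinearly while
# maintenance costs grow linearly. The functional form (sqrt) and the
# cost constant are made up for illustration.
def net_benefit(complexity: float) -> float:
    return complexity ** 0.5 - 0.1 * complexity

for c in (1, 10, 25, 50, 100, 200):
    print(f"complexity {c:3d}: net benefit {net_benefit(c):6.2f}")
# Net benefit peaks (here at complexity 25) and then declines; past
# the peak, each added layer of complexity costs more than it returns.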

There's actually a pretty strong case that we're in the diminishing-returns phase of our civilization, not the exponential-growth phase (measured, for example, by the rate of inventions, which peaked something like a century ago). Kurzweil's worship of exponentials is pretty silly anyway, because in nature exponential curves either level off (best case) or crash. Infinite exponential growth doesn't actually lead to a singularity mathematically (you need hyper-exponential growth for that), and either way the idea is a fantasy that doesn't exist in nature.
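
To spell out the math on exponentials vs. singularities (standard ODE facts; the hyperbolic law is the usual textbook example of finite-time blow-up):

Code:
\frac{dx}{dt} = kx \quad\Rightarrow\quad x(t) = x_0 e^{kt}
    (exponential growth: finite at every finite t)

\frac{dx}{dt} = x^2 \quad\Rightarrow\quad x(t) = \frac{x_0}{1 - x_0 t}
    (hyperbolic growth: diverges as t -> 1/x_0, a true finite-time "singularity")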

Last edited by mistergrinch; 08-26-2010 at 07:08 PM.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-19-2010 , 11:59 PM
Poll: how many of you believe there will be sentient AI by 2040? (I added ten years to the optimistic date of 2030.)

I think I do. My sentience may very well be some kind of 'fallacy', not necessarily as worth celebrating as it feels.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-20-2010 , 01:11 AM
Everyone will know when the singularity happens: There will be no more posts in RGT, only SMP. So in some sense this will be a welcome event. Oh yeah, everyone also gets a cookie.

-Zeno
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-20-2010 , 02:35 AM
I don't 'believe', well, anything really, but I'm optimistic. Knowing that the brain is not hardwired to understand things like exponential progress, I tend to assume that the correct way to lean is to expect more than what most people consider reasonable when it comes to technological advancement.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-20-2010 , 05:19 AM
Quote:
Originally Posted by RigMeARiver
I don't 'believe', well, anything really, but I'm optimistic. Knowing that the brain is not hardwired to understand things like exponential progress, I tend to assume that the correct way to lean is to expect more than what most people consider reasonable when it comes to technological advancement.
Some quotes from the early days:

Code:
1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."

1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."
The history of AI research is way, way on the side of people being over-optimistic. What's more, the history of understanding of the complexity of the brain (or anything, really) is way, way on the side of people underestimating the complexity.

Kurzweil himself is starting to lose credibility as his longer-term predictions come into scope. For example, see his 2009 predictions from The Age of Spiritual Machines (1999). The non-AI ones are quite good - he got some of them close to right, although he's still 5-10 years out on others (from Wikipedia):

* Computers are primarily portable, with people typically having at least a dozen on or around their bodies, networked together with "body LANs"
* Rotating memory (CD-ROMS, Hard disk drives) are on their way out
* The majority of text is generated with speech recognition software
* Learning at a distance, through computers, is commonplace
* Computer-controlled orthopedic devices, "walking machines" are used to help the disabled
* Translating telephones (where each caller is speaking a different language) are commonplace

* Virtually all communication is digital and encrypted (WRONG: http://xkcd.com/802/)
* The ten years leading up to 2009 have seen continuous economic expansion
* Most purchases of books, videos and music are digital downloads
* Warfare is dominated by unmanned intelligent airborne devices
* Tele-medicine is widely used, where the physician examines the patient at a distance with virtual reality


Notice a trend? Kurzweil is right (or getting close) about uninteresting, non-contentious aspects of technology (such as new drive technologies, miniaturization, involvement in everyday life, uptake of digital media, etc.), but dead wrong about things that involve overcoming complexity or the beginnings of AI.

The second group above are the predictions where he's failed. And they're the ones that involve advances in AI.

Last edited by ManaLoco; 10-20-2010 at 05:28 AM.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-21-2010 , 11:41 AM
Quote:
Originally Posted by ManaLoco
...

* Warfare is dominated by unmanned intelligent airborne devices

...

Notice a trend? Kurzweil is right (or getting close) about uninteresting, non-contentious aspects of technology, but dead wrong about things that involve overcoming complexity or the beginnings of AI.
I wouldn't be so quick to dismiss his prediction about warfare being dominated by unmanned intelligence just because an Abrams tank, F-22 Raptor, or Apache Longbow might have soldiers in it, unlike a standard UAV. The AI in those systems might be, and probably is, much more advanced... Please double check.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-21-2010 , 12:19 PM
Quote:
Originally Posted by IrOnLaW
I wouldn't be so quick to dismiss his prediction about warfare being dominated by unmanned intelligence just because an Abrams tank, F-22 Raptor, or Apache Longbow might have soldiers in it, unlike a standard UAV. The AI in those systems might be, and probably is, much more advanced... Please double check.
I'm becoming convinced that the only reason anybody takes Kurzweil seriously as a futurist is that the human race has an amazing capability to think it heard what it currently wants to think it heard. Kurzweil didn't say "unmanned intelligence", he said "unmanned intelligent AIRBORNE devices." And it's a huge stretch to call a Predator intelligent, since it's basically just remote-controlled and, afaik, doesn't change course or shoot anything without direct human instruction. It was an absolutely stupid prediction for reasons already ITT.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-25-2010 , 06:58 AM
I think it's clear we might be better off staying the smart ones vs. the machines. There are obviously lots of reasons, but these seem like the big two to me.

1. Just because we know something to be true doesn't mean humans in their present state are smart or wise enough to act on that information. Examples abound, like CFCs possibly causing global warming and holes in the ozone while we fail to change our ways. We know blowing each other up isn't wise, but wars continue. Cigarettes are bad, but look how many people smoke. The list goes on. Some computer AI system might have the answers to our future problems, but we might not have the collective willpower or intelligence to do much about it.

2. If the machines have the power, I find it hard to believe that, between the day that happens and the rest of eternity, they wouldn't decide to get rid of us pesky humans.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-25-2010 , 07:57 AM
Whoa I read the full thread in one session.

5 stars.

We need moar of these.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-27-2010 , 01:22 AM
Hey Ray, I know you've read this thread, so here's a prediction for ya: The human brain will never be simulated. Ever.

We'll have human clones long before we get even remotely close to brain simulation.

Get over it. No, no. That last Terminator movie is only fiction; you can't hook brains into computers, nor will you ever be able to. Sorry folks, but you can't escape your mortality. See: cold hard truth.

Keep fighting the good fight, though. You should write a fiction novel and try to get a movie adaptation; it'll let you do something productive with that imagination.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote

      
m