"The Singularity Is Near" by Ray Kurzweil, How Close??

08-03-2010 , 09:13 AM
Quote:
Originally Posted by Max Raker
Nobody has given any reasons why the singularity is remotely possible other than "cuz Kurzweil said". If that is all it takes for you to ramp up a probability from 0 to 1 obv people are going to laugh at you. If you think the idea has merits independent of kurzweil, you can argue them. But if not you have just elevated him to a status on par with cult leaders while making some pretty basic probability errors to boot.
It's not a reason to laugh at someone. Believing it because he said it is not unreasonable. Not everyone has the time to independently verify every prediction they hear. Would you laugh at someone for believing a report by top scientists that the chance of an asteroid impact in the next 50 years was 1 in 5? Do you think he should only believe it once he gets out his telescope and calculates the paths of the asteroid and the Earth?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
08-03-2010 , 09:54 AM
Quote:
Originally Posted by Micturition Man
And one short note: this idea that because of geometric technological advancement the future is "becoming unpredictable" seems like a fallacy to me.

The future has *always* been unpredictable.
The ability of humans to predict the future isn't always the same. In 1000 AD, things were not really progressing at all. Little changed over 100 years, and it was not particularly difficult to predict this. Now think about the cold war: nobody could even predict whether most of humanity would be alive in 10 years.

Quote:
Originally Posted by Micturition Man
Also for what it's worth futurists and science fiction writers, though they tend to miss the unforeseeable major technological breakthroughs of the future, also tend to extrapolate far too aggressively from the known technological breakthroughs of their recent past. (For example consult any 50's sci-fi writer on the state of space travel by the year 2010.)

Kurtzweil feels like he's operating in this vein to me.
Please don't compare Kurzweil's predictions to those of science fiction writers. Science fiction writers are people who have very active imaginations, so they are likely to exaggerate how far we will have progressed. And you should note that Kurzweil has devoted much of his life to researching future trends, whereas most science fiction writers have done nothing to validate their predictions.

I'm surprised at the number of people who think that the brain has some non physical properties that make it impossible to simulate. Do people really believe that the brain has some magical properties which mean that a simulated brain (scanning a brain into a computer using an imaging device, then simulating all the particles with a physics engine) would be less intelligent than a real brain?
08-03-2010 , 10:12 AM
Quote:
Originally Posted by Karganeth
I'm surprised at the number of people who think that the brain has some non physical properties that make it impossible to simulate. Do people really believe that the brain has some magical properties which mean that a simulated brain (scanning a brain into a computer using an imaging device, then simulating all the particles with a physics engine) would be less intelligent than a real brain?
I think we are going in the right direction in this matter; I think acceptance of the probable fact that the brain could be simulated is growing. The next step, I think, is to "forget" about the brain and try to do something better for the purposes we want. Maybe the "singular" computers could finally do the calculations for how to get fusion to work, stop unfavorable climate change, think out how to get genetically superior plants to get rid of famine, how to cure diseases, how space travel really can be done, and so on. Humans could live comfortable and happy lives, letting the machines do the superadvanced development things. Boring? Not necessarily. We could still chat here, maybe about some even more interesting things than now. "Have you heard the news about what the computers stu and ive have been up to lately?"

Last edited by plaaynde; 08-03-2010 at 10:31 AM.
08-03-2010 , 12:48 PM
Quote:
Originally Posted by Karganeth
I'm surprised at the number of people who think that the brain has some non physical properties that make it impossible to simulate.
I don't think anyone has suggested that the brain has non physical properties. This is SMP, not RGT.

I think what many people doubt is that consciousness is something as simple as finding the right configuration of simulated neurons. Part of what keeps me doubting the likelihood that brute force computational power will result in sentient machines is that human consciousness did not evolve from some overload of mental powers.

Speaking generally, because I consider the issue to be far from clear or proven:

Consciousness seems most likely to be an emergent property of life that evolved and provided an advantage to species that possessed it. Whatever it is, it would seem mammals have it. Maybe birds. All vertebrates possibly to some degree. Maybe cephalopods.

Do bacteria? Worms? Jellyfish? Starfish? Not so sure. Don't think so.

In any event, I don't think the prerequisite is massive amounts of computational power. I think it is something else.

Not magical or mystical. But structural and emergent.

And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).

I think what might happen is that we create these supercomputers that we think have all the tools they need to generate consciousness, but nothing comes of it.

You can have all the parts you need for an airplane, but that doesn't mean you have a machine that can fly unless and until someone figures out how to put them together properly.
08-03-2010 , 01:00 PM
Quote:
Originally Posted by jb9
And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).
Why couldn't the computers simulate an evolutionary environment, simulating it really fast, for example 1 million years of evolution in 1 hour? Maybe they can take, for example, 10^20 factors into account, maybe even do a better job than nature, which has fumbled essentially blindly? All the factors that are known to speed up evolution can be programmed. If the simulation detects something that appears to be near the consciousness of a fish, let it go on from there and have the 'singularity-power' program work on it, and do the 10^X 'evolutionary simulations'.

This is only one potential way to tackle the consciousness problem, btw.
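The simulated-evolution scheme described above can be sketched as a toy genetic algorithm. Everything here (population size, mutation rate, the bitstring fitness stand-in) is an illustrative assumption, not a model of any real brain-evolution system:

```python
import random

def evolve(pop_size=50, genome_len=32, generations=200, mut_rate=0.02):
    """Toy genetic algorithm: evolve random bitstrings toward all-ones.

    Stands in for the 'simulated evolutionary environment' idea:
    selection pressure plus mutation, run far faster than real time.
    """
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def fitness(genome):
        return sum(genome)  # crude stand-in for 'near fish-level consciousness'

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection: keep the top half
        children = []
        for parent in survivors:
            # mutation: flip each bit with probability mut_rate
            child = [bit ^ (random.random() < mut_rate) for bit in parent]
            children.append(child)
        pop = survivors + children               # elitism: survivors carry over
    return max(fitness(g) for g in pop)

best = evolve()
```

The point of the sketch is only that fitness climbs without anyone "designing" the result; whether anything consciousness-like would emerge from a vastly scaled-up version is exactly what the thread is debating.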

Last edited by plaaynde; 08-03-2010 at 01:28 PM.
08-03-2010 , 01:45 PM
Quote:
Originally Posted by plaaynde
Why couldn't the computers simulate an evolutionary environment, simulating it really fast,
I suppose something like that could be developed, but I think machine intelligence may have to follow a different developmental path than biological intelligence.

Biological beings started with urges (eat, reproduce) and rudimentary sensory input (light, temperature, chemical detection) and in the pursuit of these urges evolved intelligence and consciousness.

Machines will start with some intelligence and sensory input and need to develop urges and consciousness.

Possibly some type of simulated competitive environment would encourage those developments -- although we should probably be careful what primal urges we program them with...
08-03-2010 , 02:02 PM
Quote:
Originally Posted by jb9
I don't think anyone has suggested that the brain has non physical properties. This is SMP, not RGT.

I think what many people doubt is that consciousness is something as simple as finding the right configuration of simulated neurons. Part of what keeps me doubting the likelihood that brute force computational power will result in sentient machines is that human consciousness did not evolve from some overload of mental powers.

Speaking generally, because I consider the issue to be far from clear or proven:

Consciousness seems most likely to be an emergent property of life that evolved and provided an advantage to species that possessed it. Whatever it is, it would seem mammals have it. Maybe birds. All vertebrates possibly to some degree. Maybe cephalopods.

Do bacteria? Worms? Jellyfish? Starfish? Not so sure. Don't think so.

In any event, I don't think the prerequisite is massive amounts of computational power. I think it is something else.

Not magical or mystical. But structural and emergent.

And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).

I think what might happen is that we create these supercomputers that we think have all the tools they need to generate consciousness, but nothing comes of it.

You can have all the parts you need for an airplane, but that doesn't mean you have a machine that can fly unless and until someone figures out how to put them together properly.
I can't see how consciousness is relevant. If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human. It doesn't matter if it's conscious or not because either way it'd still make the same decisions, because all the particles in the brain would be simulated in the exact same way. Consciousness is not some magical force that would change the path of some particles.

I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
08-03-2010 , 02:13 PM
Quote:
Originally Posted by Karganeth
If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human.
Why? Do identical twins make all the same decisions?

And I'm pretty sure that scanning a brain at the level of detail that would be required to effectively replicate it may be impossible (due to the uncertainty principle if nothing else).

Also, has it been proven that cognition and/or consciousness is entirely in the brain and functions without sensory input or supporting apparatus (e.g., endocrine system)?

The 'just scan the brain' idea sounds a bit too Star Trekky to me.
08-03-2010 , 02:14 PM
Quote:
Originally Posted by Karganeth
I can't see how consciousness is relevant. If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human. It doesn't matter if it's conscious or not because either way it'd still make the same decisions, because all the particles in the brain would be simulated in the exact same way. Consciousness is not some magical force that would change the path of some particles.

I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
Then you don't understand Kurzweil's position. He thinks that such a thing WOULD be conscious. That's the definition of "strong AI": thinking/consciousness is just sufficiently sophisticated computation. The issue is not whether we could build a non-thinking automaton...it's whether we could construct an artificial intelligence.
08-03-2010 , 02:15 PM
Quote:
Originally Posted by Karganeth
I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
Weak example. Magnets are naturally occurring objects and fairly simple.

Try it this way:

You don't have to understand how a car is made to drive one, but you sure as heck need to know how a car is made to make one.
08-03-2010 , 02:35 PM
Quote:
Originally Posted by Karganeth
I can't see how consciousness is relevant. If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human. It doesn't matter if it's conscious or not because either way it'd still make the same decisions, because all the particles in the brain would be simulated in the exact same way. Consciousness is not some magical force that would change the path of some particles.

I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
This is my exact view.
I think that advances in neuroimaging and developmental biology are going to be the bottlenecks for AI research (assuming Moore's law holds). I think we can black-box brute-force it if we can collect enough data on the organization of the brain / the characteristics of neurons + glia, and we can ignore the hard problem of neuroscience (and dodge issues of complexity / emergence). This is a dramatic departure from the strong-AI paradigm of the 70s-80s, which asserted we could create consciousness without understanding the biology at all - it appears Karganeth and I intend for the opposite to happen (consciousness without requiring an understanding of emergence).

I think a lot of the controversy over this topic exists because of an unusual extension of this argument - namely, that any sort of computer (including silly computers, like systems of pipes) could run this brain-inspired simulation. It necessitates the following statement: "the computational device running the consciousness program is not special - it could be a brain, a series of tubes, etc."

I want to point out that this is not jb9's argument (although it has been brought up in the thread, see references to Searle).
08-03-2010 , 02:46 PM
Quote:
Originally Posted by jb9
Why? Do identical twins make all the same decisions?

And I'm pretty sure that scanning a brain at the level of detail that would be required to effectively replicate it may be impossible (due to the uncertainty principle if nothing else).

Also, has it been proven that cognition and/or consciousness is entirely in the brain and functions without sensory input or supporting apparatus (e.g., endocrine system)?

The 'just scan the brain' idea sounds a bit too Star Trekky to me.
I seriously doubt it's impossible to scan the brain to such detail.

With regard to the uncertainty principle - yes, this would matter if you wanted a replica which had every particle in an identical position, with an identical amount of energy, with identical momenta, etc. The uncertainty principle might be a decent explanation for why you can't have a total duplicate of a person (including his or her internal state of mind at the moment).
After that, I doubt it'd be important. You can resolve the location of neurotransmitters, the apparatuses making them, the location of synapses, etc etc etc. Considering that there is thermal / mechanical noise perturbing all of those systems constantly, yet they still work, I doubt that error on the order of Planck's constant is going to matter.
So you might not be able to make a duplicate of a person including memories and other science-fictiony things. That is unnecessary - we're mainly concerned with the connectivity of the brain.

We don't even have to start at the human-level. We could begin with some very simple models, and possibly use developmental neurobiology to bridge the gap to bigger, brainier beasts.

Edit: forgot to address endocrine system / sensory data.
How is this any different from simulating neurons? If it matters, it's just an extra technical problem with an identical solution. We'll need to image the tiny adrenal glands and simulate them too - no big deal.
Frankly, I'd expect the peripheral nervous system / sensory data, etc. to be a cakewalk to simulate relative to any structure higher than the spinal cord. Almost all of the endocrine system is controlled by the brain anyway... things like the diffusion / destruction of relevant hormones are so easy to simulate that we have decent models right now which require a comically low level of resolution. (You can get highly accurate models using just simple pharmacokinetic differential equation models.)
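The "simple pharmacokinetic differential equation models" mentioned above can be sketched in a few lines. This is a one-compartment toy with made-up illustrative numbers (initial level, rate constant), not a real hormone model:

```python
# One-compartment pharmacokinetic model: a hormone/drug level C decays by
# first-order elimination, dC/dt = -k * C, integrated with simple Euler steps.
def simulate_elimination(c0=100.0, k=0.1, dt=0.01, t_end=10.0):
    """Return the concentration trace from t = 0 to t_end."""
    c, t = c0, 0.0
    trace = [c]
    while t < t_end:
        c += dt * (-k * c)   # Euler step for first-order elimination
        t += dt
        trace.append(c)
    return trace

levels = simulate_elimination()
# The exact solution is c0 * exp(-k * t); the Euler trace tracks it closely
# at this step size, which is the poster's point: this class of model is cheap.
```

A real simulation would add secretion terms, feedback, and multiple compartments, but the structure stays this simple.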
08-03-2010 , 02:54 PM
Quote:
Originally Posted by Plancer
I seriously doubt it's impossible to scan the brain to such detail.
Obviously depends on what level of detail 'matters'. If all you need is cellular, it seems theoretically possible. Molecular? Maybe. Quantum? No. Cellular + energy states? I don't know.

P.S. And for what it is worth, I think the 'scanning the human brain/body' approach is the wrong one for creating machine intelligence. I think any machine intelligence will have its own architecture and developmental path. But this is just gut feeling, not science or philosophy.
08-03-2010 , 02:59 PM
Quote:
Originally Posted by jb9
Why? Do identical twins make all the same decisions?

And I'm pretty sure that scanning a brain at the level of detail that would be required to effectively replicate it may be impossible (due to the uncertainty principle if nothing else).
Identical twins are not identical; they are only similar. A scanned brain would be completely identical. Anyway, I think you missed my point entirely. It may indeed be impossible; it was merely hypothetical. My point was that if a computer simulation of a brain was not conscious, it would still act the same as a human would - not being conscious would not change its output.

Quote:
Try it this way:

You don't have to understand how a car is made to drive one, but you sure as heck need to know how a car is made to make one.
You missed the most important part of your analogy. You do not need to know how a car works to make one. You just need to follow the instructions on the sheet of paper showing where the parts go. Before you jump in and say "but someone needs to know how a car works to build one", I would like to point out that this is where the analogy breaks down. No one understands how consciousness works and yet we have it. We do need to understand how to make a consciousness to make one, but we don't need to understand how consciousness works.
08-03-2010 , 03:02 PM
Quote:
Originally Posted by Karganeth
We do need to understand how to make a brain to make one, but we don't need to understand how the brain works.
AI isn't about making a brain, it's making something that works like a brain.

So, yeah, knowing how a brain works is of some importance, imo.

(and we already know how to make brains... have sex.)
08-03-2010 , 03:02 PM
Quote:
Originally Posted by durkadurka33
Then you don't understand Kurzweil's position. He thinks that such a thing WOULD be conscious. That's the definition of "strong AI": thinking/consciousness is just sufficiently sophisticated computation. The issue is not whether we could build a non-thinking automaton...it's whether we could construct an artificial intelligence.
Searle's argument, when applied to this discussion, results in comically dogmatic responses.
Here is an example of how absurd it gets:

Searle addresses the following scenario with the silliest conclusion:
Imagine a robot with an electronic brain which is a perfect simulation of the physical neurons / synapses of the human nervous system. Give the robot a human-looking body which is indistinguishable from you or me, and has all the physical capacities of a human to interact with the world. When you speak with this robot, it appears identical in every way to a conscious human, and it passes every measurable test of consciousness just as a human would. If you didn't know it was a robot, you'd claim it's conscious.

Is this conscious? Searle says 'no!'
He says that even though we'd say it's conscious if we didn't know any better, because we know it has an electronic brain, we know it's not conscious, because gee whiz, I'm Searle, and I'm allowed to say that.

Read his response to the "brain simulator reply" and the "combined response" (which addresses the robot indistinguishable from humans).
Also note how he points out that strong AIers claim they can understand AI without understanding biology, while he holds that this requires biology.

He designed his argument to show that strong AIers who claim they can "generate consciousness without understanding how the brain works" were wrong, and rails against 'special dualists' who refuse to believe that the physical / chemical properties of the brain are important for consciousness.

His later arguments (post 1980) are absurd meaningless appeals to linguistic aesthetics - making statements like "you can't simulate burning!" He is a genius dinosaur who made a compelling argument against a topic we are not discussing.
08-03-2010 , 03:07 PM
Quote:
Originally Posted by ctyri
AI isn't about making a brain, it's making something that works like a brain.

So, yeah, knowing how a brain works is of some importance, imo.
Obviously, from a practical standpoint, you're right - it would likely be excellent to know how a brain works to make one.

From a necessary versus sufficient standpoint we can bypass understanding. This lets us substitute a hard, math-intensive, thinky problem with simple data collection, which biologists are good at.

Just ask the Chinese pirates manufacturing cloned networking devices whether they understand queuing theory.

Edit: I posted this before I saw Karganeth's reply, but I'm quoting it for truth:
Quote:
Originally Posted by Karganeth
...
You missed the most important part of your analogy. You do not need to know how a car works to make one....
08-03-2010 , 03:18 PM
Pointing to somebody making a car by looking at a car or its blueprints doesn't make your argument valid. Nobody is proposing making a replica of a brain by looking at one, and its blueprints do not exist.
08-03-2010 , 03:21 PM
Quote:
Originally Posted by jb9
Obviously depends on what level of detail 'matters'. If all you need is cellular, it seems theoretically possible. Molecular? Maybe. Quantum? No. Cellular + energy states? I don't know.

P.S. And for what it is worth, I think the 'scanning the human brain/body' approach is the wrong one for creating machine intelligence. I think any machine intelligence will have its own architecture and developmental path. But this is just gut feeling, not science or philosophy.
We don't need to collect the relevant super-detailed information at the same time as we perform the brain scanning.

For instance, suppose for some reason a highly detailed characterization of the sodium/potassium pump is necessary. If our brain scan could show us a distribution of this, but provide no further data, we could simply detail the pump independently and substitute the data in.

Of course, it may be that a black-box model of the pump is sufficient (that is to say, there is no error incurred from substituting a simple mathematical characterization for the original molecular-details).

So yes, I wanna pump dat black-box wit' my salt-y fluids.

I realize that a lot of this data doesn't exist yet (for instance, the 3D protein structure of the pump is unknown), but it seems inconceivable to me that we won't know it in a few decades, by hook or by crook (aka measurement or simulation).

It may sound stupid, but a fairly blind "substitute this structure with a response-curve generated from inputs" may be able to solve all of these problems without a deep understanding of the underlying mechanisms.
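The "substitute this structure with a response-curve generated from inputs" idea can be sketched concretely: measure an opaque component on a grid of inputs, then replace it with interpolation over that table. The sigmoid "black box" below is an invented stand-in, not a real pump model:

```python
import math

# Pretend this is an opaque biological component (e.g., an ion pump) whose
# internals we do not understand; we can only measure input -> output.
def black_box(stimulus):
    return 1.0 / (1.0 + math.exp(-stimulus))   # hidden sigmoid response

# Step 1: characterize it empirically on a grid of inputs.
xs = [i * 0.25 for i in range(-20, 21)]        # stimuli from -5.0 to 5.0
ys = [black_box(x) for x in xs]

# Step 2: the stand-in model is just linear interpolation over the table,
# with zero knowledge of the underlying mechanism.
def surrogate(x):
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = int((x - xs[0]) / 0.25)
    frac = (x - xs[i]) / 0.25
    return ys[i] + frac * (ys[i + 1] - ys[i])

# Worst-case disagreement between the mechanism and the response curve.
worst = max(abs(surrogate(x / 100.0) - black_box(x / 100.0))
            for x in range(-500, 501))
```

With a grid this coarse the surrogate already matches the "mechanism" to better than one part in a thousand, which is the poster's wager: a response curve may be all the model you need.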
08-03-2010 , 03:41 PM
Quote:
Originally Posted by ctyri
Pointing to somebody making a car by looking at a car or its blueprints doesn't make your argument valid. Nobody is proposing making a replica of a brain by looking at one, and its blueprints do not exist.
Actually, that's exactly what rk is proposing.
08-03-2010 , 04:56 PM
Quote:
Originally Posted by JammyDodga
Actually, that's exactly what rk is proposing.
From Kurzweil himself:

Quote:
ARTIFICIAL INTELLIGENCE & REVERSE ENGINEERING THE BRAIN

RK: I'm working on a book called How the Mind Works and How to Build One. It's mostly about the brain, but the reason I call it the mind rather than the brain is to bring in these issues of consciousness. A brain becomes a mind as it melds with its own body, and in fact, our sort of circle of identity goes beyond our body. We certainly interact with our environment. It's not a clear distinction between who we are and what we are not.

My concept of the value of reverse engineering the human brain is not that you just simulate a brain in a sort of mechanistic way, without trying to understand what is going on. David Chalmers says he doesn't think this is a fruitful direction. And I would agree that just simulating a brain... mindlessly, so to speak... that's not going to get you far enough. The purpose of reverse engineering the human brain is to understand the basic principles of intelligence.

Once you have a simulation working, you can start modifying things. Certain things may not matter. Some things may be very critical. So you learn what's important. You learn the basic principles by which the human brain handles hierarchies and variance, properties of patterns, high-level features and so on. And it appears that the neocortex has this very uniform structure. If we learn those principles, we can then engineer them and amplify them and focus on them. That's engineering.

Now, a big evolutionary innovation with Homo sapiens is that we have a bigger forehead so that we could fit a larger cortex. But it's still quite limited. And it's running on the substrate that transmits information from one part of the brain to another at a few hundred feet per second, which is a million times slower than electronics. The intra-neural connections compute at about 100 or 200 calculations per second, which is somewhere between 1,000,000 and 10,000,000 times slower than electronics. So if we can get past the substrates, we don't have to settle for a billion of these recognizers. We could have a trillion of them, or a thousand trillion. We could have many more hierarchical levels. We can design it to solve more difficult problems.

So that's really the purpose of reverse engineering the human brain. But there are other benefits. We'll get more insight into ourselves. We'll have better means for fixing the brain. Right now, we have vague psychiatric models as to what's going on in the brain of someone with bipolar disease or schizophrenia. We'll not only understand human function, we'll understand human dysfunction. We'll have better means of fixing those problems. And moreover we'll "fix" the limitation that we all have in thinking in a very small, slow, fairly messily organized substrate. Of course, we have to be careful. What might seem like just a messy arbitrary complexity that evolution put in may in fact be part of some real principle that we don't understand at first.

I'm not saying this is simple. But on the other hand, I had this debate with John Horgan, who wrote a critical article about my views in IEEE Spectrum. Horgan says that we would need trillions of lines of code to emulate the brain and that's far beyond the complexity that we've shown we can handle in our software. The most sophisticated software programs are only tens of millions of lines of code. But that's complete nonsense. Because, first of all, there's no way the brain could be that complicated. The design of the brain is in the genome. The genome — well... it's 800 million bytes. Well, back up and take a look at that. It's 3 billion base pairs, 6 billion bits, 800 million bytes before compression — but it's replete with redundancies. Lengthy sequences like Alu are repeated hundreds of thousands of times. In The Singularity is Near, I show that if you apply lossless compression, you get down to about 50 million bytes. About half of that is the brain, so that's about 25 million bytes. That's about a million lines of code. That's one derivation. You could also look at the amount of complexity that appears to be necessary to perform functional simulations of different brain regions. You actually get about the same answer, about a million lines of code. So with two very different methods, you come up with a similar order of magnitude. There just aren't trillions of lines of code — of complexity — in the design of the brain. There are trillions, or even thousands of trillions of bytes of information, but that's not complexity because there's massive redundancy.

For instance, the cerebellum, which comprises half the neurons in the brain and does some of our skill formation, has one module repeated 10 billion times with some random variation with each repetition, within certain constraints. And there are only a few genes that describe the wiring of the cerebellum, comprising a few tens of thousands of bytes of design information. As we learn skills like catching a fly ball, it gets filled up with trillions of bytes of information. But just like we don't need trillions of bytes of design information to design a trillion-byte memory system, there are massive redundancies and repetition and a certain amount of randomness in the implementation of the brain. It's a probabilistic fractal. If you look at the Mandelbrot set, it is an exquisitely complex design.

SO: So you're saying the initial intelligence that passes the Turing test is likely to be a reverse-engineered brain, as opposed to a software architecture that's based on weighted probabilistic analysis, genetic algorithms, and so forth?

RK: I would put it differently. We have a toolkit of AI techniques now. I actually don't draw that sharp a distinction between narrow AI techniques and AGI techniques. I mean, you can list them — Markov models, different forms of neural nets and genetic algorithms, logic systems, search algorithms, learning algorithms. These are techniques. Now, they go by the label AGI. We're going to add to that tool kit as we learn how the human brain does it. And then, with more and more powerful hardware, we'll be able to put together very powerful systems.

My vision is that all the different avenues — studying individual neurons, studying brain wiring, studying brain performance, simulating the brain either by doing neuron-by-neuron simulations or functional simulations — and then, all the AI work that has nothing to do with direct emulation of the brain — it's all helping. And we get from here to there through thousands of little steps like that, not through one grand leap.
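Kurzweil's byte-counting in that passage can be checked mechanically. The 50-million-byte compression figure is his estimate, not a measured value, and the bytes-per-line figure is an assumed average chosen to reproduce his "about a million lines" conclusion:

```python
# Back-of-envelope check of the genome arithmetic in the quoted interview.
base_pairs = 3_000_000_000        # human genome, approximate
bits = base_pairs * 2             # 2 bits per base (A/C/G/T)
raw_bytes = bits // 8             # 750 million bytes ("~800 million" in the text)

compressed = 50_000_000           # Kurzweil's post-lossless-compression estimate
brain_share = compressed // 2     # he attributes about half of it to the brain
bytes_per_line = 25               # assumed average bytes per line of code
lines_of_code = brain_share // bytes_per_line   # ~1 million lines
```

The raw figure comes out at 750 million bytes rather than 800 million, so the quoted numbers are internally consistent only to within rounding; the order of magnitude is what his argument rests on.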
08-03-2010 , 04:58 PM
Quote:
Originally Posted by JammyDodga
Actually, that's exactly what rk is proposing.
So just to clarify, I'm disagreeing with you on this.

The purpose of reverse engineering the human brain is not to allow us to engineer one. It's to understand it so we can create more efficient methods of AI.

I think RK believes we will create AI before we finish reverse engineering the human brain.
08-03-2010 , 06:05 PM
If anyone is interested in reading a nice critique of the singularity crowd by someone whose tech credentials are as bona fide as Kurzweil's, you should read One Half a Manifesto by Jaron Lanier and the resulting discussion.
08-03-2010 , 06:14 PM
Also, I think it's fairly illustrative of his appetite for crackpottery that Kurzweil co-authored a medical book with a licensed? homeopath.
08-03-2010 , 07:19 PM
Quote:
Originally Posted by Hardball47
This assumes the reductionist, materialist approach to the brain, and says that the brain is nothing but an organic computer. There's so much that we don't know about the brain; it boggles me why people say things like computers doing eventually what the human brain can do.
Human brains exist and are made of atoms. Unless they are magic, it seems like thinking of them as a type of computer makes sense.
