"The Singularity Is Near" by Ray Kurzweil, How Close??
08-03-2010, 09:13 AM
Quote:
Nobody has given any reasons why the singularity is remotely possible other than "cuz Kurzweil said". If that is all it takes for you to ramp up a probability from 0 to 1, obv people are going to laugh at you. If you think the idea has merits independent of Kurzweil, you can argue them. But if not, you have just elevated him to a status on par with cult leaders while making some pretty basic probability errors to boot.
08-03-2010, 09:54 AM
Quote:
Also, for what it's worth, futurists and science fiction writers, though they tend to miss the unforeseeable major technological breakthroughs of the future, also tend to extrapolate far too aggressively from the known technological breakthroughs of their recent past. (For example, consult any '50s sci-fi writer on the state of space travel by the year 2010.)
Kurzweil feels like he's operating in this vein to me.
I'm surprised at the number of people who think that the brain has some non-physical properties that make it impossible to simulate. Do people really believe that the brain has some magical properties which mean that a simulated brain (scanning a brain into a computer using an imaging device, then simulating all the particles with a physics engine) would be less intelligent than a real brain?
08-03-2010, 10:12 AM
Quote:
I'm surprised at the number of people who think that the brain has some non-physical properties that make it impossible to simulate. Do people really believe that the brain has some magical properties which mean that a simulated brain (scanning a brain into a computer using an imaging device, then simulating all the particles with a physics engine) would be less intelligent than a real brain?
08-03-2010, 12:48 PM
I think what many people doubt is that consciousness is something as simple as finding the right configuration of simulated neurons. Part of what keeps me doubting the likelihood that brute force computational power will result in sentient machines is that human consciousness did not evolve from some overload of mental powers.
Speaking generally, because I consider the issue to be far from clear or proven:
Consciousness seems most likely to be an emergent property of life that evolved and provided an advantage to species that possessed it. Whatever it is, it would seem mammals have it. Maybe birds. All vertebrates possibly to some degree. Maybe cephalopods.
Do bacteria? Worms? Jellyfish? Starfish? Not so sure. Don't think so.
In any event, I don't think the prerequisite is massive amounts of computational power. I think it is something else.
Not magical or mystical. But structural and emergent.
And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).
I think what might happen is that we create these supercomputers that we think have all the tools they need to generate consciousness, but nothing comes of it.
You can have all the parts you need for an airplane, but that doesn't mean you have a machine that can fly unless and until someone figures out how to put them together properly.
08-03-2010, 01:00 PM
Quote:
And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).
This is only one potential way to tackle the consciousness problem, btw.
08-03-2010, 01:45 PM
Biological beings started with urges (eat, reproduce) and rudimentary sensory input (light, temperature, chemical detection) and in the pursuit of these urges evolved intelligence and consciousness.
Machines will start with some intelligence and sensory input and need to develop urges and consciousness.
Possibly some type of simulated competitive environment would encourage those developments -- although we should probably be careful what primal urges we program them with...
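To make the "simulated competitive environment" idea concrete, here is a minimal sketch, assuming a toy world in which an agent's single programmed urge is a hypothetical appetite parameter and competition is plain truncation selection. Everything in it (the names, the numbers, the fitness rule) is invented for illustration, not taken from the thread.

```python
# Minimal sketch: agents with one "urge" (appetite) evolve under
# competition. All parameters are illustrative assumptions.
import random

def fitness(appetite, trials=50):
    """Score an agent: it 'eats' when food appears and its urge fires."""
    food_eaten = 0.0
    for _ in range(trials):
        food_present = random.random() < 0.5
        acts_on_urge = random.random() < appetite
        if food_present and acts_on_urge:
            food_eaten += 1.0
        elif acts_on_urge:  # acting on the urge with no food costs energy
            food_eaten -= 0.2
    return food_eaten

def evolve(pop_size=30, generations=40, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # competition: top half survive
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, mutation)))
            for _ in range(pop_size)
        ]
    return sorted(population, key=fitness, reverse=True)[0]

if __name__ == "__main__":
    print("evolved appetite:", round(evolve(), 2))
```

Selection here rewards agents whose urge fires when food is present, so the "primal urge" strengthens over generations without anyone hand-tuning it, which is the point the post is gesturing at.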
08-03-2010, 02:02 PM
Quote:
I don't think anyone has suggested that the brain has non physical properties. This is SMP, not RGT.
I think what many people doubt is that consciousness is something as simple as finding the right configuration of simulated neurons. Part of what keeps me doubting the likelihood that brute force computational power will result in sentient machines is that human consciousness did not evolve from some overload of mental powers.
Speaking generally, because I consider the issue to be far from clear or proven:
Consciousness seems most likely to be an emergent property of life that evolved and provided an advantage to species that possessed it. Whatever it is, it would seem mammals have it. Maybe birds. All vertebrates possibly to some degree. Maybe cephalopods.
Do bacteria? Worms? Jellyfish? Starfish? Not so sure. Don't think so.
In any event, I don't think the prerequisite is massive amounts of computational power. I think it is something else.
Not magical or mystical. But structural and emergent.
And that's what we don't understand. And the doubts about our ability to create an artificial consciousness are largely (1) we don't understand what it is we are trying to create and (2) there seems to be no force or motive for it to arrive on its own (e.g., if computers start programming themselves WHAT would cause them to develop consciousness? -- they are not in an evolutionary environment).
I think what might happen is that we create these supercomputers that we think have all the tools they need to generate consciousness, but nothing comes of it.
You can have all the parts you need for an airplane, but that doesn't mean you have a machine that can fly unless and until someone figures out how to put them together properly.
I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
08-03-2010, 02:13 PM
And I'm pretty sure that scanning a brain at the level of detail that would be required to effectively replicate it may be impossible (due to the uncertainty principle if nothing else).
Also, has it been proven that cognition and/or consciousness is entirely in the brain and functions without sensory input or supporting apparatus (e.g., endocrine system)?
The 'just scan the brain' idea sounds a bit too Star Trekky to me.
08-03-2010, 02:14 PM
Quote:
I can't see how consciousness is relevant. If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human. It doesn't matter if it's conscious or not because either way it'd still make the same decisions, because all the particles in the brain would be simulated in the exact same way. Consciousness is not some magical force that would change the path of some particles.
I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
08-03-2010, 02:15 PM
Try it this way:
You don't have to understand how a car is made to drive one, but you sure as heck need to know how a car is made to make one.
08-03-2010, 02:35 PM
Quote:
I can't see how consciousness is relevant. If we create an AI by using a scanned brain of an intelligent human, it would make all the same decisions as that human. It doesn't matter if it's conscious or not because either way it'd still make the same decisions, because all the particles in the brain would be simulated in the exact same way. Consciousness is not some magical force that would change the path of some particles.
I don't need to understand electromagnetism to make a magnet. I don't believe I need to understand what consciousness is or what creates it in order to create an intelligent AI.
I think that advances in neuroimaging and developmental biology are going to be the bottlenecks for AI research (assuming Moore's law holds). I think we can black-box brute-force it if we can collect enough data on the organization of the brain / characteristics of neurons + glia, and we can ignore the hard problem of neuroscience (and dodge issues of complexity / emergence). This is a dramatic departure from the Strong AIers' paradigm of the 70s-80s, who asserted we could create consciousness without understanding the biology at all - it appears Karganeth and I intend for the opposite to happen (consciousness without requiring an understanding of emergence).
I think a lot of the controversy over this topic exists because of an unusual extension of this argument - namely, that any sort of computer (including silly computers, like systems of pipes) could run this brain-inspired simulation. It necessitates the following statement: "the computational device running the consciousness program is not special - it could be a brain, a series of tubes, etc."
I want to point out that this is not jb9's argument (although it has been brought up in the thread, see references to Searle).
08-03-2010, 02:46 PM
Quote:
Why? Do identical twins make all the same decisions?
And I'm pretty sure that scanning a brain at the level of detail that would be required to effectively replicate it may be impossible (due to the uncertainty principle if nothing else).
Also, has it been proven that cognition and/or consciousness is entirely in the brain and functions without sensory input or supporting apparatus (e.g., endocrine system)?
The 'just scan the brain' idea sounds a bit too Star Trekky to me.
With regard to the uncertainty principle - yes, this would matter if you wanted a replica which had every particle in an identical position, with an identical amount of energy, with identical momenta, etc. The uncertainty principle might be a decent explanation for why you can't have a total duplicate of a person (including his or her internal state of mind at the moment).
After that, I doubt it'd be important. You can resolve the location of neurotransmitters, the apparatuses making them, the location of synapses, etc etc etc. Considering that there is thermal / mechanical noise perturbing all of those systems constantly, yet they still work, I doubt that error on the order of Planck's constant is going to matter.
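A back-of-envelope check of that claim, assuming we only need to localize molecules to roughly a nanometre: compare the Heisenberg velocity uncertainty of a ~100 kDa protein with its thermal velocity at body temperature. The molecule size, localization scale, and temperature are rough illustrative choices, not data from the thread.

```python
# Rough comparison: quantum (Heisenberg) velocity uncertainty vs
# thermal velocity for a ~100 kDa protein localized to ~1 nm.
import math

hbar = 1.054_571_8e-34           # reduced Planck constant, J*s
k_B = 1.380_649e-23              # Boltzmann constant, J/K
mass = 100_000 * 1.660_54e-27    # ~100 kDa protein, in kg
dx = 1e-9                        # localize position to 1 nanometre

dv_quantum = hbar / (2 * mass * dx)      # from dx * dp >= hbar / 2
v_thermal = math.sqrt(k_B * 310 / mass)  # thermal speed scale at 310 K

print(f"quantum velocity uncertainty: {dv_quantum:.1e} m/s")  # ~3e-4 m/s
print(f"thermal velocity:             {v_thermal:.1e} m/s")   # ~5 m/s
# Thermal motion dwarfs the quantum bound by ~4 orders of magnitude,
# consistent with the poster's point that thermal/mechanical noise,
# not the uncertainty principle, sets the practical limit.
```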
You might be able to make a duplicate of a person including memories and other science fictiony things. This is unnecessary - we're mainly concerned with the connectivity of the brain.
We don't even have to start at the human-level. We could begin with some very simple models, and possibly use developmental neurobiology to bridge the gap to bigger, brainier beasts.
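As one example of what a "very simple model" might mean here, a minimal sketch of a leaky integrate-and-fire neuron, the textbook simplification that throws away all molecular detail; the parameter values below are illustrative assumptions, not measurements.

```python
# Minimal leaky integrate-and-fire neuron: integrate the membrane
# voltage toward the input, spike and reset at a threshold.
def simulate_lif(input_current=1.5, dt=0.1, steps=1000,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + I) / tau."""
    v = v_rest
    spike_times = []
    for step in range(steps):
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    spikes = simulate_lif()
    print(f"{len(spikes)} spikes in 100 simulated time units")
```

Wiring many of these together with a connectivity matrix is the standard starting point before adding biological detail back in.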
Edit: forgot to address endocrine system / sensory data.
How is this any different from simulating neurons? If it matters, it's just an extra technical problem with an identical solution. We'll need to image the tiny adrenal glands and simulate them too - no big deal.
Frankly, I'd expect the peripheral nervous system / sensory data, etc. to be a cakewalk to simulate relative to any structure higher than the spinal cord. Almost all of the endocrine system is in the brain anyway... things like the diffusion / destruction of relevant hormones are so easy to simulate that we have decent models right now which require a comically low level of resolution (you can get highly accurate models using just simple pharmacokinetic differential equations).
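For what a "simple pharmacokinetic differential equation model" can look like, here is a minimal sketch of a one-compartment model, assuming a hormone secreted at a constant rate and cleared by first-order elimination; the rate constants are made up for illustration.

```python
# One-compartment model: dC/dt = secretion_rate - k_el * C.
# The concentration relaxes toward secretion_rate / k_el.
def simulate_hormone(secretion_rate=1.0, k_el=0.1, c0=0.0,
                     dt=0.01, t_end=100.0):
    """Euler-integrate the hormone concentration over time."""
    c, t = c0, 0.0
    trajectory = []
    while t < t_end:
        c += dt * (secretion_rate - k_el * c)
        t += dt
        trajectory.append((t, c))
    return trajectory

if __name__ == "__main__":
    final_t, final_c = simulate_hormone()[-1]
    print(f"steady-state concentration ~ {final_c:.2f} (analytic: 10.00)")
```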
08-03-2010, 02:54 PM
P.S. And for what it is worth, I think the 'scanning the human brain/body' approach is the wrong one for creating machine intelligence. I think any machine intelligence will have its own architecture and developmental path. But this is just gut feeling, not science or philosophy.
08-03-2010, 02:59 PM
Quote:
Try it this way:
You don't have to understand how a car is made to drive one, but you sure as heck need to know how a car is made to make one.
08-03-2010, 03:02 PM
So, yeah, knowing how a brain works is of some importance, imo.
(and we already know how to make brains... have sex.)
08-03-2010, 03:02 PM
Quote:
Then you don't understand Kurzweil's position. He thinks that such a thing WOULD be conscious. That's the definition of "strong AI": thinking/consciousness is just sufficiently sophisticated computation. The issue is not whether we could build a non-thinking automaton...it's whether we could construct an artificial intelligence.
Here is an example of how absurd it gets:
Searle addresses the following scenario with the silliest conclusion:
Imagine a robot with an electronic brain which is a perfect simulation of the physical neurons / synapses of the human nervous system. Give the robot a human-looking body which is indistinguishable from you or me, and has all the physical capacities of a human to interact with the world. When you speak with this robot, it appears identical in every way to a conscious human - and it passes every measurable test of consciousness exactly as a human would. If you didn't know it was a robot, you'd claim it's conscious.
Is this conscious? Searle says 'no!'
He says that even though we'd say it's conscious if we didn't know any better, because we know it has an electronic brain, we know it's not conscious, because gee whiz, I'm Searle, and I'm allowed to say that.
Read his response to the "brain simulator reply" and the "combined response" (which addresses the robot indistinguishable from humans).
Also note how he points out that Strong AIers claim they can understand AI without understanding biology, and argues that this does require biology.
He designed his argument to show that strong AIers who claim they can "generate consciousness without understanding how the brain works" were wrong, and rails against 'special dualists' who refuse to believe that the physical / chemical properties of the brain are important for consciousness.
His later arguments (post 1980) are absurd meaningless appeals to linguistic aesthetics - making statements like "you can't simulate burning!" He is a genius dinosaur who made a compelling argument against a topic we are not discussing.
08-03-2010, 03:07 PM
From a necessary versus sufficient standpoint we can bypass understanding. This lets us replace a hard, math-intensive, thinky problem with simple data collection, which biologists are good at.
Just ask the Chinese pirates manufacturing cloned networking devices whether they understand queuing theory.
Edit: I posted this before I saw Karganeth's reply, but I'm quoting it for truth:
08-03-2010, 03:18 PM
Pointing to somebody making a car by looking at a car or its blueprints doesn't make your argument valid. Nobody is proposing making a replica of a brain by looking at one, and its blueprints do not exist.
08-03-2010, 03:21 PM
Quote:
Obviously depends on what level of detail 'matters'. If all you need is cellular, it seems theoretically possible. Molecular? Maybe. Quantum? No. Cellular + energy states? I don't know.
P.S. And for what it is worth, I think the 'scanning the human brain/body' approach is the wrong one for creating machine intelligence. I think any machine intelligence will have its own architecture and developmental path. But this is just gut feeling, not science or philosophy.
For instance, suppose for some reason a highly detailed characterization of the sodium/potassium pump is necessary. If our brain scan could show us a distribution of this, but provide no further data, we could simply detail the pump independently and substitute the data in.
Of course, it may be that a black-box model of the pump is sufficient (that is to say, there is no error incurred from substituting a simple mathematical characterization for the original molecular details).
So yes, I wanna pump dat black-box wit' my salt-y fluids.
I realize that a lot of this data doesn't exist yet (for instance, the 3D protein structure of the pump is unknown), but it seems inconceivable to me that we won't know it in a few decades, by hook or by crook (aka measurement or simulation).
It may sound stupid, but a fairly blind "substitute this structure with a response-curve generated from inputs" may be able to solve all of these problems without a deep understanding of the underlying mechanisms.
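As a sketch of that "substitute this structure with a response-curve generated from inputs" idea: probe a stand-in "detailed" pump model (here just a synthetic function, since no real Na+/K+ pump data is used), then fit a simple Hill curve as its cheap black-box replacement. The Hill form and every constant are assumptions for illustration.

```python
# Black-box substitution sketch: sample a detailed model's
# input/output behavior, then fit a simple response curve to it.
import numpy as np
from scipy.optimize import curve_fit

def detailed_pump(substrate):
    """Stand-in for the expensive molecular-detail model we'd replace."""
    return 2.0 * substrate**2 / (1.5**2 + substrate**2)

def hill_curve(s, vmax, km, n):
    """The cheap black-box replacement: a Hill response curve."""
    return vmax * s**n / (km**n + s**n)

# "Generate a response curve from inputs": probe the detailed model...
inputs = np.linspace(0.01, 10.0, 50)
outputs = detailed_pump(inputs)

# ...and fit the black box to the probed data.
params, _ = curve_fit(hill_curve, inputs, outputs, p0=[1.0, 1.0, 1.0])
vmax, km, n = params
print(f"fitted: vmax={vmax:.2f}, km={km:.2f}, n={n:.2f}")  # recovers 2.0, 1.5, 2.0
```

The fitted curve reproduces the stand-in model exactly here because the true behavior lies inside the fitted family; the open empirical question is whether real pump behavior is similarly well captured.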
08-03-2010, 03:41 PM
Actually, that's exactly what rk is proposing.
08-03-2010, 04:56 PM
From Kurzweil Himself:
Quote:
ARTIFICIAL INTELLIGENCE & REVERSE ENGINEERING THE BRAIN
RK: I'm working on a book called How the Mind Works and How to Build One. It's mostly about the brain, but the reason I call it the mind rather than the brain is to bring in these issues of consciousness. A brain becomes a mind as it melds with its own body, and in fact, our sort of circle of identity goes beyond our body. We certainly interact with our environment. It's not a clear distinction between who we are and what we are not.
My concept of the value of reverse engineering the human brain is not that you just simulate a brain in a sort of mechanistic way, without trying to understand what is going on. David Chalmers says he doesn't think this is a fruitful direction. And I would agree that just simulating a brain... mindlessly, so to speak... that's not going to get you far enough. The purpose of reverse engineering the human brain is to understand the basic principles of intelligence.
Once you have a simulation working, you can start modifying things. Certain things may not matter. Some things may be very critical. So you learn what's important. You learn the basic principles by which the human brain handles hierarchies and variance, properties of patterns, high-level features and so on. And it appears that the neocortex has this very uniform structure. If we learn those principles, we can then engineer them and amplify them and focus on them. That's engineering.
Now, a big evolutionary innovation with Homo sapiens is that we have a bigger forehead so that we could fit a larger cortex. But it's still quite limited. And it's running on the substrate that transmits information from one part of the brain to another at a few hundred feet per second, which is a million times slower than electronics. The intra-neural connections compute at about 100 or 200 calculations per second, which is somewhere between 1,000,000 to 10,000,000 times slower than electronics. So if we can get past the substrates, we don't have to settle for a billion of these recognizers. We could have a trillion of them, or a thousand trillion. We could have many more hierarchical levels. We can design it to solve more difficult problems.
So that's really the purpose of reverse engineering the human brain. But there are other benefits. We'll get more insight into ourselves. We'll have better means for fixing the brain. Right now, we have vague psychiatric models as to what's going on in the brain of someone with bipolar disease or schizophrenia. We'll not only understand human function, we'll understand human dysfunction. We'll have better means of fixing those problems. And moreover we'll "fix" the limitation that we all have in thinking in a very small, slow, fairly-messily organized substrate. Of course, we have to be careful. What might seem like just a messy arbitrary complexity that evolution put in may in fact be part of some real principle that we don't understand at first.
I'm not saying this is simple. But on the other hand, I had this debate with John Horgan, who wrote a critical article about my views in IEEE Spectrum. Horgan says that we would need trillions of lines of code to emulate the brain and that's far beyond the complexity that we've shown we can handle in our software. The most sophisticated software programs are only tens of millions of lines of code. But that's complete nonsense. Because, first of all, there's no way the brain could be that complicated. The design of the brain is in the genome. The genome — well... it's 800 million bytes. Well, back up and take a look at that. It's 3 billion base pairs, 6 billion bits, 800 million bytes before compression — but it's replete with redundancies. Lengthy sequences like ALU are repeated hundreds of thousands of times. In The Singularity is Near, I show that if you apply lossless compression, you get down to about 50 million bytes. About half of that is the brain, so that's about 25 million bytes. That's about a million lines of code. That's one derivation. You could also look at the amount of complexity that appears to be necessary to perform functional simulations of different brain regions. You actually get about the same answer, about a million lines of code. So with two very different methods, you come up with a similar order of magnitude. There just isn't trillions of lines of code — of complexity — in the design of the brain. There is trillions, or even thousands of trillions of bytes of information, but that's not complexity because there's massive redundancy.
For instance, the cerebellum, which comprises half the neurons in the brain and does some of our skill formation, has one module repeated 10 billion times with some random variation with each repetition within certain constraints. And there are only a few genes that describe the wiring of the cerebellum that comprise a few tens of thousands of bytes of design information. As we learn skills like catching a fly ball — then it gets filled up with trillions of bytes of information. But just like we don't need trillions of bytes of design information to design a trillion-byte memory system, there are massive redundancies and repetition and a certain amount of randomness in the implementation of the brain. It's a probabilistic fractal. If you look at the Mandelbrot set, it is an exquisitely complex design.
SO: So you're saying the initial intelligence that passes the Turing test is likely to be a reverse-engineered brain, as opposed to a software architecture that's based on weighted probabilistic analysis, genetic algorithms, and so forth?
RK: I would put it differently. We have a toolkit of AI techniques now. I actually don't draw that sharp a distinction between narrow AI techniques and AGI techniques. I mean, you can list them — Markov models, different forms of neural nets and genetic algorithms, logic systems, search algorithms, learning algorithms. These are techniques. Now, they go by the label AGI. We're going to add to that tool kit as we learn how the human brain does it. And then, with more and more powerful hardware, we'll be able to put together very powerful systems.
My vision is that all the different avenues — studying individual neurons, studying brain wiring, studying brain performance, simulating the brain either by doing neuron-by-neuron simulations or functional simulations — and then, all the AI work that has nothing to do with direct emulation of the brain — it's all helping. And we get from here to there through thousands of little steps like that, not through one grand leap.
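For readers who want to check the byte arithmetic in the passage above, a short worked version follows. The ~25 bytes-per-line figure used in the last step is an assumption implied by, not stated in, the quote.

```python
# Worked version of Kurzweil's genome-to-lines-of-code arithmetic.
base_pairs = 3_000_000_000
bits = base_pairs * 2            # 2 bits encode one of 4 bases -> 6e9 bits
raw_bytes = bits // 8            # 750 million bytes (he rounds to 800M)
compressed = 50_000_000          # his post-lossless-compression figure
brain_share = compressed // 2    # "about half of that is the brain" -> 25M bytes
lines_of_code = brain_share // 25  # ~1 million lines at ~25 bytes per line

print(f"{bits:,} bits, {raw_bytes:,} raw bytes")
print(f"{brain_share:,} bytes for the brain ~ {lines_of_code:,} lines of code")
```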
08-03-2010, 04:58 PM
So just to clarify, I'm disagreeing with you on this.
The purpose of reverse engineering the human brain is not to allow us to engineer one. It's to understand it so we can create more efficient methods of AI.
I think RK believes we will create AI before we finish reverse engineering the human brain.
08-03-2010, 06:05 PM
If anyone is interested in reading a nice critique of the singularity crowd by someone whose tech credentials are as bona fide as Kurzweil's, you should read One Half a Manifesto by Jaron Lanier and the resulting discussion.
08-03-2010, 06:14 PM
Also, I think it's fairly illustrative of his appetite for crackpottery that Kurzweil co-authored a medical book with a (licensed?) homeopath.
08-03-2010, 07:19 PM
Quote:
This assumes the reductionist, materialist approach to the brain, and says that the brain is nothing but an organic computer. There's so much that we don't know about the brain; it boggles me that people claim computers will eventually do what the human brain can do.